* bpf VM_FLUSH_RESET_PERMS breaks sparc64 boot
@ 2019-05-13 14:01 ` Meelis Roos
  0 siblings, 0 replies; 8+ messages in thread
From: Meelis Roos @ 2019-05-13 14:01 UTC (permalink / raw)
  To: Rick Edgecombe, sparclinux, netdev, bpf

I tested yesterday's 5.2 devel git and it failed to boot on my Sun Fire V445
(4x UltraSparc III). Init is started and then it hangs:

[   38.414436] Run /sbin/init as init process
[   38.530711] random: fast init done
[   39.580678] systemd[1]: Inserted module 'autofs4'
[   39.721577] systemd[1]: systemd 241 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
[   40.028068] systemd[1]: Detected architecture sparc64.

Welcome to Debian GNU/Linux 10 (buster)!

[   40.168713] systemd[1]: Set hostname to <v445>.
[   61.318034] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[   61.403039] rcu:     1-...!: (0 ticks this GP) idle=602/1/0x4000000000000000 softirq=85/85 fqs=1
[   61.526780] rcu:     (detected by 3, t=5252 jiffies, g=-967, q=228)
[   61.613037]   CPU[  1]: TSTATE[0000000080001602] TPC[000000000043f2b8] TNPC[000000000043f2bc] TASK[systemd-fstab-g:90]
[   61.766828]              TPC[smp_synchronize_tick_client+0x18/0x180] O7[__do_munmap+0x204/0x3e0] I7[xcall_sync_tick+0x1c/0x2c] RPC[page_evictable+0x4/0x60]
[   61.966807] rcu: rcu_sched kthread starved for 5250 jiffies! g-967 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=2
[   62.113058] rcu: RCU grace-period kthread stack dump:
[   62.185558] rcu_sched       I    0    10      2 0x06000000
[   62.264312] Call Trace:
[   62.299316]  [000000000092a1fc] schedule+0x1c/0x80
[   62.368071]  [000000000092d3fc] schedule_timeout+0x13c/0x280
[   62.449328]  [00000000004b6c64] rcu_gp_kthread+0x4c4/0xa40
[   62.528077]  [000000000047e95c] kthread+0xfc/0x120
[   62.596833]  [00000000004060a4] ret_from_fork+0x1c/0x2c
[   62.671831]  [0000000000000000]           (null)

5.1.0 worked fine. I bisected it to the following commit:

d53d2f78ceadba081fc7785570798c3c8d50a718 is the first bad commit
commit d53d2f78ceadba081fc7785570798c3c8d50a718
Author: Rick Edgecombe <rick.p.edgecombe@intel.com>
Date:   Thu Apr 25 17:11:38 2019 -0700

     bpf: Use vmalloc special flag
     
     Use new flag VM_FLUSH_RESET_PERMS for handling freeing of special
     permissioned memory in vmalloc and remove places where memory was set RW
     before freeing which is no longer needed. Don't track if the memory is RO
     anymore because it is now tracked in vmalloc.
     
     Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
     Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
     Cc: <akpm@linux-foundation.org>
     Cc: <ard.biesheuvel@linaro.org>
     Cc: <deneen.t.dock@intel.com>
     Cc: <kernel-hardening@lists.openwall.com>
     Cc: <kristen@linux.intel.com>
     Cc: <linux_dti@icloud.com>
     Cc: <will.deacon@arm.com>
     Cc: Alexei Starovoitov <ast@kernel.org>
     Cc: Andy Lutomirski <luto@kernel.org>
     Cc: Borislav Petkov <bp@alien8.de>
     Cc: Daniel Borkmann <daniel@iogearbox.net>
     Cc: Dave Hansen <dave.hansen@linux.intel.com>
     Cc: H. Peter Anvin <hpa@zytor.com>
     Cc: Linus Torvalds <torvalds@linux-foundation.org>
     Cc: Nadav Amit <nadav.amit@gmail.com>
     Cc: Rik van Riel <riel@surriel.com>
     Cc: Thomas Gleixner <tglx@linutronix.de>
     Link: https://lkml.kernel.org/r/20190426001143.4983-19-namit@vmware.com
     Signed-off-by: Ingo Molnar <mingo@kernel.org>

:040000 040000 58066de53107eab0705398b5d0c407424c138a86 7a1345d43c4cacee60b9135899b775ecdb54ea7e M      include
:040000 040000 d02692cf57a359056b34e636d0f102d37de5b264 81c4c2c6408b68eb555673bd3f0bc3071db1f7ed M      kernel

-- 
Meelis Roos <mroos@linux.ee>


* Re: bpf VM_FLUSH_RESET_PERMS breaks sparc64 boot
  2019-05-13 14:01 ` Meelis Roos
@ 2019-05-13 17:01   ` Edgecombe, Rick P
  -1 siblings, 0 replies; 8+ messages in thread
From: Edgecombe, Rick P @ 2019-05-13 17:01 UTC (permalink / raw)
  To: netdev, mroos, sparclinux, bpf; +Cc: namit

On Mon, 2019-05-13 at 17:01 +0300, Meelis Roos wrote:
> I tested yesterday's 5.2 devel git and it failed to boot on my Sun Fire V445
> (4x UltraSparc III). Init is started and then it hangs:
> 
> [   38.414436] Run /sbin/init as init process
> [   38.530711] random: fast init done
> [   39.580678] systemd[1]: Inserted module 'autofs4'
> [   39.721577] systemd[1]: systemd 241 running in system mode. (+PAM +AUDIT
> +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS
> +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-
> hierarchy=hybrid)
> [   40.028068] systemd[1]: Detected architecture sparc64.
> 
> Welcome to Debian GNU/Linux 10 (buster)!
> 
> [   40.168713] systemd[1]: Set hostname to <v445>.
> [   61.318034] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
> [   61.403039] rcu:     1-...!: (0 ticks this GP)
> idle=602/1/0x4000000000000000 softirq=85/85 fqs=1
> [   61.526780] rcu:     (detected by 3, t=5252 jiffies, g=-967, q=228)
> [   61.613037]   CPU[  1]: TSTATE[0000000080001602] TPC[000000000043f2b8]
> TNPC[000000000043f2bc] TASK[systemd-fstab-g:90]
> [   61.766828]              TPC[smp_synchronize_tick_client+0x18/0x180]
> O7[__do_munmap+0x204/0x3e0] I7[xcall_sync_tick+0x1c/0x2c]
> RPC[page_evictable+0x4/0x60]
> [   61.966807] rcu: rcu_sched kthread starved for 5250 jiffies! g-967 f0x0
> RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=2
> [   62.113058] rcu: RCU grace-period kthread stack dump:
> [   62.185558] rcu_sched       I    0    10      2 0x06000000
> [   62.264312] Call Trace:
> [   62.299316]  [000000000092a1fc] schedule+0x1c/0x80
> [   62.368071]  [000000000092d3fc] schedule_timeout+0x13c/0x280
> [   62.449328]  [00000000004b6c64] rcu_gp_kthread+0x4c4/0xa40
> [   62.528077]  [000000000047e95c] kthread+0xfc/0x120
> [   62.596833]  [00000000004060a4] ret_from_fork+0x1c/0x2c
> [   62.671831]  [0000000000000000]           (null)
> 
> 5.1.0 worked fine. I bisected it to the following commit:
> 
> d53d2f78ceadba081fc7785570798c3c8d50a718 is the first bad commit
> commit d53d2f78ceadba081fc7785570798c3c8d50a718
> Author: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Date:   Thu Apr 25 17:11:38 2019 -0700
> 
>      bpf: Use vmalloc special flag
>      
>      Use new flag VM_FLUSH_RESET_PERMS for handling freeing of special
>      permissioned memory in vmalloc and remove places where memory was set RW
>      before freeing which is no longer needed. Don't track if the memory is RO
>      anymore because it is now tracked in vmalloc.
>      
>      Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
>      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>      Cc: <akpm@linux-foundation.org>
>      Cc: <ard.biesheuvel@linaro.org>
>      Cc: <deneen.t.dock@intel.com>
>      Cc: <kernel-hardening@lists.openwall.com>
>      Cc: <kristen@linux.intel.com>
>      Cc: <linux_dti@icloud.com>
>      Cc: <will.deacon@arm.com>
>      Cc: Alexei Starovoitov <ast@kernel.org>
>      Cc: Andy Lutomirski <luto@kernel.org>
>      Cc: Borislav Petkov <bp@alien8.de>
>      Cc: Daniel Borkmann <daniel@iogearbox.net>
>      Cc: Dave Hansen <dave.hansen@linux.intel.com>
>      Cc: H. Peter Anvin <hpa@zytor.com>
>      Cc: Linus Torvalds <torvalds@linux-foundation.org>
>      Cc: Nadav Amit <nadav.amit@gmail.com>
>      Cc: Rik van Riel <riel@surriel.com>
>      Cc: Thomas Gleixner <tglx@linutronix.de>
>      Link: https://lkml.kernel.org/r/20190426001143.4983-19-namit@vmware.com
>      Signed-off-by: Ingo Molnar <mingo@kernel.org>
> 
> :040000 040000 58066de53107eab0705398b5d0c407424c138a86
> 7a1345d43c4cacee60b9135899b775ecdb54ea7e M      include
> :040000 040000 d02692cf57a359056b34e636d0f102d37de5b264
> 81c4c2c6408b68eb555673bd3f0bc3071db1f7ed M      kernel
> 
Thanks, I'll see if I can reproduce.

Rick


* Re: bpf VM_FLUSH_RESET_PERMS breaks sparc64 boot
  2019-05-13 17:01   ` Edgecombe, Rick P
@ 2019-05-14  1:15     ` Edgecombe, Rick P
  -1 siblings, 0 replies; 8+ messages in thread
From: Edgecombe, Rick P @ 2019-05-14  1:15 UTC (permalink / raw)
  To: netdev, mroos, sparclinux, bpf; +Cc: davem, peterz, namit

On Mon, 2019-05-13 at 10:01 -0700, Rick Edgecombe wrote:
> On Mon, 2019-05-13 at 17:01 +0300, Meelis Roos wrote:
> > I tested yesterday's 5.2 devel git and it failed to boot on my Sun Fire V445
> > (4x UltraSparc III). Init is started and then it hangs:
> > 
> > [   38.414436] Run /sbin/init as init process
> > [   38.530711] random: fast init done
> > [   39.580678] systemd[1]: Inserted module 'autofs4'
> > [   39.721577] systemd[1]: systemd 241 running in system mode. (+PAM +AUDIT
> > +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT
> > +GNUTLS
> > +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-
> > hierarchy=hybrid)
> > [   40.028068] systemd[1]: Detected architecture sparc64.
> > 
> > Welcome to Debian GNU/Linux 10 (buster)!
> > 
> > [   40.168713] systemd[1]: Set hostname to <v445>.
> > [   61.318034] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
> > [   61.403039] rcu:     1-...!: (0 ticks this GP)
> > idle=602/1/0x4000000000000000 softirq=85/85 fqs=1
> > [   61.526780] rcu:     (detected by 3, t=5252 jiffies, g=-967, q=228)
> > [   61.613037]   CPU[  1]: TSTATE[0000000080001602] TPC[000000000043f2b8]
> > TNPC[000000000043f2bc] TASK[systemd-fstab-g:90]
> > [   61.766828]              TPC[smp_synchronize_tick_client+0x18/0x180]
> > O7[__do_munmap+0x204/0x3e0] I7[xcall_sync_tick+0x1c/0x2c]
> > RPC[page_evictable+0x4/0x60]
> > [   61.966807] rcu: rcu_sched kthread starved for 5250 jiffies! g-967 f0x0
> > RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=2
> > [   62.113058] rcu: RCU grace-period kthread stack dump:
> > [   62.185558] rcu_sched       I    0    10      2 0x06000000
> > [   62.264312] Call Trace:
> > [   62.299316]  [000000000092a1fc] schedule+0x1c/0x80
> > [   62.368071]  [000000000092d3fc] schedule_timeout+0x13c/0x280
> > [   62.449328]  [00000000004b6c64] rcu_gp_kthread+0x4c4/0xa40
> > [   62.528077]  [000000000047e95c] kthread+0xfc/0x120
> > [   62.596833]  [00000000004060a4] ret_from_fork+0x1c/0x2c
> > [   62.671831]  [0000000000000000]           (null)
> > 
> > 5.1.0 worked fine. I bisected it to the following commit:
> > 
> > d53d2f78ceadba081fc7785570798c3c8d50a718 is the first bad commit
> > commit d53d2f78ceadba081fc7785570798c3c8d50a718
> > Author: Rick Edgecombe <rick.p.edgecombe@intel.com>
> > Date:   Thu Apr 25 17:11:38 2019 -0700
> > 
> >      bpf: Use vmalloc special flag
> >      
> >      Use new flag VM_FLUSH_RESET_PERMS for handling freeing of special
> >      permissioned memory in vmalloc and remove places where memory was set
> > RW
> >      before freeing which is no longer needed. Don't track if the memory is
> > RO
> >      anymore because it is now tracked in vmalloc.
> >      
> >      Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> >      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> >      Cc: <akpm@linux-foundation.org>
> >      Cc: <ard.biesheuvel@linaro.org>
> >      Cc: <deneen.t.dock@intel.com>
> >      Cc: <kernel-hardening@lists.openwall.com>
> >      Cc: <kristen@linux.intel.com>
> >      Cc: <linux_dti@icloud.com>
> >      Cc: <will.deacon@arm.com>
> >      Cc: Alexei Starovoitov <ast@kernel.org>
> >      Cc: Andy Lutomirski <luto@kernel.org>
> >      Cc: Borislav Petkov <bp@alien8.de>
> >      Cc: Daniel Borkmann <daniel@iogearbox.net>
> >      Cc: Dave Hansen <dave.hansen@linux.intel.com>
> >      Cc: H. Peter Anvin <hpa@zytor.com>
> >      Cc: Linus Torvalds <torvalds@linux-foundation.org>
> >      Cc: Nadav Amit <nadav.amit@gmail.com>
> >      Cc: Rik van Riel <riel@surriel.com>
> >      Cc: Thomas Gleixner <tglx@linutronix.de>
> >      Link: https://lkml.kernel.org/r/20190426001143.4983-19-namit@vmware.com
> >      Signed-off-by: Ingo Molnar <mingo@kernel.org>
> > 
> > :040000 040000 58066de53107eab0705398b5d0c407424c138a86
> > 7a1345d43c4cacee60b9135899b775ecdb54ea7e M      include
> > :040000 040000 d02692cf57a359056b34e636d0f102d37de5b264
> > 81c4c2c6408b68eb555673bd3f0bc3071db1f7ed M      kernel
> > 
> 
> Thanks, I'll see if I can reproduce.
> 
> Rick

I'm having trouble getting Debian Buster up and running on qemu-system-sparc64,
so I haven't been able to reproduce this yet. Is that setup currently working
for people?

This patch involves resetting memory permissions when freeing executable
memory. It looks like sparc64 Linux doesn't implement the set_memory_*()
functions, so that part shouldn't change anything. The other main change is
that vfree() now always does a TLB flush when a BPF JIT image is freed; that
already happens some of the time, so it shouldn't be too different either.
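
Roughly, the part of the commit that touches the BPF JIT looks like the sketch
below (simplified, not the exact upstream diff; the wrapper function names are
just illustrative, while bpf_jit_alloc_exec(), bpf_jit_free_exec() and
set_vm_flush_reset_perms() are the real helpers):

    #include <linux/filter.h>
    #include <linux/vmalloc.h>

    /*
     * Simplified sketch of what the bisected commit does for BPF JIT
     * images: the allocation is tagged with VM_FLUSH_RESET_PERMS so
     * that vfree() resets the page permissions and flushes the TLB,
     * instead of the JIT flipping the pages back to RW itself before
     * freeing.
     */
    static struct bpf_binary_header *jit_image_alloc(unsigned int size)
    {
            struct bpf_binary_header *hdr;

            hdr = bpf_jit_alloc_exec(size);     /* executable vmalloc memory */
            if (!hdr)
                    return NULL;

            /* New: vfree() will reset permissions + flush the TLB on free. */
            set_vm_flush_reset_perms(hdr);
            return hdr;
    }

    static void jit_image_free(struct bpf_binary_header *hdr)
    {
            /*
             * The old code set the pages back to RW here first; with
             * VM_FLUSH_RESET_PERMS that step is gone, and the flush is
             * done unconditionally in vfree() instead.
             */
            bpf_jit_free_exec(hdr);             /* module_memfree() -> vfree() */
    }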

So I can't see why it would be especially likely to cause a sparc-specific
problem. Is there any chance this is an intermittent issue?

Alternatively, we could just exempt architectures with no set_memory_*()
implementations from this new behavior. That would unfortunately lose the
benefit for architectures that lack set_memory_*() but do have executable
permission bits.
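
A minimal, hypothetical sketch of that idea, gating on the existing
CONFIG_ARCH_HAS_SET_MEMORY symbol (the helper below is not upstream code, just
an illustration):

    #include <linux/vmalloc.h>

    /*
     * Hypothetical sketch only: tag the allocation for the vfree()-time
     * permission reset + TLB flush only when the architecture actually
     * provides set_memory_*() implementations, so arches like sparc64
     * would keep the old free path.
     */
    static inline void maybe_set_vm_flush_reset_perms(void *addr)
    {
    #ifdef CONFIG_ARCH_HAS_SET_MEMORY
            set_vm_flush_reset_perms(addr);
    #endif
    }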

But then this patch would have no effect on sparc64, which might make the
problem go away without really debugging it.

Thanks,

Rick


* Re: bpf VM_FLUSH_RESET_PERMS breaks sparc64 boot
  2019-05-14  1:15     ` Edgecombe, Rick P
@ 2019-05-14  4:58       ` Meelis Roos
  -1 siblings, 0 replies; 8+ messages in thread
From: Meelis Roos @ 2019-05-14  4:58 UTC (permalink / raw)
  To: Edgecombe, Rick P, netdev, sparclinux, bpf; +Cc: davem, peterz, namit

> I'm having trouble getting Debian Buster up and running on qemu-system-sparc64,
> so I haven't been able to reproduce this yet. Is that setup currently working
> for people?

I just reinstalled the machine from
https://cdimage.debian.org/cdimage/ports/2019-05-09/debian-10.0-sparc64-NETINST-1.iso
and there's an even newer build one directory level up.

> This patch involves resetting memory permissions when freeing executable
> memory. It looks like sparc64 Linux doesn't implement the set_memory_*()
> functions, so that part shouldn't change anything. The other main change is
> that vfree() now always does a TLB flush when a BPF JIT image is freed; that
> already happens some of the time, so it shouldn't be too different either.

That part I do not know.

> So I can't see why it would be especially likely to cause a sparc-specific
> problem. Is there any chance this is an intermittent issue?

So far it has seemed 100% reproducible, at least in the bisect that led here.
The only variation I saw was whether it just sat there (with a newer git
snapshot) or soon spewed out RCU and workqueue lockup warnings like the ones I
posted.

I can do some tests and boot the same kernel some more times.

-- 
Meelis Roos

