* d_lookup: Unable to handle kernel paging request
@ 2019-05-22 10:40 Vicente Bergas
  2019-05-22 13:53 ` Al Viro
  0 siblings, 1 reply; 19+ messages in thread
From: Vicente Bergas @ 2019-05-22 10:40 UTC (permalink / raw)
  To: Alexander Viro; +Cc: linux-fsdevel, linux-kernel

Hi,
since a recent update the kernel is reporting d_lookup errors.
They appear randomly and after each error the affected file or directory
is no longer accessible.
The kernel is built with GCC 9.1.0 on ARM64.
Four traces from different workloads follow.

This trace is from v5.1-12511-g72cf0b07418a while untarring into a tmpfs
filesystem:

Unable to handle kernel paging request at virtual address 0000880001000018
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp = 000000007ccc6c7d
[0000880001000018] pgd=0000000000000000
Internal error: Oops: 96000004 [#1] SMP
Process tar (pid: 1673, stack limit = 0x0000000083be9793)
CPU: 5 PID: 1673 Comm: tar Not tainted 5.1.0 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup+0x58/0x198
lr : d_lookup+0x38/0x68
sp : ffff0000126e3ba0
x29: ffff0000126e3ba0 x28: ffff0000126e3d68 
x27: 0000000000000000 x26: ffff80008201d300 
x25: 0000000000000001 x24: ffffffffffffffff 
x23: 00000000ce986489 x22: 0000000000000000 
x21: 0000000000000001 x20: ffff0000126e3d68 
x19: 0000880001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: fefefefefefefeff x10: a4d0a4a8a4fea3d0 
x9 : 2f062c662d62dfa7 x8 : f2025989e6593ef3 
x7 : b24a95208032f7e2 x6 : 0000000000000001 
x5 : 0000000000000000 x4 : ffff0000126e3d68 
x3 : ffff000010828a68 x2 : ffff000010828000 
x1 : ffff8000f3000000 x0 : 00000000000674c3 
Call trace:
 __d_lookup+0x58/0x198
 d_lookup+0x38/0x68
 path_openat+0x4a8/0xfb8
 do_filp_open+0x60/0xd8
 do_sys_open+0x144/0x1f8
 __arm64_sys_openat+0x20/0x28
 el0_svc_handler+0x68/0xd8
 el0_svc+0x8/0xc
Code: 92800018 a9025bf5 d2800016 52800035 (b9401a62) 
---[ end trace 8d5c8dc953aa6402 ]---

This trace is from v5.2.0-rc1:

Unable to handle kernel paging request at virtual address 0000880001000018
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp = 000000004850c69c
[0000880001000018] pgd=0000000000000000
Internal error: Oops: 96000004 [#1] SMP
Process read_sensors (pid: 926, stack limit = 0x00000000aaf00007)
CPU: 0 PID: 926 Comm: read_sensors Not tainted 5.2.0-rc1 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup+0x58/0x198
lr : d_lookup+0x38/0x68
sp : ffff000011ee3c60
x29: ffff000011ee3c60 x28: ffff000011ee3d98 
x27: 0000000000000000 x26: ffff8000f28083c0 
x25: 0000000000000276 x24: ffffffffffffffff 
x23: 00000000ce97b3cf x22: 0000000000000000 
x21: 0000000000000001 x20: ffff000011ee3d98 
x19: 0000880001000000 x18: 0000000000000000 
x17: 0000000000000002 x16: 0000000000000001 
x15: ffff8000e4b3a8c8 x14: ffffffffffffffff 
x13: ffff000011ee3db8 x12: ffff000011ee3dad 
x11: 0000000000000000 x10: ffff000011ee3d20 
x9 : 00000000ffffffd8 x8 : 000000000000039e 
x7 : 0000000000000000 x6 : 0000000000000002 
x5 : 61c8864680b583eb x4 : 42bed11fefc04553 
x3 : ffff000010828a68 x2 : ffff000010828000 
x1 : ffff8000f3000000 x0 : 00000000000674bd 
Call trace:
 __d_lookup+0x58/0x198
 d_lookup+0x38/0x68
 d_hash_and_lookup+0x50/0x68
 proc_flush_task+0x98/0x198
 release_task+0x60/0x4b8
 do_exit+0x680/0xa68
 __arm64_sys_exit+0x14/0x18
 el0_svc_handler+0x68/0xd8
 el0_svc+0x8/0xc
Code: 92800018 a9025bf5 d2800016 52800035 (b9401a62) 
---[ end trace c9b8ee5d6aa547ae ]---

This trace is from v5.2.0-rc1 while executing 'git pull -r' from f2fs. It
got repeated several times:

Unable to handle kernel paging request at virtual address 0000000000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp = 0000000092bdb9cd
[0000000000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#2] SMP
Process git (pid: 2996, stack limit = 0x000000004b733f9b)
CPU: 5 PID: 2996 Comm: git Tainted: G      D           5.2.0-rc1 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup_rcu+0x68/0x198
lr : lookup_fast+0x44/0x2e8
sp : ffff000013f83aa0
x29: ffff000013f83aa0 x28: 00000000ce9798e9 
x27: ffffffffffffffff x26: 0000000000000015 
x25: ffff800009268043 x24: ffff000013f83b6c 
x23: 0000000000000005 x22: 00000015ce9798e9 
x21: ffff800039a7a780 x20: 0000000000000000 
x19: 0000000001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: fefefefefefefeff x10: d0d0d0b3b3fea4a3 
x9 : f2862b1e24d6cb78 x8 : fac1836b95d6b53a 
x7 : c1462108f502da45 x6 : 0847a816d22e0a31 
x5 : ffff800009268043 x4 : ffff8000f3000000 
x3 : ffff000013f83c88 x2 : ffff000013f83b6c 
x1 : 00000000000674bc x0 : ffff800039a7a780 
Call trace:
 __d_lookup_rcu+0x68/0x198
 lookup_fast+0x44/0x2e8
 walk_component+0x34/0x2e0
 path_lookupat.isra.0+0x5c/0x1e0
 filename_lookup+0x78/0xf0
 user_path_at_empty+0x44/0x58
 vfs_statx+0x70/0xd0
 __se_sys_newfstatat+0x20/0x40
 __arm64_sys_newfstatat+0x18/0x20
 el0_svc_handler+0x68/0xd8
 el0_svc+0x8/0xc
Code: 9280001b 14000003 f9400273 b4000793 (b85fc265) 
---[ end trace c9b8ee5d6aa547af ]---

This trace is from v5.2.0-rc1 while executing 'rm -rf' on the directory
affected by the previous trace:

Unable to handle kernel paging request at virtual address 0000000001000018
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp = 00000000649981ae
[0000000001000018] pgd=0000000000000000
Internal error: Oops: 96000004 [#10] SMP
Process rm (pid: 6401, stack limit = 0x00000000e524cae1)
CPU: 5 PID: 6401 Comm: rm Tainted: G      D           5.2.0-rc1 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup+0x58/0x198
lr : d_lookup+0x38/0x68
sp : ffff000016993d00
x29: ffff000016993d00 x28: ffff000016993e70 
x27: 0000000000000000 x26: ffff800039a7a780 
x25: 0000000056000000 x24: ffffffffffffffff 
x23: 00000000ce9798e9 x22: 0000000000000000 
x21: 0000000000000001 x20: ffff000016993e70 
x19: 0000000001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: fefefefefefefeff x10: d0d0d0b3b3fea4a3 
x9 : f2862b1e24d6cb78 x8 : fac1836b95d6b53a 
x7 : c1462108f502da45 x6 : 00000000ffffffff 
x5 : 0000000000000000 x4 : ffff8000eb31d500 
x3 : ffff000010828a68 x2 : ffff000010828000 
x1 : ffff8000f3000000 x0 : 00000000000674bc 
Call trace:
 __d_lookup+0x58/0x198
 d_lookup+0x38/0x68
 lookup_dcache+0x20/0x80
 __lookup_hash+0x20/0xc8
 do_unlinkat+0x10c/0x278
 __arm64_sys_unlinkat+0x34/0x60
 el0_svc_handler+0x68/0xd8
 el0_svc+0x8/0xc
Code: 92800018 a9025bf5 d2800016 52800035 (b9401a62) 
---[ end trace c9b8ee5d6aa547b7 ]---

Regards,
  Vicenç.



* Re: d_lookup: Unable to handle kernel paging request
  2019-05-22 10:40 d_lookup: Unable to handle kernel paging request Vicente Bergas
@ 2019-05-22 13:53 ` Al Viro
  2019-05-22 15:44   ` Vicente Bergas
  0 siblings, 1 reply; 19+ messages in thread
From: Al Viro @ 2019-05-22 13:53 UTC (permalink / raw)
  To: Vicente Bergas; +Cc: linux-fsdevel, linux-kernel

On Wed, May 22, 2019 at 12:40:55PM +0200, Vicente Bergas wrote:
> Hi,
> since a recent update the kernel is reporting d_lookup errors.
> They appear randomly and after each error the affected file or directory
> is no longer accessible.
> The kernel is built with GCC 9.1.0 on ARM64.
> Four traces from different workloads follow.

Interesting...  bisection would be useful.

> This trace is from v5.1-12511-g72cf0b07418a while untarring into a tmpfs
> filesystem:
> 
> Unable to handle kernel paging request at virtual address 0000880001000018
> user pgtable: 4k pages, 48-bit VAs, pgdp = 000000007ccc6c7d
> [0000880001000018] pgd=0000000000000000

Attempt to dereference 0x0000880001000018, which is not mapped at all?

> pc : __d_lookup+0x58/0x198

... and so would objdump of the function in question.

> This trace is from v5.2.0-rc1:
> Unable to handle kernel paging request at virtual address 0000880001000018
[apparently identical oops, modulo the call chain to d_lookup(); since that's
almost certainly buggered data structures encountered during the hash lookup,
exact callchain doesn't matter all that much; procfs is the filesystem involved]

> This trace is from v5.2.0-rc1 while executing 'git pull -r' from f2fs. It
> got repeated several times:
> 
> Unable to handle kernel paging request at virtual address 0000000000fffffc
> user pgtable: 4k pages, 48-bit VAs, pgdp = 0000000092bdb9cd
> [0000000000fffffc] pgd=0000000000000000
> pc : __d_lookup_rcu+0x68/0x198

> This trace is from v5.2.0-rc1 while executing 'rm -rf' on the directory
> affected by the previous trace:
> 
> Unable to handle kernel paging request at virtual address 0000000001000018

... and addresses involved are

0000880001000018
0000000000fffffc
0000000001000018

AFAICS, the only registers with the value in the vicinity of those addresses
had been (in all cases so far) x19 - 0000880001000000 in the first two traces,
0000000001000000 in the last two...

I'd really like to see the disassembly of the functions involved (as well as
.config in question).


* Re: d_lookup: Unable to handle kernel paging request
  2019-05-22 13:53 ` Al Viro
@ 2019-05-22 15:44   ` Vicente Bergas
  2019-05-22 16:29     ` Al Viro
  0 siblings, 1 reply; 19+ messages in thread
From: Vicente Bergas @ 2019-05-22 15:44 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-fsdevel, linux-kernel

Hi Al,

On Wednesday, May 22, 2019 3:53:31 PM CEST, Al Viro wrote:
> On Wed, May 22, 2019 at 12:40:55PM +0200, Vicente Bergas wrote:
>> Hi,
>> since a recent update the kernel is reporting d_lookup errors.
>> They appear randomly and after each error the affected file or directory
>> is no longer accessible.
>> The kernel is built with GCC 9.1.0 on ARM64.
>> Four traces from different workloads follow.
>
> Interesting...  bisection would be useful.

Agreed, but that would be difficult because of the randomness.
I have been running for days with no issues on a known-bad kernel.
The issue could also be related to the upgrade to GCC 9.

>> This trace is from v5.1-12511-g72cf0b07418a while untarring into a tmpfs
>> filesystem:
>> 
>> Unable to handle kernel paging request at virtual address 0000880001000018
>> user pgtable: 4k pages, 48-bit VAs, pgdp = 000000007ccc6c7d
>> [0000880001000018] pgd=0000000000000000
>
> Attempt to dereference 0x0000880001000018, which is not mapped at all?
>
>> pc : __d_lookup+0x58/0x198
>
> ... and so would objdump of the function in question.

Here is the dump from another build of the exact
same version (the build is reproducible).

objdump -x -S fs/dcache.o

...
0000000000002d00 <__d_lookup_rcu>:
    2d00:	a9b97bfd 	stp	x29, x30, [sp, #-112]!
    2d04:	aa0103e3 	mov	x3, x1
    2d08:	90000004 	adrp	x4, 0 <find_submount>
			2d08: R_AARCH64_ADR_PREL_PG_HI21	.data..read_mostly
    2d0c:	910003fd 	mov	x29, sp
    2d10:	a90153f3 	stp	x19, x20, [sp, #16]
    2d14:	91000081 	add	x1, x4, #0x0
			2d14: R_AARCH64_ADD_ABS_LO12_NC	.data..read_mostly
    2d18:	a9025bf5 	stp	x21, x22, [sp, #32]
    2d1c:	a9046bf9 	stp	x25, x26, [sp, #64]
    2d20:	a9406476 	ldp	x22, x25, [x3]
    2d24:	b9400821 	ldr	w1, [x1, #8]
    2d28:	f9400084 	ldr	x4, [x4]
			2d28: R_AARCH64_LDST64_ABS_LO12_NC	.data..read_mostly
    2d2c:	1ac126c1 	lsr	w1, w22, w1
    2d30:	f8617893 	ldr	x19, [x4, x1, lsl #3]
    2d34:	f27ffa73 	ands	x19, x19, #0xfffffffffffffffe
    2d38:	54000920 	b.eq	2e5c <__d_lookup_rcu+0x15c>  // b.none
    2d3c:	aa0003f5 	mov	x21, x0
    2d40:	d360feda 	lsr	x26, x22, #32
    2d44:	a90363f7 	stp	x23, x24, [sp, #48]
    2d48:	aa0203f8 	mov	x24, x2
    2d4c:	d3608ad7 	ubfx	x23, x22, #32, #3
    2d50:	a90573fb 	stp	x27, x28, [sp, #80]
    2d54:	2a1603fc 	mov	w28, w22
    2d58:	9280001b 	mov	x27, #0xffffffffffffffff    	// #-1
    2d5c:	14000003 	b	2d68 <__d_lookup_rcu+0x68>
    2d60:	f9400273 	ldr	x19, [x19]
    2d64:	b4000793 	cbz	x19, 2e54 <__d_lookup_rcu+0x154>
    2d68:	b85fc265 	ldur	w5, [x19, #-4]
    2d6c:	d50339bf 	dmb	ishld
    2d70:	f9400a64 	ldr	x4, [x19, #16]
    2d74:	d1002260 	sub	x0, x19, #0x8
    2d78:	eb0402bf 	cmp	x21, x4
    2d7c:	54ffff21 	b.ne	2d60 <__d_lookup_rcu+0x60>  // b.any
    2d80:	121f78b4 	and	w20, w5, #0xfffffffe
    2d84:	aa0003e9 	mov	x9, x0
    2d88:	f9400664 	ldr	x4, [x19, #8]
    2d8c:	b4fffea4 	cbz	x4, 2d60 <__d_lookup_rcu+0x60>
    2d90:	b94002a4 	ldr	w4, [x21]
    2d94:	37080404 	tbnz	w4, #1, 2e14 <__d_lookup_rcu+0x114>
    2d98:	f9401000 	ldr	x0, [x0, #32]
    2d9c:	eb16001f 	cmp	x0, x22
    2da0:	54fffe01 	b.ne	2d60 <__d_lookup_rcu+0x60>  // b.any
    2da4:	f9401265 	ldr	x5, [x19, #32]
    2da8:	2a1a03e6 	mov	w6, w26
    2dac:	cb050328 	sub	x8, x25, x5
    2db0:	14000006 	b	2dc8 <__d_lookup_rcu+0xc8>
    2db4:	910020a5 	add	x5, x5, #0x8
    2db8:	eb07001f 	cmp	x0, x7
    2dbc:	54fffd21 	b.ne	2d60 <__d_lookup_rcu+0x60>  // b.any
    2dc0:	710020c6 	subs	w6, w6, #0x8
    2dc4:	54000160 	b.eq	2df0 <__d_lookup_rcu+0xf0>  // b.none
    2dc8:	8b0800a4 	add	x4, x5, x8
    2dcc:	6b1700df 	cmp	w6, w23
    2dd0:	f9400087 	ldr	x7, [x4]
    2dd4:	f94000a0 	ldr	x0, [x5]
    2dd8:	54fffee1 	b.ne	2db4 <__d_lookup_rcu+0xb4>  // b.any
    2ddc:	531d72e1 	lsl	w1, w23, #3
    2de0:	ca070000 	eor	x0, x0, x7
    2de4:	9ac12361 	lsl	x1, x27, x1
    2de8:	ea21001f 	bics	xzr, x0, x1
    2dec:	54fffba1 	b.ne	2d60 <__d_lookup_rcu+0x60>  // b.any
    2df0:	b9000314 	str	w20, [x24]
    2df4:	aa0903e0 	mov	x0, x9
    2df8:	a94153f3 	ldp	x19, x20, [sp, #16]
    2dfc:	a9425bf5 	ldp	x21, x22, [sp, #32]
    2e00:	a94363f7 	ldp	x23, x24, [sp, #48]
    2e04:	a9446bf9 	ldp	x25, x26, [sp, #64]
    2e08:	a94573fb 	ldp	x27, x28, [sp, #80]
    2e0c:	a8c77bfd 	ldp	x29, x30, [sp], #112
    2e10:	d65f03c0 	ret
    2e14:	b9402001 	ldr	w1, [x0, #32]
    2e18:	6b01039f 	cmp	w28, w1
    2e1c:	54fffa21 	b.ne	2d60 <__d_lookup_rcu+0x60>  // b.any
    2e20:	b9402401 	ldr	w1, [x0, #36]
    2e24:	f9401402 	ldr	x2, [x0, #40]
    2e28:	d50339bf 	dmb	ishld
    2e2c:	b85fc264 	ldur	w4, [x19, #-4]
    2e30:	6b04029f 	cmp	w20, w4
    2e34:	54000221 	b.ne	2e78 <__d_lookup_rcu+0x178>  // b.any
    2e38:	f94032a4 	ldr	x4, [x21, #96]
    2e3c:	a90627e3 	stp	x3, x9, [sp, #96]
    2e40:	f9400c84 	ldr	x4, [x4, #24]
    2e44:	d63f0080 	blr	x4
    2e48:	a94627e3 	ldp	x3, x9, [sp, #96]
    2e4c:	34fffd20 	cbz	w0, 2df0 <__d_lookup_rcu+0xf0>
    2e50:	17ffffc4 	b	2d60 <__d_lookup_rcu+0x60>
    2e54:	a94363f7 	ldp	x23, x24, [sp, #48]
    2e58:	a94573fb 	ldp	x27, x28, [sp, #80]
    2e5c:	d2800009 	mov	x9, #0x0                   	// #0
    2e60:	aa0903e0 	mov	x0, x9
    2e64:	a94153f3 	ldp	x19, x20, [sp, #16]
    2e68:	a9425bf5 	ldp	x21, x22, [sp, #32]
    2e6c:	a9446bf9 	ldp	x25, x26, [sp, #64]
    2e70:	a8c77bfd 	ldp	x29, x30, [sp], #112
    2e74:	d65f03c0 	ret
    2e78:	d503203f 	yield
    2e7c:	b85fc265 	ldur	w5, [x19, #-4]
    2e80:	d50339bf 	dmb	ishld
    2e84:	f9400c01 	ldr	x1, [x0, #24]
    2e88:	121f78b4 	and	w20, w5, #0xfffffffe
    2e8c:	eb15003f 	cmp	x1, x21
    2e90:	54fff681 	b.ne	2d60 <__d_lookup_rcu+0x60>  // b.any
    2e94:	17ffffbd 	b	2d88 <__d_lookup_rcu+0x88>

0000000000002e98 <__d_lookup>:
    2e98:	a9b97bfd 	stp	x29, x30, [sp, #-112]!
    2e9c:	90000002 	adrp	x2, 0 <find_submount>
			2e9c: R_AARCH64_ADR_PREL_PG_HI21	.data..read_mostly
    2ea0:	91000043 	add	x3, x2, #0x0
			2ea0: R_AARCH64_ADD_ABS_LO12_NC	.data..read_mostly
    2ea4:	910003fd 	mov	x29, sp
    2ea8:	a90573fb 	stp	x27, x28, [sp, #80]
    2eac:	aa0103fc 	mov	x28, x1
    2eb0:	a90153f3 	stp	x19, x20, [sp, #16]
    2eb4:	a90363f7 	stp	x23, x24, [sp, #48]
    2eb8:	a9046bf9 	stp	x25, x26, [sp, #64]
    2ebc:	aa0003fa 	mov	x26, x0
    2ec0:	b9400397 	ldr	w23, [x28]
    2ec4:	b9400860 	ldr	w0, [x3, #8]
    2ec8:	f9400041 	ldr	x1, [x2]
			2ec8: R_AARCH64_LDST64_ABS_LO12_NC	.data..read_mostly
    2ecc:	1ac026e0 	lsr	w0, w23, w0
    2ed0:	f8607833 	ldr	x19, [x1, x0, lsl #3]
    2ed4:	f27ffa73 	ands	x19, x19, #0xfffffffffffffffe
    2ed8:	54000320 	b.eq	2f3c <__d_lookup+0xa4>  // b.none
    2edc:	5280001b 	mov	w27, #0x0                   	// #0
    2ee0:	92800018 	mov	x24, #0xffffffffffffffff    	// #-1
    2ee4:	a9025bf5 	stp	x21, x22, [sp, #32]
    2ee8:	d2800016 	mov	x22, #0x0                   	// #0
    2eec:	52800035 	mov	w21, #0x1                   	// #1
    2ef0:	b9401a62 	ldr	w2, [x19, #24]
    2ef4:	d1002274 	sub	x20, x19, #0x8
    2ef8:	6b17005f 	cmp	w2, w23
    2efc:	540001a1 	b.ne	2f30 <__d_lookup+0x98>  // b.any
    2f00:	91014279 	add	x25, x19, #0x50
    2f04:	f9800331 	prfm	pstl1strm, [x25]
    2f08:	885fff21 	ldaxr	w1, [x25]
    2f0c:	4a160020 	eor	w0, w1, w22
    2f10:	35000060 	cbnz	w0, 2f1c <__d_lookup+0x84>
    2f14:	88007f35 	stxr	w0, w21, [x25]
    2f18:	35ffff80 	cbnz	w0, 2f08 <__d_lookup+0x70>
    2f1c:	35000521 	cbnz	w1, 2fc0 <__d_lookup+0x128>
    2f20:	f9400e82 	ldr	x2, [x20, #24]
    2f24:	eb1a005f 	cmp	x2, x26
    2f28:	540001a0 	b.eq	2f5c <__d_lookup+0xc4>  // b.none
    2f2c:	089fff3b 	stlrb	w27, [x25]
    2f30:	f9400273 	ldr	x19, [x19]
    2f34:	b5fffdf3 	cbnz	x19, 2ef0 <__d_lookup+0x58>
    2f38:	a9425bf5 	ldp	x21, x22, [sp, #32]
    2f3c:	d2800008 	mov	x8, #0x0                   	// #0
    2f40:	aa0803e0 	mov	x0, x8
    2f44:	a94153f3 	ldp	x19, x20, [sp, #16]
    2f48:	a94363f7 	ldp	x23, x24, [sp, #48]
    2f4c:	a9446bf9 	ldp	x25, x26, [sp, #64]
    2f50:	a94573fb 	ldp	x27, x28, [sp, #80]
    2f54:	a8c77bfd 	ldp	x29, x30, [sp], #112
    2f58:	d65f03c0 	ret
    2f5c:	f9400660 	ldr	x0, [x19, #8]
    2f60:	b4fffe60 	cbz	x0, 2f2c <__d_lookup+0x94>
    2f64:	b9400340 	ldr	w0, [x26]
    2f68:	aa1403e8 	mov	x8, x20
    2f6c:	b9402681 	ldr	w1, [x20, #36]
    2f70:	370802e0 	tbnz	w0, #1, 2fcc <__d_lookup+0x134>
    2f74:	b9400784 	ldr	w4, [x28, #4]
    2f78:	6b04003f 	cmp	w1, w4
    2f7c:	54fffd81 	b.ne	2f2c <__d_lookup+0x94>  // b.any
    2f80:	f9400787 	ldr	x7, [x28, #8]
    2f84:	12000881 	and	w1, w4, #0x7
    2f88:	f9401265 	ldr	x5, [x19, #32]
    2f8c:	cb0500e7 	sub	x7, x7, x5
    2f90:	14000003 	b	2f9c <__d_lookup+0x104>
    2f94:	71002084 	subs	w4, w4, #0x8
    2f98:	54000300 	b.eq	2ff8 <__d_lookup+0x160>  // b.none
    2f9c:	8b0700a2 	add	x2, x5, x7
    2fa0:	6b04003f 	cmp	w1, w4
    2fa4:	f9400046 	ldr	x6, [x2]
    2fa8:	f94000a0 	ldr	x0, [x5]
    2fac:	54000340 	b.eq	3014 <__d_lookup+0x17c>  // b.none
    2fb0:	910020a5 	add	x5, x5, #0x8
    2fb4:	eb06001f 	cmp	x0, x6
    2fb8:	54fffee0 	b.eq	2f94 <__d_lookup+0xfc>  // b.none
    2fbc:	17ffffdc 	b	2f2c <__d_lookup+0x94>
    2fc0:	aa1903e0 	mov	x0, x25
    2fc4:	94000000 	bl	0 <queued_spin_lock_slowpath>
			2fc4: R_AARCH64_CALL26	queued_spin_lock_slowpath
    2fc8:	17ffffd6 	b	2f20 <__d_lookup+0x88>
    2fcc:	f9403340 	ldr	x0, [x26, #96]
    2fd0:	aa1c03e3 	mov	x3, x28
    2fd4:	f9401682 	ldr	x2, [x20, #40]
    2fd8:	f90037f4 	str	x20, [sp, #104]
    2fdc:	f9400c04 	ldr	x4, [x0, #24]
    2fe0:	aa1403e0 	mov	x0, x20
    2fe4:	d63f0080 	blr	x4
    2fe8:	7100001f 	cmp	w0, #0x0
    2fec:	1a9f17e0 	cset	w0, eq  // eq = none
    2ff0:	f94037e8 	ldr	x8, [sp, #104]
    2ff4:	34fff9c0 	cbz	w0, 2f2c <__d_lookup+0x94>
    2ff8:	b9405e80 	ldr	w0, [x20, #92]
    2ffc:	52800001 	mov	w1, #0x0                   	// #0
    3000:	11000400 	add	w0, w0, #0x1
    3004:	b9005e80 	str	w0, [x20, #92]
    3008:	089fff21 	stlrb	w1, [x25]
    300c:	a9425bf5 	ldp	x21, x22, [sp, #32]
    3010:	17ffffcc 	b	2f40 <__d_lookup+0xa8>
    3014:	531d7021 	lsl	w1, w1, #3
    3018:	ca060000 	eor	x0, x0, x6
    301c:	9ac12301 	lsl	x1, x24, x1
    3020:	ea21001f 	bics	xzr, x0, x1
    3024:	1a9f17e0 	cset	w0, eq  // eq = none
    3028:	34fff820 	cbz	w0, 2f2c <__d_lookup+0x94>
    302c:	17fffff3 	b	2ff8 <__d_lookup+0x160>

0000000000003030 <d_lookup>:
    3030:	a9bd7bfd 	stp	x29, x30, [sp, #-48]!
    3034:	910003fd 	mov	x29, sp
    3038:	a90153f3 	stp	x19, x20, [sp, #16]
    303c:	90000013 	adrp	x19, 0 <find_submount>
			303c: R_AARCH64_ADR_PREL_PG_HI21	.data..cacheline_aligned
    3040:	aa0103f4 	mov	x20, x1
    3044:	91000273 	add	x19, x19, #0x0
			3044: R_AARCH64_ADD_ABS_LO12_NC	.data..cacheline_aligned
    3048:	a9025bf5 	stp	x21, x22, [sp, #32]
    304c:	aa0003f5 	mov	x21, x0
    3050:	b9400276 	ldr	w22, [x19]
    3054:	370001d6 	tbnz	w22, #0, 308c <d_lookup+0x5c>
    3058:	d50339bf 	dmb	ishld
    305c:	aa1403e1 	mov	x1, x20
    3060:	aa1503e0 	mov	x0, x21
    3064:	94000000 	bl	2e98 <__d_lookup>
			3064: R_AARCH64_CALL26	__d_lookup
    3068:	b50000a0 	cbnz	x0, 307c <d_lookup+0x4c>
    306c:	d50339bf 	dmb	ishld
    3070:	b9400261 	ldr	w1, [x19]
    3074:	6b16003f 	cmp	w1, w22
    3078:	54fffec1 	b.ne	3050 <d_lookup+0x20>  // b.any
    307c:	a94153f3 	ldp	x19, x20, [sp, #16]
    3080:	a9425bf5 	ldp	x21, x22, [sp, #32]
    3084:	a8c37bfd 	ldp	x29, x30, [sp], #48
    3088:	d65f03c0 	ret
    308c:	d503203f 	yield
    3090:	17fffff0 	b	3050 <d_lookup+0x20>
    3094:	d503201f 	nop
...

>> This trace is from v5.2.0-rc1:
>> Unable to handle kernel paging request at virtual address 0000880001000018
> [apparently identical oops, modulo the call chain to d_lookup(); since that's
> almost certainly buggered data structures encountered during the hash lookup,
> exact callchain doesn't matter all that much; procfs is the filesystem involved]
>
>> This trace is from v5.2.0-rc1 while executing 'git pull -r' from f2fs. It
>> got repeated several times:
>> 
>> Unable to handle kernel paging request at virtual address 0000000000fffffc
>> user pgtable: 4k pages, 48-bit VAs, pgdp = 0000000092bdb9cd
>> [0000000000fffffc] pgd=0000000000000000
>> pc : __d_lookup_rcu+0x68/0x198
>
>> This trace is from v5.2.0-rc1 while executing 'rm -rf' on the directory
>> affected by the previous trace:
>> 
>> Unable to handle kernel paging request at virtual address 0000000001000018
>
> ... and addresses involved are
>
> 0000880001000018
> 0000000000fffffc
> 0000000001000018
>
> AFAICS, the only registers with the value in the vicinity of those addresses
> had been (in all cases so far) x19 - 0000880001000000 in the first two traces,
> 0000000001000000 in the last two...
>
> I'd really like to see the disassembly of the functions involved (as well as
> .config in question).

Here is the .config: https://paste.debian.net/1082689

Regards,
  Vicenç.



* Re: d_lookup: Unable to handle kernel paging request
  2019-05-22 15:44   ` Vicente Bergas
@ 2019-05-22 16:29     ` Al Viro
  2019-05-24 22:21       ` Vicente Bergas
  2019-05-28  9:38       ` Vicente Bergas
  0 siblings, 2 replies; 19+ messages in thread
From: Al Viro @ 2019-05-22 16:29 UTC (permalink / raw)
  To: Vicente Bergas; +Cc: linux-fsdevel, linux-kernel

On Wed, May 22, 2019 at 05:44:30PM +0200, Vicente Bergas wrote:

>    2d30:	f8617893 	ldr	x19, [x4, x1, lsl #3]
>    2d34:	f27ffa73 	ands	x19, x19, #0xfffffffffffffffe
>    2d38:	54000920 	b.eq	2e5c <__d_lookup_rcu+0x15c>  // b.none
>    2d3c:	aa0003f5 	mov	x21, x0
>    2d40:	d360feda 	lsr	x26, x22, #32
>    2d44:	a90363f7 	stp	x23, x24, [sp, #48]
>    2d48:	aa0203f8 	mov	x24, x2
>    2d4c:	d3608ad7 	ubfx	x23, x22, #32, #3
>    2d50:	a90573fb 	stp	x27, x28, [sp, #80]
>    2d54:	2a1603fc 	mov	w28, w22
>    2d58:	9280001b 	mov	x27, #0xffffffffffffffff    	// #-1
>    2d5c:	14000003 	b	2d68 <__d_lookup_rcu+0x68>
>    2d60:	f9400273 	ldr	x19, [x19]
>    2d64:	b4000793 	cbz	x19, 2e54 <__d_lookup_rcu+0x154>

OK, that looks like
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash)
in there, x19 being 'node'.

>    2d68:	b85fc265 	ldur	w5, [x19, #-4]
>    2d6c:	d50339bf 	dmb	ishld

... and that's seq = raw_seqcount_begin(&dentry->d_seq), with
->d_seq being 4 bytes before ->d_hash.  So that one has stepped into
0x1000000 (i.e. 1<<24) in hlist forward pointer (or head - either is
possible).
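
For reference, a trimmed-down sketch of the structure and loop in question
(field order and offsets as implied by the disassembly above and by
include/linux/dcache.h of that era; not the verbatim source, and the
DCACHE_OP_COMPARE branch is omitted):

	struct dentry {
		unsigned int d_flags;		/* +0  */
		seqcount_t d_seq;		/* +4, i.e. 4 bytes before d_hash */
		struct hlist_bl_node d_hash;	/* +8, this is what 'node' points at */
		struct dentry *d_parent;	/* +24 */
		struct qstr d_name;		/* +32: hash_len, then name pointer */
		/* ... */
	};

	/* __d_lookup_rcu(), simplified */
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
		unsigned seq;

		seq = raw_seqcount_begin(&dentry->d_seq);	/* the ldur w5, [x19, #-4] above */
		if (dentry->d_parent != parent)
			continue;
		if (d_unhashed(dentry))
			continue;
		if (dentry->d_name.hash_len != hashlen)
			continue;
		if (dentry_cmp(dentry, str, hashlen_len(hashlen)) != 0)
			continue;
		*seqp = seq;
		return dentry;
	}
	return NULL;

so the very first thing dereferenced through a corrupted forward pointer (or
chain head) is ->d_seq, which is where this oops lands.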

> 0000000000002e98 <__d_lookup>:

>    2ed0:	f8607833 	ldr	x19, [x1, x0, lsl #3]
>    2ed4:	f27ffa73 	ands	x19, x19, #0xfffffffffffffffe
>    2ed8:	54000320 	b.eq	2f3c <__d_lookup+0xa4>  // b.none
>    2edc:	5280001b 	mov	w27, #0x0                   	// #0
>    2ee0:	92800018 	mov	x24, #0xffffffffffffffff    	// #-1
>    2ee4:	a9025bf5 	stp	x21, x22, [sp, #32]
>    2ee8:	d2800016 	mov	x22, #0x0                   	// #0
>    2eec:	52800035 	mov	w21, #0x1                   	// #1

That's
        hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {

>    2ef0:	b9401a62 	ldr	w2, [x19, #24]

... and fetching dentry->d_name.hash for subsequent
                if (dentry->d_name.hash != hash)
                        continue;

>    2ef4:	d1002274 	sub	x20, x19, #0x8
>    2ef8:	6b17005f 	cmp	w2, w23
>    2efc:	540001a1 	b.ne	2f30 <__d_lookup+0x98>  // b.any

IOW, here we have also run into bogus hlist forward pointer or head -
same 0x1000000 in one case and 0x0000880001000000 in two others.
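
As a quick sanity check, the faulting addresses line up with that if the bogus
value is taken as node == &dentry->d_hash:

	0x0000000001000000 + 0x18 = 0x0000000001000018   (->d_name.hash, __d_lookup)
	0x0000880001000000 + 0x18 = 0x0000880001000018   (->d_name.hash, __d_lookup)
	0x0000000001000000 -  0x4 = 0x0000000000fffffc   (->d_seq, __d_lookup_rcu)

i.e. every oops so far is the first load through that pointer.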

Have you tried to see if KASAN catches anything on those loads?
Use-after-free, for example...  Another thing to try: slap
	WARN_ON(entry->d_flags & DCACHE_NORCU);
in __d_rehash() and see if it triggers.


* Re: d_lookup: Unable to handle kernel paging request
  2019-05-22 16:29     ` Al Viro
@ 2019-05-24 22:21       ` Vicente Bergas
  2019-05-28  9:38       ` Vicente Bergas
  1 sibling, 0 replies; 19+ messages in thread
From: Vicente Bergas @ 2019-05-24 22:21 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-fsdevel, linux-kernel

On Wednesday, May 22, 2019 6:29:46 PM CEST, Al Viro wrote:
> On Wed, May 22, 2019 at 05:44:30PM +0200, Vicente Bergas wrote:
> ...
> IOW, here we have also run into bogus hlist forward pointer or head -
> same 0x1000000 in one case and 0x0000880001000000 in two others.
>
> Have you tried to see if KASAN catches anything on those loads?
> Use-after-free, for example...  Another thing to try: slap
> 	WARN_ON(entry->d_flags & DCACHE_NORCU);
> in __d_rehash() and see if it triggers.

Hi,
I have been running for 3 days with KASAN enabled and also with
diff a/fs/dcache.c b/fs/dcache.c
@@ -2395,3 +2395,4 @@ static void __d_rehash(struct dentry *entry)
 	struct hlist_bl_head *b = d_hash(entry->d_name.hash);
 
+	WARN_ON(entry->d_flags & DCACHE_NORCU);
 	hlist_bl_lock(b);
but the issue has not appeared again.
Next week I will try -rc2 without KASAN but with the WARN_ON and see if it
triggers.

Regards,
  Vicenç.



* Re: d_lookup: Unable to handle kernel paging request
  2019-05-22 16:29     ` Al Viro
  2019-05-24 22:21       ` Vicente Bergas
@ 2019-05-28  9:38       ` Vicente Bergas
  2019-06-18 18:35         ` Al Viro
  1 sibling, 1 reply; 19+ messages in thread
From: Vicente Bergas @ 2019-05-28  9:38 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-fsdevel, linux-kernel

On Wednesday, May 22, 2019 6:29:46 PM CEST, Al Viro wrote:
>...
> IOW, here we have also run into bogus hlist forward pointer or head -
> same 0x1000000 in one case and 0x0000880001000000 in two others.
>
> Have you tried to see if KASAN catches anything on those loads?
> Use-after-free, for example...  Another thing to try: slap
> 	WARN_ON(entry->d_flags & DCACHE_NORCU);
> in __d_rehash() and see if it triggers.

Hi Al,
after 5 days with v5.2-rc1 + KASAN + WARN_ON I could not reproduce the issue.
Nor during the first day running v5.2-rc2 + WARN_ON. But today it happened 6
times. So, there is no KASAN this time, and the WARN_ON, although present, did
not trigger.
The first trace happened while untarring a big file into tmpfs. The other
five while running "git pull -r" on several repos on f2fs.

Regards,
  Vicenç.

Unable to handle kernel paging request at virtual address 0000000001000018
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=00000000aeab4000
[0000000001000018] pgd=0000000000000000
Internal error: Oops: 96000004 [#1] SMP
CPU: 4 PID: 1172 Comm: tar Not tainted 5.2.0-rc2 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup+0x58/0x198
lr : d_lookup+0x38/0x68
sp : ffff000012663b90
x29: ffff000012663b90 x28: ffff000012663d58 
x27: 0000000000000000 x26: ffff8000ae7cc900 
x25: 0000000000000001 x24: ffffffffffffffff 
x23: 00000000ce9c8f81 x22: 0000000000000000 
x21: 0000000000000001 x20: ffff000012663d58 
x19: 0000000001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: fefefefefefefeff x10: b4fea3d0a3a4b4fe 
x9 : d237122a91454b69 x8 : a0591ae4450bed6a 
x7 : 5845a2c80f79d4e7 x6 : 0000000000000004 
x5 : 0000000000000000 x4 : ffff000012663d58 
x3 : ffff000010828a68 x2 : ffff000010828000 
x1 : ffff8000f3000000 x0 : 00000000000674e4 
Call trace:
 __d_lookup+0x58/0x198
 d_lookup+0x38/0x68
 path_openat+0x4a8/0xfb8
 do_filp_open+0x60/0xd8
 do_sys_open+0x144/0x1f8
 __arm64_sys_openat+0x20/0x28
 el0_svc_handler+0x74/0x140
 el0_svc+0x8/0xc
Code: 92800018 a9025bf5 d2800016 52800035 (b9401a62) 
---[ end trace 7fc40d1e6d2ed53e ]---
Unable to handle kernel paging request at virtual address 0000000000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=000000007af3e000
[0000000000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#1] SMP
CPU: 4 PID: 2124 Comm: git Not tainted 5.2.0-rc2 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup_rcu+0x68/0x198
lr : lookup_fast+0x44/0x2e8
sp : ffff0000130b3b60
x29: ffff0000130b3b60 x28: 00000000ce99d070 
x27: ffffffffffffffff x26: 0000000000000026 
x25: ffff8000ecec6030 x24: ffff0000130b3c2c 
x23: 0000000000000006 x22: 00000026ce99d070 
x21: ffff8000811f3d80 x20: 0000000000020000 
x19: 0000000001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: fefefefefefefeff x10: e4d0b2e6e2b4b6e9 
x9 : 5096e90463dfacb0 x8 : 2b4f8961c30ebc93 
x7 : aec349fb204a7256 x6 : 4fd9025392b5761a 
x5 : 02ff010101030100 x4 : ffff8000f3000000 
x3 : ffff0000130b3d58 x2 : ffff0000130b3c2c 
x1 : 00000000000674ce x0 : ffff8000811f3d80 
Call trace:
 __d_lookup_rcu+0x68/0x198
 lookup_fast+0x44/0x2e8
 path_openat+0x19c/0xfb8
 do_filp_open+0x60/0xd8
 do_sys_open+0x144/0x1f8
 __arm64_sys_openat+0x20/0x28
 el0_svc_handler+0x74/0x140
 el0_svc+0x8/0xc
Code: 9280001b 14000003 f9400273 b4000793 (b85fc265) 
---[ end trace 6bd1b3b7588a78fe ]---
Unable to handle kernel paging request at virtual address 0000880000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=00000000867ac000
[0000880000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#2] SMP
CPU: 4 PID: 2183 Comm: git Tainted: G      D           5.2.0-rc2 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup_rcu+0x68/0x198
lr : lookup_fast+0x44/0x2e8
sp : ffff00001325ba90
x29: ffff00001325ba90 x28: 00000000ce99f075 
x27: ffffffffffffffff x26: 0000000000000007 
x25: ffff8000ecec402a x24: ffff00001325bb5c 
x23: 0000000000000007 x22: 00000007ce99f075 
x21: ffff80007a810c00 x20: 0000000000000000 
x19: 0000880001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: fefefefefefefeff x10: d0bbbcbfa6b2b9bc 
x9 : 0000000000000000 x8 : ffff80007a810c00 
x7 : 6cad9ff29d8de19c x6 : ff94ec6f0ce3656c 
x5 : ffff8000ecec402a x4 : ffff8000f3000000 
x3 : ffff00001325bc78 x2 : ffff00001325bb5c 
x1 : 00000000000674cf x0 : ffff80007a810c00 
Call trace:
 __d_lookup_rcu+0x68/0x198
 lookup_fast+0x44/0x2e8
 walk_component+0x34/0x2e0
 path_lookupat.isra.0+0x5c/0x1e0
 filename_lookup+0x78/0xf0
 user_path_at_empty+0x44/0x58
 vfs_statx+0x70/0xd0
 __se_sys_newfstatat+0x20/0x40
 __arm64_sys_newfstatat+0x18/0x20
 el0_svc_handler+0x74/0x140
 el0_svc+0x8/0xc
Code: 9280001b 14000003 f9400273 b4000793 (b85fc265) 
---[ end trace 6bd1b3b7588a78ff ]---
Unable to handle kernel paging request at virtual address 0000880000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=00000000867ac000
[0000880000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#3] SMP
CPU: 4 PID: 2180 Comm: git Tainted: G      D           5.2.0-rc2 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup_rcu+0x68/0x198
lr : lookup_fast+0x44/0x2e8
sp : ffff000012a3ba90
x29: ffff000012a3ba90 x28: 00000000ce99f075 
x27: ffffffffffffffff x26: 0000000000000007 
x25: ffff8000ecec702a x24: ffff000012a3bb5c 
x23: 0000000000000007 x22: 00000007ce99f075 
x21: ffff80007a810c00 x20: 0000000000000000 
x19: 0000880001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: fefefefefefefeff x10: d0bbbcbfa6b2b9bc 
x9 : 0000000000000000 x8 : ffff80007a810c00 
x7 : 6cad9ff29d8de19c x6 : ff94ec6f0ce3656c 
x5 : ffff8000ecec702a x4 : ffff8000f3000000 
x3 : ffff000012a3bc78 x2 : ffff000012a3bb5c 
x1 : 00000000000674cf x0 : ffff80007a810c00 
Call trace:
 __d_lookup_rcu+0x68/0x198
 lookup_fast+0x44/0x2e8
 walk_component+0x34/0x2e0
 path_lookupat.isra.0+0x5c/0x1e0
 filename_lookup+0x78/0xf0
 user_path_at_empty+0x44/0x58
 vfs_statx+0x70/0xd0
 __se_sys_newfstatat+0x20/0x40
 __arm64_sys_newfstatat+0x18/0x20
 el0_svc_handler+0x74/0x140
 el0_svc+0x8/0xc
Code: 9280001b 14000003 f9400273 b4000793 (b85fc265) 
---[ end trace 6bd1b3b7588a7900 ]---
Unable to handle kernel paging request at virtual address 0000880000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=0000000073f2f000
[0000880000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#4] SMP
CPU: 4 PID: 2210 Comm: git Tainted: G      D           5.2.0-rc2 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup_rcu+0x68/0x198
lr : lookup_fast+0x44/0x2e8
sp : ffff0000132bba90
x29: ffff0000132bba90 x28: 00000000ce99e1a6 
x27: ffffffffffffffff x26: 000000000000000c 
x25: ffff8000f21dd036 x24: ffff0000132bbb5c 
x23: 0000000000000004 x22: 0000000cce99e1a6 
x21: ffff800074dd8d80 x20: 0000000000000000 
x19: 0000880001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: fefefefefefefeff x10: d0d0d0d0b8fea4b3 
x9 : 40bcd8645005512e x8 : c433ade89ebd10f9 
x7 : c6b69091eeb194d2 x6 : 848f758ca69635b4 
x5 : ffff8000f21dd036 x4 : ffff8000f3000000 
x3 : ffff0000132bbc78 x2 : ffff0000132bbb5c 
x1 : 00000000000674cf x0 : ffff800074dd8d80 
Call trace:
 __d_lookup_rcu+0x68/0x198
 lookup_fast+0x44/0x2e8
 walk_component+0x34/0x2e0
 path_lookupat.isra.0+0x5c/0x1e0
 filename_lookup+0x78/0xf0
 user_path_at_empty+0x44/0x58
 vfs_statx+0x70/0xd0
 __se_sys_newfstatat+0x20/0x40
 __arm64_sys_newfstatat+0x18/0x20
 el0_svc_handler+0x74/0x140
 el0_svc+0x8/0xc
Code: 9280001b 14000003 f9400273 b4000793 (b85fc265) 
---[ end trace 6bd1b3b7588a7901 ]---
Unable to handle kernel paging request at virtual address 0000880000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=0000000073f2f000
[0000880000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#5] SMP
CPU: 5 PID: 2200 Comm: git Tainted: G      D           5.2.0-rc2 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 00000005 (nzcv daif -PAN -UAO)
pc : __d_lookup_rcu+0x68/0x198
lr : lookup_fast+0x44/0x2e8
sp : ffff000013263a90
x29: ffff000013263a90 x28: 00000000ce99e1a6 
x27: ffffffffffffffff x26: 000000000000000c 
x25: ffff8000f0a6f036 x24: ffff000013263b5c 
x23: 0000000000000004 x22: 0000000cce99e1a6 
x21: ffff800074dd8d80 x20: 0000000000000000 
x19: 0000880001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: fefefefefefefeff x10: d0d0d0d0b8fea4b3 
x9 : 40bcd8645005512e x8 : c433ade89ebd10f9 
x7 : c6b69091eeb194d2 x6 : 848f758ca69635b4 
x5 : ffff8000f0a6f036 x4 : ffff8000f3000000 
x3 : ffff000013263c78 x2 : ffff000013263b5c 
x1 : 00000000000674cf x0 : ffff800074dd8d80 
Call trace:
 __d_lookup_rcu+0x68/0x198
 lookup_fast+0x44/0x2e8
 walk_component+0x34/0x2e0
 path_lookupat.isra.0+0x5c/0x1e0
 filename_lookup+0x78/0xf0
 user_path_at_empty+0x44/0x58
 vfs_statx+0x70/0xd0
 __se_sys_newfstatat+0x20/0x40
 __arm64_sys_newfstatat+0x18/0x20
 el0_svc_handler+0x74/0x140
 el0_svc+0x8/0xc
Code: 9280001b 14000003 f9400273 b4000793 (b85fc265) 
---[ end trace 6bd1b3b7588a7902 ]---



00000000000001c8 <__d_rehash>:
	return dentry_hashtable + (hash >> d_hash_shift);
     1c8:	90000001 	adrp	x1, 0 <find_submount>
			1c8: R_AARCH64_ADR_PREL_PG_HI21	.data..read_mostly
     1cc:	91000022 	add	x2, x1, #0x0
			1cc: R_AARCH64_ADD_ABS_LO12_NC	.data..read_mostly

static void __d_rehash(struct dentry *entry)
{
	struct hlist_bl_head *b = d_hash(entry->d_name.hash);

	WARN_ON(entry->d_flags & DCACHE_NORCU);
     1d0:	b9400003 	ldr	w3, [x0]
	return dentry_hashtable + (hash >> d_hash_shift);
     1d4:	f9400025 	ldr	x5, [x1]
			1d4: R_AARCH64_LDST64_ABS_LO12_NC	.data..read_mostly
     1d8:	b9400841 	ldr	w1, [x2, #8]
     1dc:	b9402002 	ldr	w2, [x0, #32]
     1e0:	1ac12442 	lsr	w2, w2, w1
     1e4:	8b020ca1 	add	x1, x5, x2, lsl #3
	WARN_ON(entry->d_flags & DCACHE_NORCU);
     1e8:	37f00343 	tbnz	w3, #30, 250 <__d_rehash+0x88>
	__READ_ONCE_SIZE;
     1ec:	f9400023 	ldr	x3, [x1]
	if (READ_ONCE(*p) & mask)
     1f0:	37000283 	tbnz	w3, #0, 240 <__d_rehash+0x78>
     1f4:	f9800031 	prfm	pstl1strm, [x1]
     1f8:	c85ffc23 	ldaxr	x3, [x1]
     1fc:	b2400064 	orr	x4, x3, #0x1
     200:	c8067c24 	stxr	w6, x4, [x1]
     204:	35ffffa6 	cbnz	w6, 1f8 <__d_rehash+0x30>
	while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
     208:	370001c3 	tbnz	w3, #0, 240 <__d_rehash+0x78>
		((unsigned long)h->first & ~LIST_BL_LOCKMASK);
     20c:	f86278a3 	ldr	x3, [x5, x2, lsl #3]
	hlist_bl_lock(b);
	hlist_bl_add_head_rcu(&entry->d_hash, b);
     210:	91002004 	add	x4, x0, #0x8
     214:	927ff863 	and	x3, x3, #0xfffffffffffffffe
	struct hlist_bl_node *first;

	/* don't need hlist_bl_first_rcu because we're under lock */
	first = hlist_bl_first(h);

	n->next = first;
     218:	f9000403 	str	x3, [x0, #8]
	if (first)
     21c:	b4000043 	cbz	x3, 224 <__d_rehash+0x5c>
		first->pprev = &n->next;
     220:	f9000464 	str	x4, [x3, #8]
	rcu_assign_pointer(h->first,
     224:	b2400084 	orr	x4, x4, #0x1
	n->pprev = &h->first;
     228:	f9000801 	str	x1, [x0, #16]
	rcu_assign_pointer(h->first,
     22c:	c89ffc24 	stlr	x4, [x1]
     230:	f86278a0 	ldr	x0, [x5, x2, lsl #3]
	old &= ~BIT_MASK(nr);
     234:	927ff800 	and	x0, x0, #0xfffffffffffffffe
     238:	c89ffc20 	stlr	x0, [x1]
	hlist_bl_unlock(b);
}
     23c:	d65f03c0 	ret
     240:	d503203f 	yield
     244:	f9400023 	ldr	x3, [x1]
		} while (test_bit(bitnum, addr));
     248:	3707ffc3 	tbnz	w3, #0, 240 <__d_rehash+0x78>
     24c:	17ffffe8 	b	1ec <__d_rehash+0x24>
	WARN_ON(entry->d_flags & DCACHE_NORCU);
     250:	d4210000 	brk	#0x800
	preempt_disable();
     254:	17ffffe6 	b	1ec <__d_rehash+0x24>

...

0000000000002d10 <__d_lookup_rcu>:
{
    2d10:	a9b97bfd 	stp	x29, x30, [sp, #-112]!
    2d14:	aa0103e3 	mov	x3, x1
	return dentry_hashtable + (hash >> d_hash_shift);
    2d18:	90000004 	adrp	x4, 0 <find_submount>
			2d18: R_AARCH64_ADR_PREL_PG_HI21	.data..read_mostly
{
    2d1c:	910003fd 	mov	x29, sp
    2d20:	a90153f3 	stp	x19, x20, [sp, #16]
	return dentry_hashtable + (hash >> d_hash_shift);
    2d24:	91000081 	add	x1, x4, #0x0
			2d24: R_AARCH64_ADD_ABS_LO12_NC	.data..read_mostly
{
    2d28:	a9025bf5 	stp	x21, x22, [sp, #32]
    2d2c:	a9046bf9 	stp	x25, x26, [sp, #64]
	const unsigned char *str = name->name;
    2d30:	a9406476 	ldp	x22, x25, [x3]
	return dentry_hashtable + (hash >> d_hash_shift);
    2d34:	b9400821 	ldr	w1, [x1, #8]
    2d38:	f9400084 	ldr	x4, [x4]
			2d38: R_AARCH64_LDST64_ABS_LO12_NC	.data..read_mostly
    2d3c:	1ac126c1 	lsr	w1, w22, w1
	__READ_ONCE_SIZE;
    2d40:	f8617893 	ldr	x19, [x4, x1, lsl #3]
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2d44:	f27ffa73 	ands	x19, x19, #0xfffffffffffffffe
    2d48:	54000920 	b.eq	2e6c <__d_lookup_rcu+0x15c>  // b.none
    2d4c:	aa0003f5 	mov	x21, x0
			if (dentry_cmp(dentry, str, hashlen_len(hashlen)) != 0)
    2d50:	d360feda 	lsr	x26, x22, #32
    2d54:	a90363f7 	stp	x23, x24, [sp, #48]
    2d58:	aa0203f8 	mov	x24, x2
    2d5c:	d3608ad7 	ubfx	x23, x22, #32, #3
    2d60:	a90573fb 	stp	x27, x28, [sp, #80]
    2d64:	2a1603fc 	mov	w28, w22
	mask = bytemask_from_count(tcount);
    2d68:	9280001b 	mov	x27, #0xffffffffffffffff    	// #-1
    2d6c:	14000003 	b	2d78 <__d_lookup_rcu+0x68>
    2d70:	f9400273 	ldr	x19, [x19]
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2d74:	b4000793 	cbz	x19, 2e64 <__d_lookup_rcu+0x154>
    2d78:	b85fc265 	ldur	w5, [x19, #-4]
	smp_rmb();
    2d7c:	d50339bf 	dmb	ishld
		if (dentry->d_parent != parent)
    2d80:	f9400a64 	ldr	x4, [x19, #16]
    2d84:	d1002260 	sub	x0, x19, #0x8
    2d88:	eb0402bf 	cmp	x21, x4
    2d8c:	54ffff21 	b.ne	2d70 <__d_lookup_rcu+0x60>  // b.any
	return ret & ~1;
    2d90:	121f78b4 	and	w20, w5, #0xfffffffe
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2d94:	aa0003e9 	mov	x9, x0
		if (d_unhashed(dentry))
    2d98:	f9400664 	ldr	x4, [x19, #8]
    2d9c:	b4fffea4 	cbz	x4, 2d70 <__d_lookup_rcu+0x60>
		if (unlikely(parent->d_flags & DCACHE_OP_COMPARE)) {
    2da0:	b94002a4 	ldr	w4, [x21]
    2da4:	37080404 	tbnz	w4, #1, 2e24 <__d_lookup_rcu+0x114>
			if (dentry->d_name.hash_len != hashlen)
    2da8:	f9401000 	ldr	x0, [x0, #32]
    2dac:	eb16001f 	cmp	x0, x22
    2db0:	54fffe01 	b.ne	2d70 <__d_lookup_rcu+0x60>  // b.any
    2db4:	f9401265 	ldr	x5, [x19, #32]
	const unsigned char *cs = READ_ONCE(dentry->d_name.name);
    2db8:	2a1a03e6 	mov	w6, w26
    2dbc:	cb050328 	sub	x8, x25, x5
    2dc0:	14000006 	b	2dd8 <__d_lookup_rcu+0xc8>
		cs += sizeof(unsigned long);
    2dc4:	910020a5 	add	x5, x5, #0x8
		if (unlikely(a != b))
    2dc8:	eb07001f 	cmp	x0, x7
    2dcc:	54fffd21 	b.ne	2d70 <__d_lookup_rcu+0x60>  // b.any
		if (!tcount)
    2dd0:	710020c6 	subs	w6, w6, #0x8
    2dd4:	54000160 	b.eq	2e00 <__d_lookup_rcu+0xf0>  // b.none
		cs += sizeof(unsigned long);
    2dd8:	8b0800a4 	add	x4, x5, x8
		if (tcount < sizeof(unsigned long))
    2ddc:	6b1700df 	cmp	w6, w23
static inline unsigned long load_unaligned_zeropad(const void *addr)
{
	unsigned long ret, offset;

	/* Load word from unaligned pointer addr */
	asm(
    2de0:	f9400087 	ldr	x7, [x4]

static __no_kasan_or_inline
unsigned long read_word_at_a_time(const void *addr)
{
	kasan_check_read(addr, 1);
	return *(unsigned long *)addr;
    2de4:	f94000a0 	ldr	x0, [x5]
    2de8:	54fffee1 	b.ne	2dc4 <__d_lookup_rcu+0xb4>  // b.any
	mask = bytemask_from_count(tcount);
    2dec:	531d72e1 	lsl	w1, w23, #3
	return unlikely(!!((a ^ b) & mask));
    2df0:	ca070000 	eor	x0, x0, x7
	mask = bytemask_from_count(tcount);
    2df4:	9ac12361 	lsl	x1, x27, x1
			if (dentry_cmp(dentry, str, hashlen_len(hashlen)) != 0)
    2df8:	ea21001f 	bics	xzr, x0, x1
    2dfc:	54fffba1 	b.ne	2d70 <__d_lookup_rcu+0x60>  // b.any
		*seqp = seq;
    2e00:	b9000314 	str	w20, [x24]
}
    2e04:	aa0903e0 	mov	x0, x9
    2e08:	a94153f3 	ldp	x19, x20, [sp, #16]
    2e0c:	a9425bf5 	ldp	x21, x22, [sp, #32]
		return dentry;
    2e10:	a94363f7 	ldp	x23, x24, [sp, #48]
}
    2e14:	a9446bf9 	ldp	x25, x26, [sp, #64]
		return dentry;
    2e18:	a94573fb 	ldp	x27, x28, [sp, #80]
}
    2e1c:	a8c77bfd 	ldp	x29, x30, [sp], #112
    2e20:	d65f03c0 	ret
			if (dentry->d_name.hash != hashlen_hash(hashlen))
    2e24:	b9402001 	ldr	w1, [x0, #32]
    2e28:	6b01039f 	cmp	w28, w1
    2e2c:	54fffa21 	b.ne	2d70 <__d_lookup_rcu+0x60>  // b.any
			tlen = dentry->d_name.len;
    2e30:	b9402401 	ldr	w1, [x0, #36]
			tname = dentry->d_name.name;
    2e34:	f9401402 	ldr	x2, [x0, #40]
	smp_rmb();
    2e38:	d50339bf 	dmb	ishld
	return unlikely(s->sequence != start);
    2e3c:	b85fc264 	ldur	w4, [x19, #-4]
			if (read_seqcount_retry(&dentry->d_seq, seq)) {
    2e40:	6b04029f 	cmp	w20, w4
    2e44:	54000221 	b.ne	2e88 <__d_lookup_rcu+0x178>  // b.any
			if (parent->d_op->d_compare(dentry,
    2e48:	f94032a4 	ldr	x4, [x21, #96]
    2e4c:	a90627e3 	stp	x3, x9, [sp, #96]
    2e50:	f9400c84 	ldr	x4, [x4, #24]
    2e54:	d63f0080 	blr	x4
    2e58:	a94627e3 	ldp	x3, x9, [sp, #96]
    2e5c:	34fffd20 	cbz	w0, 2e00 <__d_lookup_rcu+0xf0>
    2e60:	17ffffc4 	b	2d70 <__d_lookup_rcu+0x60>
    2e64:	a94363f7 	ldp	x23, x24, [sp, #48]
    2e68:	a94573fb 	ldp	x27, x28, [sp, #80]
	return NULL;
    2e6c:	d2800009 	mov	x9, #0x0                   	// #0
}
    2e70:	aa0903e0 	mov	x0, x9
    2e74:	a94153f3 	ldp	x19, x20, [sp, #16]
    2e78:	a9425bf5 	ldp	x21, x22, [sp, #32]
    2e7c:	a9446bf9 	ldp	x25, x26, [sp, #64]
    2e80:	a8c77bfd 	ldp	x29, x30, [sp], #112
    2e84:	d65f03c0 	ret
    2e88:	d503203f 	yield
	__READ_ONCE_SIZE;
    2e8c:	b85fc265 	ldur	w5, [x19, #-4]
	smp_rmb();
    2e90:	d50339bf 	dmb	ishld
		if (dentry->d_parent != parent)
    2e94:	f9400c01 	ldr	x1, [x0, #24]
	return ret & ~1;
    2e98:	121f78b4 	and	w20, w5, #0xfffffffe
    2e9c:	eb15003f 	cmp	x1, x21
    2ea0:	54fff681 	b.ne	2d70 <__d_lookup_rcu+0x60>  // b.any
    2ea4:	17ffffbd 	b	2d98 <__d_lookup_rcu+0x88>

0000000000002ea8 <__d_lookup>:
{
    2ea8:	a9b97bfd 	stp	x29, x30, [sp, #-112]!
	return dentry_hashtable + (hash >> d_hash_shift);
    2eac:	90000002 	adrp	x2, 0 <find_submount>
			2eac: R_AARCH64_ADR_PREL_PG_HI21	.data..read_mostly
    2eb0:	91000043 	add	x3, x2, #0x0
			2eb0: R_AARCH64_ADD_ABS_LO12_NC	.data..read_mostly
{
    2eb4:	910003fd 	mov	x29, sp
    2eb8:	a90573fb 	stp	x27, x28, [sp, #80]
    2ebc:	aa0103fc 	mov	x28, x1
    2ec0:	a90153f3 	stp	x19, x20, [sp, #16]
    2ec4:	a90363f7 	stp	x23, x24, [sp, #48]
    2ec8:	a9046bf9 	stp	x25, x26, [sp, #64]
    2ecc:	aa0003fa 	mov	x26, x0
	unsigned int hash = name->hash;
    2ed0:	b9400397 	ldr	w23, [x28]
	return dentry_hashtable + (hash >> d_hash_shift);
    2ed4:	b9400860 	ldr	w0, [x3, #8]
    2ed8:	f9400041 	ldr	x1, [x2]
			2ed8: R_AARCH64_LDST64_ABS_LO12_NC	.data..read_mostly
    2edc:	1ac026e0 	lsr	w0, w23, w0
    2ee0:	f8607833 	ldr	x19, [x1, x0, lsl #3]
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2ee4:	f27ffa73 	ands	x19, x19, #0xfffffffffffffffe
    2ee8:	54000320 	b.eq	2f4c <__d_lookup+0xa4>  // b.none
	smp_store_release(&lock->locked, 0);
    2eec:	5280001b 	mov	w27, #0x0                   	// #0
	mask = bytemask_from_count(tcount);
    2ef0:	92800018 	mov	x24, #0xffffffffffffffff    	// #-1
    2ef4:	a9025bf5 	stp	x21, x22, [sp, #32]
    2ef8:	d2800016 	mov	x22, #0x0                   	// #0
    2efc:	52800035 	mov	w21, #0x1                   	// #1
		if (dentry->d_name.hash != hash)
    2f00:	b9401a62 	ldr	w2, [x19, #24]
    2f04:	d1002274 	sub	x20, x19, #0x8
    2f08:	6b17005f 	cmp	w2, w23
    2f0c:	540001a1 	b.ne	2f40 <__d_lookup+0x98>  // b.any
    2f10:	91014279 	add	x25, x19, #0x50
    2f14:	f9800331 	prfm	pstl1strm, [x25]
    2f18:	885fff21 	ldaxr	w1, [x25]
    2f1c:	4a160020 	eor	w0, w1, w22
    2f20:	35000060 	cbnz	w0, 2f2c <__d_lookup+0x84>
    2f24:	88007f35 	stxr	w0, w21, [x25]
    2f28:	35ffff80 	cbnz	w0, 2f18 <__d_lookup+0x70>
    2f2c:	35000521 	cbnz	w1, 2fd0 <__d_lookup+0x128>
		if (dentry->d_parent != parent)
    2f30:	f9400e82 	ldr	x2, [x20, #24]
    2f34:	eb1a005f 	cmp	x2, x26
    2f38:	540001a0 	b.eq	2f6c <__d_lookup+0xc4>  // b.none
    2f3c:	089fff3b 	stlrb	w27, [x25]
    2f40:	f9400273 	ldr	x19, [x19]
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2f44:	b5fffdf3 	cbnz	x19, 2f00 <__d_lookup+0x58>
    2f48:	a9425bf5 	ldp	x21, x22, [sp, #32]
	struct dentry *found = NULL;
    2f4c:	d2800008 	mov	x8, #0x0                   	// #0
}
    2f50:	aa0803e0 	mov	x0, x8
    2f54:	a94153f3 	ldp	x19, x20, [sp, #16]
    2f58:	a94363f7 	ldp	x23, x24, [sp, #48]
    2f5c:	a9446bf9 	ldp	x25, x26, [sp, #64]
    2f60:	a94573fb 	ldp	x27, x28, [sp, #80]
    2f64:	a8c77bfd 	ldp	x29, x30, [sp], #112
    2f68:	d65f03c0 	ret
		if (d_unhashed(dentry))
    2f6c:	f9400660 	ldr	x0, [x19, #8]
    2f70:	b4fffe60 	cbz	x0, 2f3c <__d_lookup+0x94>
	if (likely(!(parent->d_flags & DCACHE_OP_COMPARE))) {
    2f74:	b9400340 	ldr	w0, [x26]
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2f78:	aa1403e8 	mov	x8, x20
	if (likely(!(parent->d_flags & DCACHE_OP_COMPARE))) {
    2f7c:	b9402681 	ldr	w1, [x20, #36]
    2f80:	370802e0 	tbnz	w0, #1, 2fdc <__d_lookup+0x134>
		if (dentry->d_name.len != name->len)
    2f84:	b9400784 	ldr	w4, [x28, #4]
    2f88:	6b04003f 	cmp	w1, w4
    2f8c:	54fffd81 	b.ne	2f3c <__d_lookup+0x94>  // b.any
		return dentry_cmp(dentry, name->name, name->len) == 0;
    2f90:	f9400787 	ldr	x7, [x28, #8]
static inline int dentry_string_cmp(const unsigned char *cs, const unsigned char *ct, unsigned tcount)
    2f94:	12000881 	and	w1, w4, #0x7
    2f98:	f9401265 	ldr	x5, [x19, #32]
    2f9c:	cb0500e7 	sub	x7, x7, x5
    2fa0:	14000003 	b	2fac <__d_lookup+0x104>
		if (!tcount)
    2fa4:	71002084 	subs	w4, w4, #0x8
    2fa8:	54000300 	b.eq	3008 <__d_lookup+0x160>  // b.none
		cs += sizeof(unsigned long);
    2fac:	8b0700a2 	add	x2, x5, x7
		if (tcount < sizeof(unsigned long))
    2fb0:	6b04003f 	cmp	w1, w4
    2fb4:	f9400046 	ldr	x6, [x2]
	return *(unsigned long *)addr;
    2fb8:	f94000a0 	ldr	x0, [x5]
    2fbc:	54000340 	b.eq	3024 <__d_lookup+0x17c>  // b.none
		cs += sizeof(unsigned long);
    2fc0:	910020a5 	add	x5, x5, #0x8
		if (unlikely(a != b))
    2fc4:	eb06001f 	cmp	x0, x6
    2fc8:	54fffee0 	b.eq	2fa4 <__d_lookup+0xfc>  // b.none
    2fcc:	17ffffdc 	b	2f3c <__d_lookup+0x94>
	queued_spin_lock_slowpath(lock, val);
    2fd0:	aa1903e0 	mov	x0, x25
    2fd4:	94000000 	bl	0 <queued_spin_lock_slowpath>
			2fd4: R_AARCH64_CALL26	queued_spin_lock_slowpath
    2fd8:	17ffffd6 	b	2f30 <__d_lookup+0x88>
	return parent->d_op->d_compare(dentry,
    2fdc:	f9403340 	ldr	x0, [x26, #96]
    2fe0:	aa1c03e3 	mov	x3, x28
    2fe4:	f9401682 	ldr	x2, [x20, #40]
    2fe8:	f90037f4 	str	x20, [sp, #104]
    2fec:	f9400c04 	ldr	x4, [x0, #24]
    2ff0:	aa1403e0 	mov	x0, x20
    2ff4:	d63f0080 	blr	x4
				       name) == 0;
    2ff8:	7100001f 	cmp	w0, #0x0
    2ffc:	1a9f17e0 	cset	w0, eq  // eq = none
    3000:	f94037e8 	ldr	x8, [sp, #104]
		if (!d_same_name(dentry, parent, name))
    3004:	34fff9c0 	cbz	w0, 2f3c <__d_lookup+0x94>
		dentry->d_lockref.count++;
    3008:	b9405e80 	ldr	w0, [x20, #92]
	smp_store_release(&lock->locked, 0);
    300c:	52800001 	mov	w1, #0x0                   	// #0
    3010:	11000400 	add	w0, w0, #0x1
    3014:	b9005e80 	str	w0, [x20, #92]
    3018:	089fff21 	stlrb	w1, [x25]
}
    301c:	a9425bf5 	ldp	x21, x22, [sp, #32]
    3020:	17ffffcc 	b	2f50 <__d_lookup+0xa8>
	mask = bytemask_from_count(tcount);
    3024:	531d7021 	lsl	w1, w1, #3
	return unlikely(!!((a ^ b) & mask));
    3028:	ca060000 	eor	x0, x0, x6
	mask = bytemask_from_count(tcount);
    302c:	9ac12301 	lsl	x1, x24, x1
    3030:	ea21001f 	bics	xzr, x0, x1
    3034:	1a9f17e0 	cset	w0, eq  // eq = none
		if (!d_same_name(dentry, parent, name))
    3038:	34fff820 	cbz	w0, 2f3c <__d_lookup+0x94>
    303c:	17fffff3 	b	3008 <__d_lookup+0x160>

0000000000003040 <d_lookup>:
{
    3040:	a9bd7bfd 	stp	x29, x30, [sp, #-48]!
    3044:	910003fd 	mov	x29, sp
    3048:	a90153f3 	stp	x19, x20, [sp, #16]
    304c:	90000013 	adrp	x19, 0 <find_submount>
			304c: R_AARCH64_ADR_PREL_PG_HI21	.data..cacheline_aligned
    3050:	aa0103f4 	mov	x20, x1
    3054:	91000273 	add	x19, x19, #0x0
			3054: R_AARCH64_ADD_ABS_LO12_NC	.data..cacheline_aligned
    3058:	a9025bf5 	stp	x21, x22, [sp, #32]
    305c:	aa0003f5 	mov	x21, x0
	__READ_ONCE_SIZE;
    3060:	b9400276 	ldr	w22, [x19]
	if (unlikely(ret & 1)) {
    3064:	370001d6 	tbnz	w22, #0, 309c <d_lookup+0x5c>
	smp_rmb();
    3068:	d50339bf 	dmb	ishld
		dentry = __d_lookup(parent, name);
    306c:	aa1403e1 	mov	x1, x20
    3070:	aa1503e0 	mov	x0, x21
    3074:	94000000 	bl	2ea8 <__d_lookup>
			3074: R_AARCH64_CALL26	__d_lookup
		if (dentry)
    3078:	b50000a0 	cbnz	x0, 308c <d_lookup+0x4c>
	smp_rmb();
    307c:	d50339bf 	dmb	ishld
	} while (read_seqretry(&rename_lock, seq));
    3080:	b9400261 	ldr	w1, [x19]
    3084:	6b16003f 	cmp	w1, w22
    3088:	54fffec1 	b.ne	3060 <d_lookup+0x20>  // b.any
}
    308c:	a94153f3 	ldp	x19, x20, [sp, #16]
    3090:	a9425bf5 	ldp	x21, x22, [sp, #32]
    3094:	a8c37bfd 	ldp	x29, x30, [sp], #48
    3098:	d65f03c0 	ret
    309c:	d503203f 	yield
    30a0:	17fffff0 	b	3060 <d_lookup+0x20>
    30a4:	d503201f 	nop



* Re: d_lookup: Unable to handle kernel paging request
  2019-05-28  9:38       ` Vicente Bergas
@ 2019-06-18 18:35         ` Al Viro
  2019-06-18 18:48           ` Al Viro
  2019-06-19 12:42           ` Vicente Bergas
  0 siblings, 2 replies; 19+ messages in thread
From: Al Viro @ 2019-06-18 18:35 UTC (permalink / raw)
  To: Vicente Bergas; +Cc: linux-fsdevel, linux-kernel

On Tue, May 28, 2019 at 11:38:43AM +0200, Vicente Bergas wrote:
> On Wednesday, May 22, 2019 6:29:46 PM CEST, Al Viro wrote:
> > ...
> > IOW, here we have also run into bogus hlist forward pointer or head -
> > same 0x1000000 in one case and 0x0000880001000000 in two others.
> > 
> > Have you tried to see if KASAN catches anything on those loads?
> > Use-after-free, for example...  Another thing to try: slap
> > 	WARN_ON(entry->d_flags & DCACHE_NORCU);
> > in __d_rehash() and see if it triggers.
> 
> Hi Al,
> after 5 days with v5.2-rc1 + KASAN + WARN_ON I could not reproduce the issue.
> Nor during the first day running v5.2-rc2 + WARN_ON. But today it happened 6
> times. So, there is no KASAN this time, and the WARN_ON, although present, did
> not trigger.
> The first trace happened while untarring a big file into tmpfs. The other
> five while running "git pull -r" on several repos on f2fs.
> 
> Regards,
>  Vicenç.
> 

__d_lookup() running into &dentry->d_hash == 0x01000000 at some point in hash chain
and trying to look at ->d_name.hash:

> pc : __d_lookup+0x58/0x198
> lr : d_lookup+0x38/0x68
> sp : ffff000012663b90
> x29: ffff000012663b90 x28: ffff000012663d58 x27: 0000000000000000 x26:
> ffff8000ae7cc900 x25: 0000000000000001 x24: ffffffffffffffff x23:
> 00000000ce9c8f81 x22: 0000000000000000 x21: 0000000000000001 x20:
> ffff000012663d58 x19: 0000000001000000 x18: 0000000000000000 x17:
> 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14:
> 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11:
> fefefefefefefeff x10: b4fea3d0a3a4b4fe x9 : d237122a91454b69 x8 :
> a0591ae4450bed6a x7 : 5845a2c80f79d4e7 x6 : 0000000000000004 x5 :
> 0000000000000000 x4 : ffff000012663d58 x3 : ffff000010828a68 x2 :
> ffff000010828000 x1 : ffff8000f3000000 x0 : 00000000000674e4 Call trace:

__d_lookup_rcu() running into &dentry->d_hash == 0x01000000 at some point in hash
chain and trying to look at ->d_seq:

> pc : __d_lookup_rcu+0x68/0x198
> lr : lookup_fast+0x44/0x2e8
> sp : ffff0000130b3b60
> x29: ffff0000130b3b60 x28: 00000000ce99d070 x27: ffffffffffffffff x26:
> 0000000000000026 x25: ffff8000ecec6030 x24: ffff0000130b3c2c x23:
> 0000000000000006 x22: 00000026ce99d070 x21: ffff8000811f3d80 x20:
> 0000000000020000 x19: 0000000001000000 x18: 0000000000000000 x17:
> 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14:
> 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11:
> fefefefefefefeff x10: e4d0b2e6e2b4b6e9 x9 : 5096e90463dfacb0 x8 :
> 2b4f8961c30ebc93 x7 : aec349fb204a7256 x6 : 4fd9025392b5761a x5 :
> 02ff010101030100 x4 : ffff8000f3000000 x3 : ffff0000130b3d58 x2 :
> ffff0000130b3c2c x1 : 00000000000674ce x0 : ffff8000811f3d80 Call trace:

__d_lookup_rcu() running into &dentry->d_hash == 0x0000880001000000 at some point
in hash chain and trying to look at ->d_seq:

> pc : __d_lookup_rcu+0x68/0x198
> lr : lookup_fast+0x44/0x2e8
> sp : ffff00001325ba90
> x29: ffff00001325ba90 x28: 00000000ce99f075 x27: ffffffffffffffff x26:
> 0000000000000007 x25: ffff8000ecec402a x24: ffff00001325bb5c x23:
> 0000000000000007 x22: 00000007ce99f075 x21: ffff80007a810c00 x20:
> 0000000000000000 x19: 0000880001000000 x18: 0000000000000000 x17:
> 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14:
> 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11:
> fefefefefefefeff x10: d0bbbcbfa6b2b9bc x9 : 0000000000000000 x8 :
> ffff80007a810c00 x7 : 6cad9ff29d8de19c x6 : ff94ec6f0ce3656c x5 :
> ffff8000ecec402a x4 : ffff8000f3000000 x3 : ffff00001325bc78 x2 :
> ffff00001325bb5c x1 : 00000000000674cf x0 : ffff80007a810c00 Call trace:

ditto

> pc : __d_lookup_rcu+0x68/0x198
> lr : lookup_fast+0x44/0x2e8
> sp : ffff000012a3ba90
> x29: ffff000012a3ba90 x28: 00000000ce99f075 x27: ffffffffffffffff x26:
> 0000000000000007 x25: ffff8000ecec702a x24: ffff000012a3bb5c x23:
> 0000000000000007 x22: 00000007ce99f075 x21: ffff80007a810c00 x20:
> 0000000000000000 x19: 0000880001000000 x18: 0000000000000000 x17:
> 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14:
> 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11:
> fefefefefefefeff x10: d0bbbcbfa6b2b9bc x9 : 0000000000000000 x8 :
> ffff80007a810c00 x7 : 6cad9ff29d8de19c x6 : ff94ec6f0ce3656c x5 :
> ffff8000ecec702a x4 : ffff8000f3000000 x3 : ffff000012a3bc78 x2 :
> ffff000012a3bb5c x1 : 00000000000674cf x0 : ffff80007a810c00 Call trace:

ditto

> pc : __d_lookup_rcu+0x68/0x198
> lr : lookup_fast+0x44/0x2e8
> sp : ffff0000132bba90
> x29: ffff0000132bba90 x28: 00000000ce99e1a6 x27: ffffffffffffffff x26:
> 000000000000000c x25: ffff8000f21dd036 x24: ffff0000132bbb5c x23:
> 0000000000000004 x22: 0000000cce99e1a6 x21: ffff800074dd8d80 x20:
> 0000000000000000 x19: 0000880001000000 x18: 0000000000000000 x17:
> 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14:
> 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11:
> fefefefefefefeff x10: d0d0d0d0b8fea4b3 x9 : 40bcd8645005512e x8 :
> c433ade89ebd10f9 x7 : c6b69091eeb194d2 x6 : 848f758ca69635b4 x5 :
> ffff8000f21dd036 x4 : ffff8000f3000000 x3 : ffff0000132bbc78 x2 :
> ffff0000132bbb5c x1 : 00000000000674cf x0 : ffff800074dd8d80 Call trace:

... and ditto:

> pc : __d_lookup_rcu+0x68/0x198
> lr : lookup_fast+0x44/0x2e8
> sp : ffff000013263a90
> x29: ffff000013263a90 x28: 00000000ce99e1a6 x27: ffffffffffffffff x26:
> 000000000000000c x25: ffff8000f0a6f036 x24: ffff000013263b5c x23:
> 0000000000000004 x22: 0000000cce99e1a6 x21: ffff800074dd8d80 x20:
> 0000000000000000 x19: 0000880001000000 x18: 0000000000000000 x17:
> 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14:
> 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11:
> fefefefefefefeff x10: d0d0d0d0b8fea4b3 x9 : 40bcd8645005512e x8 :
> c433ade89ebd10f9 x7 : c6b69091eeb194d2 x6 : 848f758ca69635b4 x5 :
> ffff8000f0a6f036 x4 : ffff8000f3000000 x3 : ffff000013263c78 x2 :
> ffff000013263b5c x1 : 00000000000674cf x0 : ffff800074dd8d80 Call trace:


All of those run under rcu_read_lock() and no dentry with DCACHE_NORCU has
ever been inserted into a hash chain, so it doesn't look like a plain
use-after-free.  Could you try something like the following to see a bit
more about where it comes from?  

So far it looks like something is buggering a forward reference
in hash chain in a fairly specific way - the values seen had been
00000000010000000 and
00008800010000000.  Does that smell like anything from arm64-specific
data structures (PTE, etc.)?

Alternatively, we might've gone off rails a step (or more) before,
with the previous iteration going through bogus, but at least mapped
address - the one that has never been a dentry in the first place.


diff --git a/fs/dcache.c b/fs/dcache.c
index c435398f2c81..cb555edb5b55 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2114,6 +2114,22 @@ static inline bool d_same_name(const struct dentry *dentry,
 				       name) == 0;
 }
 
+static void dump(struct dentry *dentry)
+{
+	int i;
+	if (!dentry) {
+		printk(KERN_ERR "list fucked in head");
+		return;
+	}
+	printk(KERN_ERR "fucked dentry[%p]: d_hash.next = %p, flags = %x, count = %d",
+			dentry, dentry->d_hash.next, dentry->d_flags,
+			dentry->d_lockref.count
+			);
+	for (i = 0; i < sizeof(struct dentry); i++)
+		printk(KERN_CONT "%c%02x", i & 31 ? ' ' : '\n',
+			((unsigned char *)dentry)[i]);
+}
+
 /**
  * __d_lookup_rcu - search for a dentry (racy, store-free)
  * @parent: parent dentry
@@ -2151,7 +2167,7 @@ struct dentry *__d_lookup_rcu(const struct dentry *parent,
 	const unsigned char *str = name->name;
 	struct hlist_bl_head *b = d_hash(hashlen_hash(hashlen));
 	struct hlist_bl_node *node;
-	struct dentry *dentry;
+	struct dentry *dentry, *last = NULL;
 
 	/*
 	 * Note: There is significant duplication with __d_lookup_rcu which is
@@ -2176,6 +2192,10 @@ struct dentry *__d_lookup_rcu(const struct dentry *parent,
 	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
 		unsigned seq;
 
+		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
+			dump(last);
+		last = dentry;
+
 seqretry:
 		/*
 		 * The dentry sequence count protects us from concurrent
@@ -2274,7 +2294,7 @@ struct dentry *__d_lookup(const struct dentry *parent, const struct qstr *name)
 	struct hlist_bl_head *b = d_hash(hash);
 	struct hlist_bl_node *node;
 	struct dentry *found = NULL;
-	struct dentry *dentry;
+	struct dentry *dentry, *last = NULL;
 
 	/*
 	 * Note: There is significant duplication with __d_lookup_rcu which is
@@ -2300,6 +2320,10 @@ struct dentry *__d_lookup(const struct dentry *parent, const struct qstr *name)
 	
 	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
 
+		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
+			dump(last);
+		last = dentry;
+
 		if (dentry->d_name.hash != hash)
 			continue;
 

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-18 18:35         ` Al Viro
@ 2019-06-18 18:48           ` Al Viro
  2019-06-19 12:42           ` Vicente Bergas
  1 sibling, 0 replies; 19+ messages in thread
From: Al Viro @ 2019-06-18 18:48 UTC (permalink / raw)
  To: Vicente Bergas; +Cc: linux-fsdevel, linux-kernel

On Tue, Jun 18, 2019 at 07:35:48PM +0100, Al Viro wrote:

> So far it looks like something is buggering a forward reference
> in hash chain in a fairly specific way - the values seen had been
> 00000000010000000 and
> 00008800010000000.  Does that smell like anything from arm64-specific
> data structures (PTE, etc.)?

make that 0000000001000000 and 0000880001000000 resp.  Tests in the
patch are correct, just mistyped it here...

> Alternatively, we might've gone off rails a step (or more) before,
> with the previous iteration going through bogus, but at least mapped
> address - the one that has never been a dentry in the first place.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-18 18:35         ` Al Viro
  2019-06-18 18:48           ` Al Viro
@ 2019-06-19 12:42           ` Vicente Bergas
  2019-06-19 16:28             ` Al Viro
  1 sibling, 1 reply; 19+ messages in thread
From: Vicente Bergas @ 2019-06-19 12:42 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-fsdevel, linux-kernel

On Tuesday, June 18, 2019 8:35:48 PM CEST, Al Viro wrote:
> On Tue, May 28, 2019 at 11:38:43AM +0200, Vicente Bergas wrote:
>> On Wednesday, May 22, 2019 6:29:46 PM CEST, Al Viro wrote: ...
>
> __d_lookup() running into &dentry->d_hash == 0x01000000 at some 
> point in hash chain
> and trying to look at ->d_name.hash:
>
>> pc : __d_lookup+0x58/0x198
>> lr : d_lookup+0x38/0x68
>> sp : ffff000012663b90
>> x29: ffff000012663b90 x28: ffff000012663d58 x27: 0000000000000000 x26:
>> ffff8000ae7cc900 x25: 0000000000000001 x24: ffffffffffffffff x23: ...
>
> __d_lookup_rcu() running into &dentry->d_hash == 0x01000000 at 
> some point in hash
> chain and trying to look at ->d_seq:
>
>> pc : __d_lookup_rcu+0x68/0x198
>> lr : lookup_fast+0x44/0x2e8
>> sp : ffff0000130b3b60
>> x29: ffff0000130b3b60 x28: 00000000ce99d070 x27: ffffffffffffffff x26:
>> 0000000000000026 x25: ffff8000ecec6030 x24: ffff0000130b3c2c x23: ...
>
> __d_lookup_rcu() running into &dentry->d_hash == 
> 0x0000880001000000 at some point
> in hash chain and trying to look at ->d_seq:
>
>> pc : __d_lookup_rcu+0x68/0x198
>> lr : lookup_fast+0x44/0x2e8
>> sp : ffff00001325ba90
>> x29: ffff00001325ba90 x28: 00000000ce99f075 x27: ffffffffffffffff x26:
>> 0000000000000007 x25: ffff8000ecec402a x24: ffff00001325bb5c x23: ...
>
> ditto
>
>> pc : __d_lookup_rcu+0x68/0x198
>> lr : lookup_fast+0x44/0x2e8
>> sp : ffff000012a3ba90
>> x29: ffff000012a3ba90 x28: 00000000ce99f075 x27: ffffffffffffffff x26:
>> 0000000000000007 x25: ffff8000ecec702a x24: ffff000012a3bb5c x23: ...
>
> ditto
>
>> pc : __d_lookup_rcu+0x68/0x198
>> lr : lookup_fast+0x44/0x2e8
>> sp : ffff0000132bba90
>> x29: ffff0000132bba90 x28: 00000000ce99e1a6 x27: ffffffffffffffff x26:
>> 000000000000000c x25: ffff8000f21dd036 x24: ffff0000132bbb5c x23: ...
>
> ... and ditto:
>
>> pc : __d_lookup_rcu+0x68/0x198
>> lr : lookup_fast+0x44/0x2e8
>> sp : ffff000013263a90
>> x29: ffff000013263a90 x28: 00000000ce99e1a6 x27: ffffffffffffffff x26:
>> 000000000000000c x25: ffff8000f0a6f036 x24: ffff000013263b5c x23: ...
>
>
> All of those run under rcu_read_lock() and no dentry with DCACHE_NORCU has
> ever been inserted into a hash chain, so it doesn't look like a plain
> use-after-free.  Could you try something like the following to see a bit
> more about where it comes from?  
>
> So far it looks like something is buggering a forward reference
> in hash chain in a fairly specific way - the values seen had been
> 00000000010000000 and
> 00008800010000000.  Does that smell like anything from arm64-specific
> data structures (PTE, etc.)?
>
> Alternatively, we might've gone off rails a step (or more) before,
> with the previous iteration going through bogus, but at least mapped
> address - the one that has never been a dentry in the first place.
>
>
> diff --git a/fs/dcache.c b/fs/dcache.c
> index c435398f2c81..cb555edb5b55 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -2114,6 +2114,22 @@ static inline bool d_same_name(const 
> struct dentry *dentry,
>  				       name) == 0;
>  }
>  
> +static void dump(struct dentry *dentry)
> +{
> +	int i;
> +	if (!dentry) {
> +		printk(KERN_ERR "list fucked in head");
> +		return;
> +	}
> +	printk(KERN_ERR "fucked dentry[%p]: d_hash.next = %p, flags = 
> %x, count = %d",
> +			dentry, dentry->d_hash.next, dentry->d_flags,
> +			dentry->d_lockref.count
> +			);
> +	for (i = 0; i < sizeof(struct dentry); i++)
> +		printk(KERN_CONT "%c%02x", i & 31 ? ' ' : '\n',
> +			((unsigned char *)dentry)[i]);
> +}
> +
>  /**
>   * __d_lookup_rcu - search for a dentry (racy, store-free)
>   * @parent: parent dentry
> @@ -2151,7 +2167,7 @@ struct dentry *__d_lookup_rcu(const 
> struct dentry *parent,
>  	const unsigned char *str = name->name;
>  	struct hlist_bl_head *b = d_hash(hashlen_hash(hashlen));
>  	struct hlist_bl_node *node;
> -	struct dentry *dentry;
> +	struct dentry *dentry, *last = NULL;
>  
>  	/*
>  	 * Note: There is significant duplication with __d_lookup_rcu which is
> @@ -2176,6 +2192,10 @@ struct dentry *__d_lookup_rcu(const 
> struct dentry *parent,
>  	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
>  		unsigned seq;
>  
> +		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
> +			dump(last);
> +		last = dentry;
> +
>  seqretry:
>  		/*
>  		 * The dentry sequence count protects us from concurrent
> @@ -2274,7 +2294,7 @@ struct dentry *__d_lookup(const struct 
> dentry *parent, const struct qstr *name)
>  	struct hlist_bl_head *b = d_hash(hash);
>  	struct hlist_bl_node *node;
>  	struct dentry *found = NULL;
> -	struct dentry *dentry;
> +	struct dentry *dentry, *last = NULL;
>  
>  	/*
>  	 * Note: There is significant duplication with __d_lookup_rcu which is
> @@ -2300,6 +2320,10 @@ struct dentry *__d_lookup(const struct 
> dentry *parent, const struct qstr *name)
>  	
>  	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
>  
> +		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
> +			dump(last);
> +		last = dentry;
> +
>  		if (dentry->d_name.hash != hash)
>  			continue;

Hi Al,
i have been running the distro-provided kernel the last few weeks
and had no issues at all.
https://archlinuxarm.org/packages/aarch64/linux-aarch64
It is from the v5.1 branch and is compiled with gcc 8.3.

IIRC, i also tested
https://archlinuxarm.org/packages/aarch64/linux-aarch64-rc
v5.2-rc1 and v5.2-rc2 (which at that time were compiled with
gcc 8.2) with no issues.

This week tested v5.2-rc4 and v5.2-rc5 from archlinuxarm but
there are regressions unrelated to d_lookup.

At this point i was convinced it was a gcc 9.1 issue and had
nothing to do with the kernel, but anyways i gave your patch a try.
The tested kernel is v5.2-rc5-224-gbed3c0d84e7e and
it has been compiled with gcc 8.3.
The sentinel you put there has triggered!
So, it is not a gcc 9.1 issue.

In any case, i have no idea if those addresses are arm64-specific
in any way.

Regards,
  Vicenç.

list fucked in head
Unable to handle kernel paging request at virtual address 0000000000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=000000005c989000
[0000000000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#1] SMP
CPU: 4 PID: 2427 Comm: git Not tainted 5.2.0-rc5 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 60000005 (nZCv daif -PAN -UAO)
pc : __d_lookup_rcu+0x90/0x1e8
lr : __d_lookup_rcu+0x84/0x1e8
sp : ffff000013413a90
x29: ffff000013413a90 x28: ffff000013413b5c 
x27: 0000000000000002 x26: ffff000013413c78 
x25: 0000001ac1084259 x24: 0000000000fffff8 
x23: 0000000001000000 x22: ffff8000586ed9c0 
x21: ffff8000586ed9c0 x20: 0000000000fffff8 
x19: 0000000001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: 0000000000000000 x10: ffffffffffffffff 
x9 : 00000000c1084259 x8 : ffff800054f31032 
x7 : 000000000000001a x6 : ffff00001090454b 
x5 : 0000000000000013 x4 : 0000000000000000 
x3 : 0000000000000000 x2 : 00000000ffffffff 
x1 : 00008000e6f46000 x0 : 0000000000000013 
Call trace:
 __d_lookup_rcu+0x90/0x1e8
 lookup_fast+0x44/0x300
 walk_component+0x34/0x2e0
 path_lookupat.isra.13+0x5c/0x1e0
 filename_lookup.part.19+0x6c/0xe8
 user_path_at_empty+0x4c/0x60
 vfs_statx+0x78/0xd8
 __se_sys_newfstatat+0x24/0x48
 __arm64_sys_newfstatat+0x18/0x20
 el0_svc_handler+0x94/0x138
 el0_svc+0x8/0xc
Code: 94000753 294c1fe9 9280000a f94037e8 (b85fc263) 
---[ end trace 93a444e9b6bc67e8 ]---
list fucked in head
Unable to handle kernel paging request at virtual address 0000000000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=000000005c989000
[0000000000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#2] SMP
CPU: 5 PID: 2424 Comm: git Tainted: G      D           5.2.0-rc5 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 60000005 (nZCv daif -PAN -UAO)
pc : __d_lookup_rcu+0x90/0x1e8
lr : __d_lookup_rcu+0x84/0x1e8
sp : ffff0000133fba90
x29: ffff0000133fba90 x28: ffff0000133fbb5c 
x27: 0000000000000002 x26: ffff0000133fbc78 
x25: 0000001ac1084259 x24: 0000000000fffff8 
x23: 0000000001000000 x22: ffff8000586ed9c0 
x21: ffff8000586ed9c0 x20: 0000000000fffff8 
x19: 0000000001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: 0000000000000000 x10: ffffffffffffffff 
x9 : 00000000c1084259 x8 : ffff800079d7b032 
x7 : 000000000000001a x6 : ffff00001090454b 
x5 : 0000000000000013 x4 : 0000000000000000 
x3 : 0000000000000000 x2 : 00000000ffffffff 
x1 : 00008000e6f5a000 x0 : 0000000000000013 
Call trace:
 __d_lookup_rcu+0x90/0x1e8
 lookup_fast+0x44/0x300
 walk_component+0x34/0x2e0
 path_lookupat.isra.13+0x5c/0x1e0
 filename_lookup.part.19+0x6c/0xe8
 user_path_at_empty+0x4c/0x60
 vfs_statx+0x78/0xd8
 __se_sys_newfstatat+0x24/0x48
 __arm64_sys_newfstatat+0x18/0x20
 el0_svc_handler+0x94/0x138
 el0_svc+0x8/0xc
Code: 94000753 294c1fe9 9280000a f94037e8 (b85fc263) 
---[ end trace 93a444e9b6bc67e9 ]---
list fucked in head
Unable to handle kernel paging request at virtual address 0000880000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=000000003ba5d000
[0000880000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#3] SMP
CPU: 2 PID: 2659 Comm: git Tainted: G      D           5.2.0-rc5 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 60000005 (nZCv daif -PAN -UAO)
pc : __d_lookup_rcu+0x90/0x1e8
lr : __d_lookup_rcu+0x84/0x1e8
sp : ffff0000135cba90
x29: ffff0000135cba90 x28: ffff0000135cbb5c 
x27: 0000000000000000 x26: ffff0000135cbc78 
x25: 00000010cb63a9bb x24: 0000880000fffff8 
x23: 0000000001000000 x22: ffff80003be53180 
x21: ffff80003be53180 x20: 0000880000fffff8 
x19: 0000880001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: 0000000000000000 x10: ffffffffffffffff 
x9 : 00000000cb63a9bb x8 : ffff8000f094102e 
x7 : 0000000000000010 x6 : ffff00001090454b 
x5 : 0000000000000013 x4 : 0000000000000000 
x3 : 0000000000000000 x2 : 00000000ffffffff 
x1 : 00008000e6f1e000 x0 : 0000000000000013 
Call trace:
 __d_lookup_rcu+0x90/0x1e8
 lookup_fast+0x44/0x300
 walk_component+0x34/0x2e0
 path_lookupat.isra.13+0x5c/0x1e0
 filename_lookup.part.19+0x6c/0xe8
 user_path_at_empty+0x4c/0x60
 vfs_statx+0x78/0xd8
 __se_sys_newfstatat+0x24/0x48
 __arm64_sys_newfstatat+0x18/0x20
 el0_svc_handler+0x94/0x138
 el0_svc+0x8/0xc
Code: 94000753 294c1fe9 9280000a f94037e8 (b85fc263) 
---[ end trace 93a444e9b6bc67ea ]---
list fucked in head
Unable to handle kernel paging request at virtual address 0000880000fffffc
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=000000003ba5d000
[0000880000fffffc] pgd=0000000000000000
Internal error: Oops: 96000004 [#4] SMP
CPU: 4 PID: 2658 Comm: git Tainted: G      D           5.2.0-rc5 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 60000005 (nZCv daif -PAN -UAO)
pc : __d_lookup_rcu+0x90/0x1e8
lr : __d_lookup_rcu+0x84/0x1e8
sp : ffff00001363ba90
x29: ffff00001363ba90 x28: ffff00001363bb5c 
x27: 0000000000000000 x26: ffff00001363bc78 
x25: 00000010cb63a9bb x24: 0000880000fffff8 
x23: 0000000001000000 x22: ffff80003be53180 
x21: ffff80003be53180 x20: 0000880000fffff8 
x19: 0000880001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: 0000000000000000 x10: ffffffffffffffff 
x9 : 00000000cb63a9bb x8 : ffff80004b94d02e 
x7 : 0000000000000010 x6 : ffff00001090454b 
x5 : 0000000000000013 x4 : 0000000000000000 
x3 : 0000000000000000 x2 : 00000000ffffffff 
x1 : 00008000e6f46000 x0 : 0000000000000013 
Call trace:
 __d_lookup_rcu+0x90/0x1e8
 lookup_fast+0x44/0x300
 walk_component+0x34/0x2e0
 path_lookupat.isra.13+0x5c/0x1e0
 filename_lookup.part.19+0x6c/0xe8
 user_path_at_empty+0x4c/0x60
 vfs_statx+0x78/0xd8
 __se_sys_newfstatat+0x24/0x48
 __arm64_sys_newfstatat+0x18/0x20
 el0_svc_handler+0x94/0x138
 el0_svc+0x8/0xc
Code: 94000753 294c1fe9 9280000a f94037e8 (b85fc263) 
---[ end trace 93a444e9b6bc67eb ]---
list fucked in head
Unable to handle kernel paging request at virtual address 0000000001000018
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=000000002091a000
[0000000001000018] pgd=0000000000000000
Internal error: Oops: 96000004 [#5] SMP
CPU: 4 PID: 3205 Comm: update_all_gits Tainted: G      D           5.2.0-rc5 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 60000005 (nZCv daif -PAN -UAO)
pc : __d_lookup+0x88/0x1d8
lr : __d_lookup+0x7c/0x1d8
sp : ffff000013dabaa0
x29: ffff000013dabaa0 x28: ffff000013dabbd8 
x27: ffff00001076f0f8 x26: ffff8000f2808780 
x25: 0000000000fffff8 x24: 0000000001000000 
x23: 00000000cb639d51 x22: 0000000000000000 
x21: 0000000000000001 x20: 0000000000fffff8 
x19: 0000000001000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: ffffffffffffffff x14: ffff000010898508 
x13: ffff000013dabbf8 x12: ffff000013dabbed 
x11: 0000000000000001 x10: ffff000013daba60 
x9 : ffff000013daba60 x8 : ffff000013daba60 
x7 : ffff000013daba60 x6 : ffffffffffffffff 
x5 : 0000000000000000 x4 : 0000000000000000 
x3 : 0000000000000000 x2 : 00000000ffffffff 
x1 : 00008000e6f46000 x0 : 0000000000000013 
Call trace:
 __d_lookup+0x88/0x1d8
 d_lookup+0x34/0x68
 d_hash_and_lookup+0x50/0x68
 proc_flush_task+0x9c/0x198
 release_task.part.3+0x68/0x4b8
 wait_consider_task+0x91c/0x9b0
 do_wait+0x120/0x1e0
 kernel_wait4+0x7c/0x140
 __se_sys_wait4+0x68/0xa8
 __arm64_sys_wait4+0x18/0x20
 el0_svc_handler+0x94/0x138
 el0_svc+0x8/0xc
Code: 940006db b9406fe5 92800006 d503201f (b9402282) 
---[ end trace 93a444e9b6bc67ec ]---
list fucked in head
Unable to handle kernel paging request at virtual address 0000880101000018
Mem abort info:
  ESR = 0x96000004
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000004
  CM = 0, WnR = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=000000009bddd000
[0000880101000018] pgd=0000000000000000
Internal error: Oops: 96000004 [#6] SMP
CPU: 5 PID: 3978 Comm: tar Tainted: G      D           5.2.0-rc5 #1
Hardware name: Sapphire-RK3399 Board (DT)
pstate: 60000005 (nZCv daif -PAN -UAO)
pc : __d_lookup+0x88/0x1d8
lr : __d_lookup+0x7c/0x1d8
sp : ffff000014dc3b90
x29: ffff000014dc3b90 x28: ffff000014dc3d58 
x27: ffff000014dc3d48 x26: ffff8000a77becc0 
x25: 0000880100fffff8 x24: 0000000001000000 
x23: 00000000c1086fd8 x22: 0000000000000000 
x21: 0000000000000001 x20: 0000880100fffff8 
x19: 0000880101000000 x18: 0000000000000000 
x17: 0000000000000000 x16: 0000000000000000 
x15: 0000000000000000 x14: 0000000000000000 
x13: 0000000000000000 x12: 0000000000000000 
x11: 0000000000000000 x10: ffff000014dc3b50 
x9 : ffff000014dc3b50 x8 : ffff000014dc3b50 
x7 : ffff000014dc3b50 x6 : ffffffffffffffff 
x5 : 0000000000000000 x4 : 0000000000000000 
x3 : 0000000000000000 x2 : 00000000ffffffff 
x1 : 00008000e6f5a000 x0 : 0000000000000013 
Call trace:
 __d_lookup+0x88/0x1d8
 d_lookup+0x34/0x68
 path_openat+0x528/0xfd0
 do_filp_open+0x60/0xc0
 do_sys_open+0x164/0x200
 __arm64_sys_openat+0x20/0x28
 el0_svc_handler+0x94/0x138
 el0_svc+0x8/0xc
Code: 940006db b9406fe5 92800006 d503201f (b9402282) 
---[ end trace 93a444e9b6bc67ed ]---

0000000000002d10 <__d_lookup_rcu>:
{
    2d10:	a9b97bfd 	stp	x29, x30, [sp, #-112]!
	return dentry_hashtable + (hash >> d_hash_shift);
    2d14:	90000003 	adrp	x3, 0 <d_shrink_del>
    2d18:	91000065 	add	x5, x3, #0x0
{
    2d1c:	910003fd 	mov	x29, sp
    2d20:	a90153f3 	stp	x19, x20, [sp, #16]
    2d24:	a90363f7 	stp	x23, x24, [sp, #48]
    2d28:	a9046bf9 	stp	x25, x26, [sp, #64]
	const unsigned char *str = name->name;
    2d2c:	a9402039 	ldp	x25, x8, [x1]
	return dentry_hashtable + (hash >> d_hash_shift);
    2d30:	f9400064 	ldr	x4, [x3]
    2d34:	b94008a3 	ldr	w3, [x5, #8]
    2d38:	1ac32723 	lsr	w3, w25, w3
	__READ_ONCE_SIZE;
    2d3c:	f8637893 	ldr	x19, [x4, x3, lsl #3]
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2d40:	f27ffa73 	ands	x19, x19, #0xfffffffffffffffe
    2d44:	54000420 	b.eq	2dc8 <__d_lookup_rcu+0xb8>  // b.none
			if (dentry_cmp(dentry, str, hashlen_len(hashlen)) != 0)
    2d48:	d360ff27 	lsr	x7, x25, #32
    2d4c:	2a1903e9 	mov	w9, w25
    2d50:	aa0103fa 	mov	x26, x1
    2d54:	a90573fb 	stp	x27, x28, [sp, #80]
    2d58:	aa0203fc 	mov	x28, x2
    2d5c:	120008fb 	and	w27, w7, #0x7
		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
    2d60:	52a02017 	mov	w23, #0x1000000             	// #16777216
	mask = bytemask_from_count(tcount);
    2d64:	9280000a 	mov	x10, #0xffffffffffffffff    	// #-1
    2d68:	a9025bf5 	stp	x21, x22, [sp, #32]
    2d6c:	aa0003f5 	mov	x21, x0
	struct dentry *dentry, *last = NULL;
    2d70:	d2800000 	mov	x0, #0x0                   	// #0
    2d74:	d503201f 	nop
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2d78:	d1002274 	sub	x20, x19, #0x8
		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
    2d7c:	6b17027f 	cmp	w19, w23
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2d80:	aa1403f8 	mov	x24, x20
		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
    2d84:	540000e1 	b.ne	2da0 <__d_lookup_rcu+0x90>  // b.any
    2d88:	290c1fe9 	stp	w9, w7, [sp, #96]
    2d8c:	f90037e8 	str	x8, [sp, #104]
			dump(last);
    2d90:	94000000 	bl	0 <d_shrink_del>
    2d94:	294c1fe9 	ldp	w9, w7, [sp, #96]
    2d98:	9280000a 	mov	x10, #0xffffffffffffffff    	// #-1
    2d9c:	f94037e8 	ldr	x8, [sp, #104]
    2da0:	b85fc263 	ldur	w3, [x19, #-4]
	smp_rmb();
    2da4:	d50339bf 	dmb	ishld
		if (dentry->d_parent != parent)
    2da8:	f9400e80 	ldr	x0, [x20, #24]
    2dac:	eb15001f 	cmp	x0, x21
    2db0:	540001a0 	b.eq	2de4 <__d_lookup_rcu+0xd4>  // b.none
    2db4:	f9400273 	ldr	x19, [x19]
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2db8:	aa1403e0 	mov	x0, x20
    2dbc:	b5fffdf3 	cbnz	x19, 2d78 <__d_lookup_rcu+0x68>
    2dc0:	a9425bf5 	ldp	x21, x22, [sp, #32]
    2dc4:	a94573fb 	ldp	x27, x28, [sp, #80]
	return NULL;
    2dc8:	d2800018 	mov	x24, #0x0                   	// #0
}
    2dcc:	aa1803e0 	mov	x0, x24
    2dd0:	a94153f3 	ldp	x19, x20, [sp, #16]
    2dd4:	a94363f7 	ldp	x23, x24, [sp, #48]
    2dd8:	a9446bf9 	ldp	x25, x26, [sp, #64]
    2ddc:	a8c77bfd 	ldp	x29, x30, [sp], #112
    2de0:	d65f03c0 	ret
		if (d_unhashed(dentry))
    2de4:	f9400660 	ldr	x0, [x19, #8]
    2de8:	b4fffe60 	cbz	x0, 2db4 <__d_lookup_rcu+0xa4>
		if (unlikely(parent->d_flags & DCACHE_OP_COMPARE)) {
    2dec:	b94002a0 	ldr	w0, [x21]
	return ret & ~1;
    2df0:	121f7876 	and	w22, w3, #0xfffffffe
    2df4:	370804a0 	tbnz	w0, #1, 2e88 <__d_lookup_rcu+0x178>
			if (dentry->d_name.hash_len != hashlen)
    2df8:	f9401280 	ldr	x0, [x20, #32]
    2dfc:	eb19001f 	cmp	x0, x25
    2e00:	54fffda1 	b.ne	2db4 <__d_lookup_rcu+0xa4>  // b.any
    2e04:	f9401264 	ldr	x4, [x19, #32]
	const unsigned char *cs = READ_ONCE(dentry->d_name.name);
    2e08:	2a0703e5 	mov	w5, w7
    2e0c:	cb040101 	sub	x1, x8, x4
    2e10:	14000006 	b	2e28 <__d_lookup_rcu+0x118>
		cs += sizeof(unsigned long);
    2e14:	91002084 	add	x4, x4, #0x8
		if (unlikely(a != b))
    2e18:	eb06001f 	cmp	x0, x6
    2e1c:	54fffcc1 	b.ne	2db4 <__d_lookup_rcu+0xa4>  // b.any
		if (!tcount)
    2e20:	710020a5 	subs	w5, w5, #0x8
    2e24:	54000160 	b.eq	2e50 <__d_lookup_rcu+0x140>  // b.none
		cs += sizeof(unsigned long);
    2e28:	8b010083 	add	x3, x4, x1
		if (tcount < sizeof(unsigned long))
    2e2c:	6b1b00bf 	cmp	w5, w27
static inline unsigned long load_unaligned_zeropad(const void *addr)
{
	unsigned long ret, offset;

	/* Load word from unaligned pointer addr */
	asm(
    2e30:	f9400066 	ldr	x6, [x3]

static __no_kasan_or_inline
unsigned long read_word_at_a_time(const void *addr)
{
	kasan_check_read(addr, 1);
	return *(unsigned long *)addr;
    2e34:	f9400080 	ldr	x0, [x4]
    2e38:	54fffee1 	b.ne	2e14 <__d_lookup_rcu+0x104>  // b.any
	mask = bytemask_from_count(tcount);
    2e3c:	531d7361 	lsl	w1, w27, #3
	return unlikely(!!((a ^ b) & mask));
    2e40:	ca060000 	eor	x0, x0, x6
	mask = bytemask_from_count(tcount);
    2e44:	9ac12141 	lsl	x1, x10, x1
			if (dentry_cmp(dentry, str, hashlen_len(hashlen)) != 0)
    2e48:	ea21001f 	bics	xzr, x0, x1
    2e4c:	54fffb41 	b.ne	2db4 <__d_lookup_rcu+0xa4>  // b.any
		*seqp = seq;
    2e50:	b9000396 	str	w22, [x28]
}
    2e54:	aa1803e0 	mov	x0, x24
    2e58:	a94153f3 	ldp	x19, x20, [sp, #16]
		return dentry;
    2e5c:	a9425bf5 	ldp	x21, x22, [sp, #32]
}
    2e60:	a94363f7 	ldp	x23, x24, [sp, #48]
    2e64:	a9446bf9 	ldp	x25, x26, [sp, #64]
		return dentry;
    2e68:	a94573fb 	ldp	x27, x28, [sp, #80]
}
    2e6c:	a8c77bfd 	ldp	x29, x30, [sp], #112
    2e70:	d65f03c0 	ret
		if (d_unhashed(dentry))
    2e74:	f9400660 	ldr	x0, [x19, #8]
    2e78:	121f7876 	and	w22, w3, #0xfffffffe
    2e7c:	b4fff9c0 	cbz	x0, 2db4 <__d_lookup_rcu+0xa4>
		if (unlikely(parent->d_flags & DCACHE_OP_COMPARE)) {
    2e80:	b94002a0 	ldr	w0, [x21]
    2e84:	360ffba0 	tbz	w0, #1, 2df8 <__d_lookup_rcu+0xe8>
			if (dentry->d_name.hash != hashlen_hash(hashlen))
    2e88:	b9402280 	ldr	w0, [x20, #32]
    2e8c:	6b00013f 	cmp	w9, w0
    2e90:	54fff921 	b.ne	2db4 <__d_lookup_rcu+0xa4>  // b.any
			tlen = dentry->d_name.len;
    2e94:	b9402681 	ldr	w1, [x20, #36]
			tname = dentry->d_name.name;
    2e98:	f9401682 	ldr	x2, [x20, #40]
	smp_rmb();
    2e9c:	d50339bf 	dmb	ishld
	return unlikely(s->sequence != start);
    2ea0:	b85fc260 	ldur	w0, [x19, #-4]
			if (read_seqcount_retry(&dentry->d_seq, seq)) {
    2ea4:	6b0002df 	cmp	w22, w0
    2ea8:	54000100 	b.eq	2ec8 <__d_lookup_rcu+0x1b8>  // b.none
    2eac:	d503203f 	yield
	__READ_ONCE_SIZE;
    2eb0:	b85fc263 	ldur	w3, [x19, #-4]
	smp_rmb();
    2eb4:	d50339bf 	dmb	ishld
		if (dentry->d_parent != parent)
    2eb8:	f9400e80 	ldr	x0, [x20, #24]
    2ebc:	eb15001f 	cmp	x0, x21
    2ec0:	54fff7a1 	b.ne	2db4 <__d_lookup_rcu+0xa4>  // b.any
    2ec4:	17ffffec 	b	2e74 <__d_lookup_rcu+0x164>
			if (parent->d_op->d_compare(dentry,
    2ec8:	f94032a4 	ldr	x4, [x21, #96]
    2ecc:	aa1a03e3 	mov	x3, x26
    2ed0:	aa1403e0 	mov	x0, x20
    2ed4:	290c1fe9 	stp	w9, w7, [sp, #96]
    2ed8:	f90037e8 	str	x8, [sp, #104]
    2edc:	f9400c84 	ldr	x4, [x4, #24]
    2ee0:	d63f0080 	blr	x4
    2ee4:	9280000a 	mov	x10, #0xffffffffffffffff    	// #-1
    2ee8:	294c1fe9 	ldp	w9, w7, [sp, #96]
    2eec:	f94037e8 	ldr	x8, [sp, #104]
    2ef0:	34fffb00 	cbz	w0, 2e50 <__d_lookup_rcu+0x140>
    2ef4:	17ffffb0 	b	2db4 <__d_lookup_rcu+0xa4>

0000000000002ef8 <__d_lookup>:
{
    2ef8:	a9b97bfd 	stp	x29, x30, [sp, #-112]!
	return dentry_hashtable + (hash >> d_hash_shift);
    2efc:	90000002 	adrp	x2, 0 <d_shrink_del>
    2f00:	91000044 	add	x4, x2, #0x0
{
    2f04:	910003fd 	mov	x29, sp
    2f08:	a90153f3 	stp	x19, x20, [sp, #16]
    2f0c:	a90363f7 	stp	x23, x24, [sp, #48]
    2f10:	a9046bf9 	stp	x25, x26, [sp, #64]
	return dentry_hashtable + (hash >> d_hash_shift);
    2f14:	f9400043 	ldr	x3, [x2]
	unsigned int hash = name->hash;
    2f18:	b9400037 	ldr	w23, [x1]
	return dentry_hashtable + (hash >> d_hash_shift);
    2f1c:	b9400882 	ldr	w2, [x4, #8]
    2f20:	1ac226e2 	lsr	w2, w23, w2
    2f24:	f8627873 	ldr	x19, [x3, x2, lsl #3]
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2f28:	f27ffa73 	ands	x19, x19, #0xfffffffffffffffe
    2f2c:	54000520 	b.eq	2fd0 <__d_lookup+0xd8>  // b.none
    2f30:	aa0003fa 	mov	x26, x0
    2f34:	a90573fb 	stp	x27, x28, [sp, #80]
    2f38:	aa0103fc 	mov	x28, x1
    2f3c:	d2800002 	mov	x2, #0x0                   	// #0
		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
    2f40:	52a02018 	mov	w24, #0x1000000             	// #16777216
	smp_store_release(&lock->locked, 0);
    2f44:	52800005 	mov	w5, #0x0                   	// #0
	mask = bytemask_from_count(tcount);
    2f48:	92800006 	mov	x6, #0xffffffffffffffff    	// #-1
    2f4c:	a9025bf5 	stp	x21, x22, [sp, #32]
    2f50:	d2800016 	mov	x22, #0x0                   	// #0
    2f54:	52800035 	mov	w21, #0x1                   	// #1
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2f58:	d1002274 	sub	x20, x19, #0x8
		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
    2f5c:	6b18027f 	cmp	w19, w24
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2f60:	aa1403f9 	mov	x25, x20
		if (unlikely((u32)(unsigned long)&dentry->d_hash == 0x01000000))
    2f64:	540000e1 	b.ne	2f80 <__d_lookup+0x88>  // b.any
			dump(last);
    2f68:	aa0203e0 	mov	x0, x2
    2f6c:	b9006fe5 	str	w5, [sp, #108]
    2f70:	94000000 	bl	0 <d_shrink_del>
    2f74:	b9406fe5 	ldr	w5, [sp, #108]
    2f78:	92800006 	mov	x6, #0xffffffffffffffff    	// #-1
    2f7c:	d503201f 	nop
		if (dentry->d_name.hash != hash)
    2f80:	b9402282 	ldr	w2, [x20, #32]
    2f84:	6b17005f 	cmp	w2, w23
    2f88:	540001a1 	b.ne	2fbc <__d_lookup+0xc4>  // b.any
    2f8c:	9101427b 	add	x27, x19, #0x50
    2f90:	f9800371 	prfm	pstl1strm, [x27]
    2f94:	885fff61 	ldaxr	w1, [x27]
    2f98:	4a160020 	eor	w0, w1, w22
    2f9c:	35000060 	cbnz	w0, 2fa8 <__d_lookup+0xb0>
    2fa0:	88007f75 	stxr	w0, w21, [x27]
    2fa4:	35ffff80 	cbnz	w0, 2f94 <__d_lookup+0x9c>
    2fa8:	35000521 	cbnz	w1, 304c <__d_lookup+0x154>
		if (dentry->d_parent != parent)
    2fac:	f9400e80 	ldr	x0, [x20, #24]
    2fb0:	eb1a001f 	cmp	x0, x26
    2fb4:	540001c0 	b.eq	2fec <__d_lookup+0xf4>  // b.none
    2fb8:	089fff65 	stlrb	w5, [x27]
    2fbc:	f9400273 	ldr	x19, [x19]
	hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) {
    2fc0:	aa1403e2 	mov	x2, x20
    2fc4:	b5fffcb3 	cbnz	x19, 2f58 <__d_lookup+0x60>
    2fc8:	a9425bf5 	ldp	x21, x22, [sp, #32]
    2fcc:	a94573fb 	ldp	x27, x28, [sp, #80]
	struct dentry *found = NULL;
    2fd0:	d2800019 	mov	x25, #0x0                   	// #0
}
    2fd4:	aa1903e0 	mov	x0, x25
    2fd8:	a94153f3 	ldp	x19, x20, [sp, #16]
    2fdc:	a94363f7 	ldp	x23, x24, [sp, #48]
    2fe0:	a9446bf9 	ldp	x25, x26, [sp, #64]
    2fe4:	a8c77bfd 	ldp	x29, x30, [sp], #112
    2fe8:	d65f03c0 	ret
		if (d_unhashed(dentry))
    2fec:	f9400660 	ldr	x0, [x19, #8]
    2ff0:	b4fffe40 	cbz	x0, 2fb8 <__d_lookup+0xc0>
	if (likely(!(parent->d_flags & DCACHE_OP_COMPARE))) {
    2ff4:	b9400340 	ldr	w0, [x26]
    2ff8:	b9402681 	ldr	w1, [x20, #36]
    2ffc:	37080340 	tbnz	w0, #1, 3064 <__d_lookup+0x16c>
		if (dentry->d_name.len != name->len)
    3000:	b9400783 	ldr	w3, [x28, #4]
    3004:	6b01007f 	cmp	w3, w1
    3008:	54fffd81 	b.ne	2fb8 <__d_lookup+0xc0>  // b.any
		return dentry_cmp(dentry, name->name, name->len) == 0;
    300c:	f9400787 	ldr	x7, [x28, #8]
static inline int dentry_string_cmp(const unsigned char *cs, const unsigned 
char *ct, unsigned tcount)
    3010:	12000868 	and	w8, w3, #0x7
    3014:	f9401264 	ldr	x4, [x19, #32]
    3018:	cb0400e7 	sub	x7, x7, x4
    301c:	14000003 	b	3028 <__d_lookup+0x130>
		if (!tcount)
    3020:	71002063 	subs	w3, w3, #0x8
    3024:	54000380 	b.eq	3094 <__d_lookup+0x19c>  // b.none
		cs += sizeof(unsigned long);
    3028:	8b070082 	add	x2, x4, x7
		if (tcount < sizeof(unsigned long))
    302c:	6b08007f 	cmp	w3, w8
    3030:	f9400041 	ldr	x1, [x2]
	return *(unsigned long *)addr;
    3034:	f9400080 	ldr	x0, [x4]
    3038:	540003e0 	b.eq	30b4 <__d_lookup+0x1bc>  // b.none
		cs += sizeof(unsigned long);
    303c:	91002084 	add	x4, x4, #0x8
		if (unlikely(a != b))
    3040:	eb01001f 	cmp	x0, x1
    3044:	54fffee0 	b.eq	3020 <__d_lookup+0x128>  // b.none
    3048:	17ffffdc 	b	2fb8 <__d_lookup+0xc0>
	queued_spin_lock_slowpath(lock, val);
    304c:	aa1b03e0 	mov	x0, x27
    3050:	b9006fe5 	str	w5, [sp, #108]
    3054:	94000000 	bl	0 <queued_spin_lock_slowpath>
    3058:	b9406fe5 	ldr	w5, [sp, #108]
    305c:	92800006 	mov	x6, #0xffffffffffffffff    	// #-1
    3060:	17ffffd3 	b	2fac <__d_lookup+0xb4>
	return parent->d_op->d_compare(dentry,
    3064:	f9403340 	ldr	x0, [x26, #96]
    3068:	aa1c03e3 	mov	x3, x28
    306c:	f9401682 	ldr	x2, [x20, #40]
    3070:	b9006fe5 	str	w5, [sp, #108]
    3074:	f9400c04 	ldr	x4, [x0, #24]
    3078:	aa1403e0 	mov	x0, x20
    307c:	d63f0080 	blr	x4
				       name) == 0;
    3080:	7100001f 	cmp	w0, #0x0
    3084:	1a9f17e0 	cset	w0, eq  // eq = none
    3088:	92800006 	mov	x6, #0xffffffffffffffff    	// #-1
    308c:	b9406fe5 	ldr	w5, [sp, #108]
		if (!d_same_name(dentry, parent, name))
    3090:	34fff940 	cbz	w0, 2fb8 <__d_lookup+0xc0>
		dentry->d_lockref.count++;
    3094:	b9405e80 	ldr	w0, [x20, #92]
	smp_store_release(&lock->locked, 0);
    3098:	52800001 	mov	w1, #0x0                   	// #0
    309c:	11000400 	add	w0, w0, #0x1
    30a0:	b9005e80 	str	w0, [x20, #92]
    30a4:	089fff61 	stlrb	w1, [x27]
	preempt_enable();
    30a8:	a9425bf5 	ldp	x21, x22, [sp, #32]
    30ac:	a94573fb 	ldp	x27, x28, [sp, #80]
    30b0:	17ffffc9 	b	2fd4 <__d_lookup+0xdc>
	mask = bytemask_from_count(tcount);
    30b4:	531d7062 	lsl	w2, w3, #3
	return unlikely(!!((a ^ b) & mask));
    30b8:	ca010000 	eor	x0, x0, x1
	mask = bytemask_from_count(tcount);
    30bc:	9ac220c2 	lsl	x2, x6, x2
    30c0:	ea22001f 	bics	xzr, x0, x2
    30c4:	1a9f17e0 	cset	w0, eq  // eq = none
		if (!d_same_name(dentry, parent, name))
    30c8:	34fff780 	cbz	w0, 2fb8 <__d_lookup+0xc0>
    30cc:	17fffff2 	b	3094 <__d_lookup+0x19c>

00000000000030d0 <d_lookup>:
{
    30d0:	a9bd7bfd 	stp	x29, x30, [sp, #-48]!
    30d4:	910003fd 	mov	x29, sp
    30d8:	a90153f3 	stp	x19, x20, [sp, #16]
    30dc:	90000014 	adrp	x20, 0 <d_shrink_del>
    30e0:	91000294 	add	x20, x20, #0x0
    30e4:	a9025bf5 	stp	x21, x22, [sp, #32]
    30e8:	aa0003f6 	mov	x22, x0
    30ec:	aa0103f5 	mov	x21, x1
    30f0:	1400000a 	b	3118 <d_lookup+0x48>
	smp_rmb();
    30f4:	d50339bf 	dmb	ishld
		dentry = __d_lookup(parent, name);
    30f8:	aa1503e1 	mov	x1, x21
    30fc:	aa1603e0 	mov	x0, x22
    3100:	94000000 	bl	2ef8 <__d_lookup>
		if (dentry)
    3104:	b5000120 	cbnz	x0, 3128 <d_lookup+0x58>
	smp_rmb();
    3108:	d50339bf 	dmb	ishld
	} while (read_seqretry(&rename_lock, seq));
    310c:	b9400281 	ldr	w1, [x20]
    3110:	6b13003f 	cmp	w1, w19
    3114:	540000a0 	b.eq	3128 <d_lookup+0x58>  // b.none
	__READ_ONCE_SIZE;
    3118:	b9400293 	ldr	w19, [x20]
	if (unlikely(ret & 1)) {
    311c:	3607fed3 	tbz	w19, #0, 30f4 <d_lookup+0x24>
    3120:	d503203f 	yield
    3124:	17fffffd 	b	3118 <d_lookup+0x48>
}
    3128:	a94153f3 	ldp	x19, x20, [sp, #16]
    312c:	a9425bf5 	ldp	x21, x22, [sp, #32]
    3130:	a8c37bfd 	ldp	x29, x30, [sp], #48
    3134:	d65f03c0 	ret

0000000000003138 <d_hash_and_lookup>:
{
    3138:	a9be7bfd 	stp	x29, x30, [sp, #-32]!
    313c:	910003fd 	mov	x29, sp
    3140:	a90153f3 	stp	x19, x20, [sp, #16]
    3144:	aa0103f3 	mov	x19, x1
    3148:	aa0003f4 	mov	x20, x0
	name->hash = full_name_hash(dir, name->name, name->len);
    314c:	b9400422 	ldr	w2, [x1, #4]
    3150:	f9400421 	ldr	x1, [x1, #8]
    3154:	94000000 	bl	0 <full_name_hash>
    3158:	b9000260 	str	w0, [x19]
	if (dir->d_flags & DCACHE_OP_HASH) {
    315c:	b9400280 	ldr	w0, [x20]
    3160:	360000e0 	tbz	w0, #0, 317c <d_hash_and_lookup+0x44>
		int err = dir->d_op->d_hash(dir, name);
    3164:	f9403282 	ldr	x2, [x20, #96]
    3168:	aa1303e1 	mov	x1, x19
    316c:	aa1403e0 	mov	x0, x20
    3170:	f9400842 	ldr	x2, [x2, #16]
    3174:	d63f0040 	blr	x2
		if (unlikely(err < 0))
    3178:	37f800e0 	tbnz	w0, #31, 3194 <d_hash_and_lookup+0x5c>
	return d_lookup(dir, name);
    317c:	aa1303e1 	mov	x1, x19
    3180:	aa1403e0 	mov	x0, x20
    3184:	94000000 	bl	30d0 <d_lookup>
}
    3188:	a94153f3 	ldp	x19, x20, [sp, #16]
    318c:	a8c27bfd 	ldp	x29, x30, [sp], #32
    3190:	d65f03c0 	ret
			return ERR_PTR(err);
    3194:	93407c00 	sxtw	x0, w0
    3198:	17fffffc 	b	3188 <d_hash_and_lookup+0x50>
    319c:	d503201f 	nop

Disassembly of section .text.unlikely:

0000000000000000 <dump>:
{
   0:	a9bc7bfd 	stp	x29, x30, [sp, #-64]!
   4:	910003fd 	mov	x29, sp
   8:	a90153f3 	stp	x19, x20, [sp, #16]
   c:	a9025bf5 	stp	x21, x22, [sp, #32]
  10:	f9001bf7 	str	x23, [sp, #48]
	if (!dentry) {
  14:	b50000a0 	cbnz	x0, 28 <dump+0x28>
		printk(KERN_ERR "list fucked in head");
  18:	90000000 	adrp	x0, 0 <dump>
  1c:	91000000 	add	x0, x0, #0x0
  20:	94000000 	bl	0 <printk>
		return;
  24:	14000016 	b	7c <dump+0x7c>
	printk(KERN_ERR "fucked dentry[%p]: d_hash.next = %p, flags = %x, count = 
%d",
  28:	aa0003f3 	mov	x19, x0
		printk(KERN_CONT "%c%02x", i & 31 ? ' ' : '\n',
  2c:	90000015 	adrp	x21, 0 <dump>
	printk(KERN_ERR "fucked dentry[%p]: d_hash.next = %p, flags = %x, count = 
%d",
  30:	90000000 	adrp	x0, 0 <dump>
  34:	aa1303e1 	mov	x1, x19
  38:	91000000 	add	x0, x0, #0x0
  3c:	d2800014 	mov	x20, #0x0                   	// #0
  40:	b9400263 	ldr	w3, [x19]
		printk(KERN_CONT "%c%02x", i & 31 ? ' ' : '\n',
  44:	52800157 	mov	w23, #0xa                   	// #10
	printk(KERN_ERR "fucked dentry[%p]: d_hash.next = %p, flags = %x, count = 
%d",
  48:	b9405e64 	ldr	w4, [x19, #92]
		printk(KERN_CONT "%c%02x", i & 31 ? ' ' : '\n',
  4c:	52800416 	mov	w22, #0x20                  	// #32
	printk(KERN_ERR "fucked dentry[%p]: d_hash.next = %p, flags = %x, count = 
%d",
  50:	f9400662 	ldr	x2, [x19, #8]
		printk(KERN_CONT "%c%02x", i & 31 ? ' ' : '\n',
  54:	910002b5 	add	x21, x21, #0x0
	printk(KERN_ERR "fucked dentry[%p]: d_hash.next = %p, flags = %x, count = 
%d",
  58:	94000000 	bl	0 <printk>
		printk(KERN_CONT "%c%02x", i & 31 ? ' ' : '\n',
  5c:	38746a62 	ldrb	w2, [x19, x20]
  60:	f240129f 	tst	x20, #0x1f
  64:	1a9602e1 	csel	w1, w23, w22, eq  // eq = none
  68:	91000694 	add	x20, x20, #0x1
  6c:	aa1503e0 	mov	x0, x21
  70:	94000000 	bl	0 <printk>
	for (i = 0; i < sizeof(struct dentry); i++)
  74:	f103029f 	cmp	x20, #0xc0
  78:	54ffff21 	b.ne	5c <dump+0x5c>  // b.any
}
  7c:	a94153f3 	ldp	x19, x20, [sp, #16]
  80:	a9425bf5 	ldp	x21, x22, [sp, #32]
  84:	f9401bf7 	ldr	x23, [sp, #48]
  88:	a8c47bfd 	ldp	x29, x30, [sp], #64
  8c:	d65f03c0 	ret


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-19 12:42           ` Vicente Bergas
@ 2019-06-19 16:28             ` Al Viro
  2019-06-19 16:51               ` Vicente Bergas
  2019-06-19 17:04               ` Will Deacon
  0 siblings, 2 replies; 19+ messages in thread
From: Al Viro @ 2019-06-19 16:28 UTC (permalink / raw)
  To: Vicente Bergas; +Cc: linux-fsdevel, linux-kernel, Catalin Marinas, Will Deacon

[arm64 maintainers Cc'd; I'm not adding a Cc to moderated list,
sorry]

On Wed, Jun 19, 2019 at 02:42:16PM +0200, Vicente Bergas wrote:

> Hi Al,
> i have been running the distro-provided kernel the last few weeks
> and had no issues at all.
> https://archlinuxarm.org/packages/aarch64/linux-aarch64
> It is from the v5.1 branch and is compiled with gcc 8.3.
> 
> IIRC, i also tested
> https://archlinuxarm.org/packages/aarch64/linux-aarch64-rc
> v5.2-rc1 and v5.2-rc2 (which at that time were compiled with
> gcc 8.2) with no issues.
> 
> This week tested v5.2-rc4 and v5.2-rc5 from archlinuxarm but
> there are regressions unrelated to d_lookup.
> 
> At this point i was convinced it was a gcc 9.1 issue and had
> nothing to do with the kernel, but anyways i gave your patch a try.
> The tested kernel is v5.2-rc5-224-gbed3c0d84e7e and
> it has been compiled with gcc 8.3.
> The sentinel you put there has triggered!
> So, it is not a gcc 9.1 issue.
> 
> In any case, i have no idea if those addresses are arm64-specific
> in any way.

Cute...  So *all* of those are in dentry_hashtable itself.  IOW, we have
these two values (1<<24 and (1<<24)|(0x88L<<40)) cropping up in
dentry_hashtable[...].first on that config.

That, at least, removes the possibility of corrupted forward pointer in
the middle of a chain, with several pointers traversed before we run
into something unmapped - the crap is in the very beginning.

I don't get it.  The only things modifying these pointers should be:

static void ___d_drop(struct dentry *dentry)
{
        struct hlist_bl_head *b;
        /*
         * Hashed dentries are normally on the dentry hashtable,
         * with the exception of those newly allocated by
         * d_obtain_root, which are always IS_ROOT:
         */
        if (unlikely(IS_ROOT(dentry)))
                b = &dentry->d_sb->s_roots;
        else   
                b = d_hash(dentry->d_name.hash);

        hlist_bl_lock(b);
        __hlist_bl_del(&dentry->d_hash);
        hlist_bl_unlock(b);
}

and

static void __d_rehash(struct dentry *entry)
{
        struct hlist_bl_head *b = d_hash(entry->d_name.hash);

        hlist_bl_lock(b);
        hlist_bl_add_head_rcu(&entry->d_hash, b);
        hlist_bl_unlock(b);
}

The latter sets that pointer to ((unsigned long)&entry->d_hash | LIST_BL_LOCKMASK),
having dereferenced entry->d_hash prior to that.  It can't be the source of those
values, or we would've oopsed right there.

The former...  __hlist_bl_del() does
        /* pprev may be `first`, so be careful not to lose the lock bit */
        WRITE_ONCE(*pprev,
                   (struct hlist_bl_node *)
                        ((unsigned long)next |
                         ((unsigned long)*pprev & LIST_BL_LOCKMASK)));
        if (next)
                next->pprev = pprev;
so to end up with that garbage in the list head, next would have had to be
the same bogus pointer (modulo bit 0, possibly).  And since it's non-NULL,
we would've immediately oopsed on trying to set next->pprev.

There shouldn't be any pointers to hashtable elements other than ->d_hash.pprev
of various dentries.  And ->d_hash is not a part of anon unions in struct
dentry, so it can't be mistaken access through the aliasing member.

Of course, there's always a possibility of something stomping on random places
in memory and shitting those values all over, with the hashtable being the
hottest place on the loads where it happens...  Hell knows...
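
For reference, a minimal standalone sketch of the list_bl encoding being
relied upon here: bit 0 of the bucket head doubles as the lock
(LIST_BL_LOCKMASK), which is also why the lookup loops mask the head with
~1 before chasing it.  The structs and helpers below only loosely mirror
include/linux/list_bl.h and fs/dcache.c for illustration; this is a
userspace toy, not the kernel code.

/*
 * Userspace sketch, assuming LIST_BL_LOCKMASK == 1 as on this config:
 * bit 0 of the bucket head doubles as the lock bit, readers mask it off
 * before dereferencing, and the delete path is careful to preserve it.
 */
#include <stdint.h>
#include <stdio.h>

#define LIST_BL_LOCKMASK 1UL

struct hlist_bl_node { struct hlist_bl_node *next, **pprev; };
struct hlist_bl_head { struct hlist_bl_node *first; };

/* roughly what __d_rehash() does via hlist_bl_add_head_rcu(), under the lock */
static void add_head_locked(struct hlist_bl_head *b, struct hlist_bl_node *n)
{
	struct hlist_bl_node *first =
		(struct hlist_bl_node *)((uintptr_t)b->first & ~LIST_BL_LOCKMASK);

	n->next = first;
	n->pprev = &b->first;
	if (first)
		first->pprev = &n->next;
	/* publish the new first entry, keeping the (held) lock bit set */
	b->first = (struct hlist_bl_node *)((uintptr_t)n | LIST_BL_LOCKMASK);
}

/* roughly what ___d_drop() does via __hlist_bl_del(), under the lock */
static void del_locked(struct hlist_bl_node *n)
{
	struct hlist_bl_node *next = n->next, **pprev = n->pprev;

	/* pprev may point at the bucket head, so don't lose the lock bit */
	*pprev = (struct hlist_bl_node *)
		((uintptr_t)next | ((uintptr_t)*pprev & LIST_BL_LOCKMASK));
	if (next)
		next->pprev = pprev;
}

int main(void)
{
	struct hlist_bl_head b;
	struct hlist_bl_node n1, n2;

	b.first = (struct hlist_bl_node *)LIST_BL_LOCKMASK; /* locked, empty bucket */
	add_head_locked(&b, &n1);
	add_head_locked(&b, &n2);
	del_locked(&n2);
	/* head is now &n1 with bit 0 still set */
	printf("head = %p, n1 = %p\n", (void *)b.first, (void *)&n1);
	return 0;
}

The point being that both paths only ever store either NULL or the address
of a live hlist_bl_node in ->first (with bit 0 preserved), so neither can
plant a small integer like 0x01000000 there on its own.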

What's your config, BTW?  SMP and DEBUG_SPINLOCK, specifically...

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-19 16:28             ` Al Viro
@ 2019-06-19 16:51               ` Vicente Bergas
  2019-06-19 17:06                 ` Will Deacon
  2019-06-19 17:09                 ` Al Viro
  2019-06-19 17:04               ` Will Deacon
  1 sibling, 2 replies; 19+ messages in thread
From: Vicente Bergas @ 2019-06-19 16:51 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-fsdevel, linux-kernel, Catalin Marinas, Will Deacon

On Wednesday, June 19, 2019 6:28:02 PM CEST, Al Viro wrote:
> [arm64 maintainers Cc'd; I'm not adding a Cc to moderated list,
> sorry]
>
> On Wed, Jun 19, 2019 at 02:42:16PM +0200, Vicente Bergas wrote:
>
>> Hi Al,
>> i have been running the distro-provided kernel the last few weeks
>> and had no issues at all.
>> https://archlinuxarm.org/packages/aarch64/linux-aarch64
>> It is from the v5.1 branch and is compiled with gcc 8.3.
>> 
>> IIRC, i also tested
>> https://archlinuxarm.org/packages/aarch64/linux-aarch64-rc
>> v5.2-rc1 and v5.2-rc2 (which at that time where compiled with
>> gcc 8.2) with no issues.
>> 
>> This week tested v5.2-rc4 and v5.2-rc5 from archlinuxarm but
>> there are regressions unrelated to d_lookup.
>> 
>> At this point i was convinced it was a gcc 9.1 issue and had
>> nothing to do with the kernel, but anyways i gave your patch a try.
>> The tested kernel is v5.2-rc5-224-gbed3c0d84e7e and
>> it has been compiled with gcc 8.3.
>> The sentinel you put there has triggered!
>> So, it is not a gcc 9.1 issue.
>> 
>> In any case, i have no idea if those addresses are arm64-specific
>> in any way.
>
> Cute...  So *all* of those are in dentry_hashtable itself.  IOW, we have
> these two values (1<<24 and (1<<24)|(0x88L<<40)) cropping up in
> dentry_hashtable[...].first on that config.
>
> That, at least, removes the possibility of corrupted forward pointer in
> the middle of a chain, with several pointers traversed before we run
> into something unmapped - the crap is in the very beginning.
>
> I don't get it.  The only things modifying these pointers should be:
>
> static void ___d_drop(struct dentry *dentry)
> {
>         struct hlist_bl_head *b;
>         /*
>          * Hashed dentries are normally on the dentry hashtable,
>          * with the exception of those newly allocated by
>          * d_obtain_root, which are always IS_ROOT:
>          */
>         if (unlikely(IS_ROOT(dentry)))
>                 b = &dentry->d_sb->s_roots;
>         else   
>                 b = d_hash(dentry->d_name.hash);
>
>         hlist_bl_lock(b);
>         __hlist_bl_del(&dentry->d_hash);
>         hlist_bl_unlock(b);
> }
>
> and
>
> static void __d_rehash(struct dentry *entry)
> {
>         struct hlist_bl_head *b = d_hash(entry->d_name.hash);
>
>         hlist_bl_lock(b);
>         hlist_bl_add_head_rcu(&entry->d_hash, b);
>         hlist_bl_unlock(b);
> }
>
> The latter sets that pointer to ((unsigned long)&entry->d_hash | 
> LIST_BL_LOCKMASK),
> having dereferenced entry->d_hash prior to that.  It can't be 
> the source of those
> values, or we would've oopsed right there.
>
> The former...  __hlist_bl_del() does
>         /* pprev may be `first`, so be careful not to lose the lock bit */
>         WRITE_ONCE(*pprev,
>                    (struct hlist_bl_node *)
>                         ((unsigned long)next |
>                          ((unsigned long)*pprev & LIST_BL_LOCKMASK)));
>         if (next)
>                 next->pprev = pprev;
> so to end up with that garbage in the list head, next would have had to be
> the same bogus pointer (modulo bit 0, possibly).  And since it's non-NULL,
> we would've immediately oopsed on trying to set next->pprev.
>
> There shouldn't be any pointers to hashtable elements other 
> than ->d_hash.pprev
> of various dentries.  And ->d_hash is not a part of anon unions in struct
> dentry, so it can't be mistaken access through the aliasing member.
>
> Of course, there's always a possibility of something stomping 
> on random places
> in memory and shitting those values all over, with the hashtable being the
> hottest place on the loads where it happens...  Hell knows...
>
> What's your config, BTW?  SMP and DEBUG_SPINLOCK, specifically...

Hi Al,
here it is:
https://paste.debian.net/1088517

Regards,
  Vicenç.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-19 16:28             ` Al Viro
  2019-06-19 16:51               ` Vicente Bergas
@ 2019-06-19 17:04               ` Will Deacon
  1 sibling, 0 replies; 19+ messages in thread
From: Will Deacon @ 2019-06-19 17:04 UTC (permalink / raw)
  To: Al Viro; +Cc: Vicente Bergas, linux-fsdevel, linux-kernel, Catalin Marinas

Hi all,

On Wed, Jun 19, 2019 at 05:28:02PM +0100, Al Viro wrote:
> [arm64 maintainers Cc'd; I'm not adding a Cc to moderated list,
> sorry]

Thanks for adding us.

> On Wed, Jun 19, 2019 at 02:42:16PM +0200, Vicente Bergas wrote:
> 
> > Hi Al,
> > i have been running the distro-provided kernel the last few weeks
> > and had no issues at all.
> > https://archlinuxarm.org/packages/aarch64/linux-aarch64
> > It is from the v5.1 branch and is compiled with gcc 8.3.
> > 
> > IIRC, i also tested
> > https://archlinuxarm.org/packages/aarch64/linux-aarch64-rc
> > v5.2-rc1 and v5.2-rc2 (which at that time were compiled with
> > gcc 8.2) with no issues.
> > 
> > This week tested v5.2-rc4 and v5.2-rc5 from archlinuxarm but
> > there are regressions unrelated to d_lookup.
> > 
> > At this point i was convinced it was a gcc 9.1 issue and had
> > nothing to do with the kernel, but anyways i gave your patch a try.
> > The tested kernel is v5.2-rc5-224-gbed3c0d84e7e and
> > it has been compiled with gcc 8.3.
> > The sentinel you put there has triggered!
> > So, it is not a gcc 9.1 issue.
> > 
> > In any case, i have no idea if those addresses are arm64-specific
> > in any way.
> 
> Cute...  So *all* of those are in dentry_hashtable itself.  IOW, we have
> these two values (1<<24 and (1<<24)|(0x88L<<40)) cropping up in
> dentry_hashtable[...].first on that config.

Unfortunately, those values don't jump out at me as something particularly
meaningful on arm64. Bloody weird though.

> There shouldn't be any pointers to hashtable elements other than ->d_hash.pprev
> of various dentries.  And ->d_hash is not a part of anon unions in struct
> dentry, so it can't be mistaken access through the aliasing member.
> 
> Of course, there's always a possibility of something stomping on random places
> in memory and shitting those values all over, with the hashtable being the
> hottest place on the loads where it happens...  Hell knows...
> 
> What's your config, BTW?  SMP and DEBUG_SPINLOCK, specifically...

I'd also be interested in seeing the .config (the pastebin link earlier in
the thread appears to have expired). Two areas where we've had issues
recently are (1) module relocations and (2) CONFIG_OPTIMIZE_INLINING.
However, this is the first report I've seen of the sort of crash you're
reporting.

Will

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-19 16:51               ` Vicente Bergas
@ 2019-06-19 17:06                 ` Will Deacon
  2019-06-19 17:09                 ` Al Viro
  1 sibling, 0 replies; 19+ messages in thread
From: Will Deacon @ 2019-06-19 17:06 UTC (permalink / raw)
  To: Vicente Bergas; +Cc: Al Viro, linux-fsdevel, linux-kernel, Catalin Marinas

On Wed, Jun 19, 2019 at 06:51:51PM +0200, Vicente Bergas wrote:
> here it is:
> https://paste.debian.net/1088517

No modules and OPTIMIZE_INLINING=n, so this isn't either of my first
thoughts. Hmm. I guess I should try to reproduce the issue locally.

Will

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-19 16:51               ` Vicente Bergas
  2019-06-19 17:06                 ` Will Deacon
@ 2019-06-19 17:09                 ` Al Viro
  2019-06-22 18:02                   ` Vicente Bergas
  1 sibling, 1 reply; 19+ messages in thread
From: Al Viro @ 2019-06-19 17:09 UTC (permalink / raw)
  To: Vicente Bergas; +Cc: linux-fsdevel, linux-kernel, Catalin Marinas, Will Deacon

On Wed, Jun 19, 2019 at 06:51:51PM +0200, Vicente Bergas wrote:

> > What's your config, BTW?  SMP and DEBUG_SPINLOCK, specifically...
> 
> Hi Al,
> here it is:
> https://paste.debian.net/1088517

Aha...  So LIST_BL_LOCKMASK is 1 there (same as on distro builds)...

Hell knows - how about
static inline void hlist_bl_lock(struct hlist_bl_head *b)
{
	BUG_ON(((u32)READ_ONCE(*b)&~LIST_BL_LOCKMASK) == 0x01000000);
        bit_spin_lock(0, (unsigned long *)b);
}

and

static inline void hlist_bl_unlock(struct hlist_bl_head *b)
{
        __bit_spin_unlock(0, (unsigned long *)b);
	BUG_ON(((u32)READ_ONCE(*b)&~LIST_BL_LOCKMASK) == 0x01000000);
}

to see if we can narrow down where that happens?

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-19 17:09                 ` Al Viro
@ 2019-06-22 18:02                   ` Vicente Bergas
  2019-06-24 11:47                     ` Will Deacon
  0 siblings, 1 reply; 19+ messages in thread
From: Vicente Bergas @ 2019-06-22 18:02 UTC (permalink / raw)
  To: Al Viro; +Cc: linux-fsdevel, linux-kernel, Catalin Marinas, Will Deacon

Hi Al,
i think i have a hint of what is going on.
With the last kernel built with your sentinels at hlist_bl_*lock
it is very easy to reproduce the issue.
In fact it is so unstable that i had to connect a serial port
in order to save the kernel trace.
Unfortunately all the traces are at different addresses and
your sentinel did not trigger.

Now I am writing this email from that same buggy kernel, which is
v5.2-rc5-224-gbed3c0d84e7e.

The difference is that I changed the bootloader.
Before, it was booting 5.1.12 and then kexec'ing into this one.
Now it boots directly from U-Boot into this one.
I will continue booting with U-Boot for some time to be sure it is
stable and to confirm this is the cause.

In case it is, which is the most probable offender:
the kernel before the kexec or the kernel after it?

The original report was sent to you because you appeared as the maintainer
of fs/dcache.c, which appeared in the trace. Should this be redirected
somewhere else now?

Regards,
  Vicenç.

On Wednesday, June 19, 2019 7:09:40 PM CEST, Al Viro wrote:
> On Wed, Jun 19, 2019 at 06:51:51PM +0200, Vicente Bergas wrote:
>
>>> What's your config, BTW?  SMP and DEBUG_SPINLOCK, specifically...
>> 
>> Hi Al,
>> here it is:
>> https://paste.debian.net/1088517
>
> Aha...  So LIST_BL_LOCKMASK is 1 there (same as on distro builds)...
>
> Hell knows - how about
> static inline void hlist_bl_lock(struct hlist_bl_head *b)
> {
> 	BUG_ON(((u32)READ_ONCE(*b)&~LIST_BL_LOCKMASK) == 0x01000000);
>         bit_spin_lock(0, (unsigned long *)b);
> }
>
> and
>
> static inline void hlist_bl_unlock(struct hlist_bl_head *b)
> {
>         __bit_spin_unlock(0, (unsigned long *)b);
> 	BUG_ON(((u32)READ_ONCE(*b)&~LIST_BL_LOCKMASK) == 0x01000000);
> }
>
> to see if we can narrow down where that happens?


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-22 18:02                   ` Vicente Bergas
@ 2019-06-24 11:47                     ` Will Deacon
  2019-06-25  9:46                       ` Will Deacon
  0 siblings, 1 reply; 19+ messages in thread
From: Will Deacon @ 2019-06-24 11:47 UTC (permalink / raw)
  To: Vicente Bergas
  Cc: Al Viro, linux-fsdevel, linux-kernel, Catalin Marinas, Will Deacon

On Sat, Jun 22, 2019 at 08:02:19PM +0200, Vicente Bergas wrote:
> Hi Al,
> I think I have a hint of what is going on.
> With the last kernel built with your sentinels at hlist_bl_*lock
> it is very easy to reproduce the issue.
> In fact it is so unstable that I had to connect a serial port
> in order to save the kernel trace.
> Unfortunately all the traces are at different addresses and
> your sentinel did not trigger.
> 
> Now I am writing this email from that same buggy kernel, which is
> v5.2-rc5-224-gbed3c0d84e7e.
> 
> The difference is that I changed the bootloader.
> Before, it was booting 5.1.12 and then kexec'ing into this one.
> Now it boots directly from U-Boot into this one.
> I will continue booting with U-Boot for some time to be sure it is
> stable and to confirm this is the cause.
> 
> In case it is, which is the most probable offender:
> the kernel before the kexec or the kernel after it?

Has kexec ever worked reliably on this board? If you used to kexec
successfully, then we can try to hunt down the regression using memtest.
If you kexec into a problematic kernel with CONFIG_MEMTEST=y and pass
"memtest=17" on the command-line, it will hopefully reveal any active
memory corruption.
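
Concretely, that amounts to something like the following (illustrative only;
the image path, initrd and the rest of the command line are whatever you
already use when kexec'ing):

  # in the .config of the kernel being kexec'd into
  CONFIG_MEMTEST=y

  # load and boot it with the extra parameter appended, e.g.
  kexec -l /boot/Image --append="root=/dev/sda2 rootwait earlycon memtest=17"
  kexec -e

early_memtest() then writes and re-reads a series of patterns over free memory
very early in boot and logs any mismatches it finds.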

My first thought is that there is ongoing DMA which corrupts the dentry
hash. The rk3399 SoC also has an IOMMU, which could contribute to the fun
if it's not shut down correctly (i.e. if it enters bypass mode).

> The original report was sent to you because you appeared as the maintainer
> of fs/dcache.c, which appeared in the trace. Should this be redirected
> somewhere else now?

linux-arm-kernel@lists.infradead.org

Probably worth adding Heiko Stuebner <heiko@sntech.de> to cc.

Will

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-24 11:47                     ` Will Deacon
@ 2019-06-25  9:46                       ` Will Deacon
  2019-06-25 10:48                         ` Vicente Bergas
  0 siblings, 1 reply; 19+ messages in thread
From: Will Deacon @ 2019-06-25  9:46 UTC (permalink / raw)
  To: Will Deacon
  Cc: Vicente Bergas, Al Viro, linux-fsdevel, linux-kernel,
	Catalin Marinas, marc.zyngier

[+Marc]

Hi again, Vicente,

On Mon, Jun 24, 2019 at 12:47:41PM +0100, Will Deacon wrote:
> On Sat, Jun 22, 2019 at 08:02:19PM +0200, Vicente Bergas wrote:
> > Hi Al,
> > I think I have a hint of what is going on.
> > With the last kernel built with your sentinels at hlist_bl_*lock
> > it is very easy to reproduce the issue.
> > In fact it is so unstable that I had to connect a serial port
> > in order to save the kernel trace.
> > Unfortunately all the traces are at different addresses and
> > your sentinel did not trigger.
> > 
> > Now I am writing this email from that same buggy kernel, which is
> > v5.2-rc5-224-gbed3c0d84e7e.
> > 
> > The difference is that I changed the bootloader.
> > Before, it was booting 5.1.12 and then kexec'ing into this one.
> > Now it boots directly from U-Boot into this one.
> > I will continue booting with U-Boot for some time to be sure it is
> > stable and to confirm this is the cause.
> > 
> > In case it is, which is the most probable offender:
> > the kernel before the kexec or the kernel after it?
> 
> Has kexec ever worked reliably on this board? If you used to kexec
> successfully, then we can try to hunt down the regression using memtest.
> If you kexec into a problematic kernel with CONFIG_MEMTEST=y and pass
> "memtest=17" on the command-line, it will hopefully reveal any active
> memory corruption.
> 
> My first thought is that there is ongoing DMA which corrupts the dentry
> hash. The rk3399 SoC also has an IOMMU, which could contribute to the fun
> if it's not shut down correctly (i.e. if it enters bypass mode).
> 
> > The original report was sent to you because you appeared as the maintainer
> > of fs/dcache.c, which appeared in the trace. Should this be redirected
> > somewhere else now?
> 
> linux-arm-kernel@lists.infradead.org
> 
> Probably worth adding Heiko Stuebner <heiko@sntech.de> to cc.

Before you rush over to LAKML, please could you provide your full dmesg
output from the kernel that was crashing (i.e. the dmesg you see in the
kexec'd kernel)? We've got a theory that the issue may be related to the
interrupt controller, and the dmesg output should help to establish whether
that is plausible or not.

Thanks,

Will

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-25  9:46                       ` Will Deacon
@ 2019-06-25 10:48                         ` Vicente Bergas
  2019-06-29 22:56                           ` Vicente Bergas
  0 siblings, 1 reply; 19+ messages in thread
From: Vicente Bergas @ 2019-06-25 10:48 UTC (permalink / raw)
  To: Will Deacon
  Cc: Will Deacon, Al Viro, linux-fsdevel, linux-kernel,
	Catalin Marinas, marc.zyngier

On Tuesday, June 25, 2019 11:46:02 AM CEST, Will Deacon wrote:
> [+Marc]
>
> Hi again, Vicente,
>
> On Mon, Jun 24, 2019 at 12:47:41PM +0100, Will Deacon wrote:
>> On Sat, Jun 22, 2019 at 08:02:19PM +0200, Vicente Bergas wrote: ...
>
> Before you rush over to LAKML, please could you provide your full dmesg
> output from the kernel that was crashing (i.e. the dmesg you see in the
> kexec'd kernel)? We've got a theory that the issue may be related to the
> interrupt controller, and the dmesg output should help to establish whether
> that is plausible or not.
>
> Thanks,
>
> Will

Hi Will,
the memtest is still pending...

Regarding interrupts, the kernel before kexec has this parameter:
irqchip.gicv3_nolpi=1
Thanks to Marc:
https://freenode.irclog.whitequark.org/linux-rockchip/2018-11-20#23524255

The kernel messages are here:
https://paste.debian.net/1089170/
https://paste.debian.net/1089171/

Regards,
  Vicenç.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: d_lookup: Unable to handle kernel paging request
  2019-06-25 10:48                         ` Vicente Bergas
@ 2019-06-29 22:56                           ` Vicente Bergas
  0 siblings, 0 replies; 19+ messages in thread
From: Vicente Bergas @ 2019-06-29 22:56 UTC (permalink / raw)
  To: Will Deacon
  Cc: Will Deacon, Al Viro, linux-fsdevel, linux-kernel,
	Catalin Marinas, marc.zyngier

On Tuesday, June 25, 2019 12:48:17 PM CEST, Vicente Bergas wrote:
> On Tuesday, June 25, 2019 11:46:02 AM CEST, Will Deacon wrote:
>> [+Marc]
>> 
>> Hi again, Vicente,
>> 
>> On Mon, Jun 24, 2019 at 12:47:41PM +0100, Will Deacon wrote: ...
>
> Hi Will,
> the memtest is still pending...

Hi Will,
I've just run that memtest and no issues were found. See below.
I've also noticed that it runs very early, hence the 'earlycon'.
Because of this I wondered whether it runs with interrupts disabled,
and indeed that is the case; am I wrong?
If the kernel before the kexec is corrupting memory for the currently
running kernel, what entry point does it have, besides interrupts?
Or can the corruption come through DMA from a peripheral?

> Has kexec ever worked reliably on this board?

Yes, more or less. I have experienced some issues that could have
been caused by this, such as occasional on-screen flickering or a
failure to boot once every 20 or 30 tries. But recently this
d_lookup issue appeared and it is a show-stopper.

This way of booting is used on both the Sapphire board and on Kevin.
The bootloader is https://gitlab.com/vicencb/kevinboot (not up to
date), and a similar one is used for the Sapphire board.

Regards,
  Vicenç.

Booting Linux on physical CPU 0x0000000000 [0x410fd034]
Linux version 5.2.0-rc6 (local@host) (gcc version 8.3.0 (GCC)) #1 SMP @0
Machine model: Sapphire-RK3399 Board
earlycon: uart0 at MMIO32 0x00000000ff1a0000 (options '1500000n8')
printk: bootconsole [uart0] enabled
early_memtest: # of tests: 17
  0x0000000000200000 - 0x0000000000280000 pattern 4c494e5558726c7a
  0x0000000000b62000 - 0x0000000000b65000 pattern 4c494e5558726c7a
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 4c494e5558726c7a
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 4c494e5558726c7a
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 4c494e5558726c7a
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 4c494e5558726c7a
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 4c494e5558726c7a
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 4c494e5558726c7a
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 4c494e5558726c7a
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 4c494e5558726c7a
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 4c494e5558726c7a
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 4c494e5558726c7a
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 4c494e5558726c7a
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 4c494e5558726c7a
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 4c494e5558726c7a
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 4c494e5558726c7a
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 4c494e5558726c7a
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 4c494e5558726c7a
  0x0000000000200000 - 0x0000000000280000 pattern eeeeeeeeeeeeeeee
  0x0000000000b62000 - 0x0000000000b65000 pattern eeeeeeeeeeeeeeee
  0x0000000000b7665f - 0x00000000f77e8a58 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern eeeeeeeeeeeeeeee
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern eeeeeeeeeeeeeeee
  0x0000000000200000 - 0x0000000000280000 pattern dddddddddddddddd
  0x0000000000b62000 - 0x0000000000b65000 pattern dddddddddddddddd
  0x0000000000b7665f - 0x00000000f77e8a58 pattern dddddddddddddddd
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern dddddddddddddddd
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern dddddddddddddddd
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern dddddddddddddddd
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern dddddddddddddddd
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern dddddddddddddddd
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern dddddddddddddddd
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern dddddddddddddddd
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern dddddddddddddddd
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern dddddddddddddddd
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern dddddddddddddddd
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern dddddddddddddddd
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern dddddddddddddddd
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern dddddddddddddddd
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern dddddddddddddddd
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern dddddddddddddddd
  0x0000000000200000 - 0x0000000000280000 pattern bbbbbbbbbbbbbbbb
  0x0000000000b62000 - 0x0000000000b65000 pattern bbbbbbbbbbbbbbbb
  0x0000000000b7665f - 0x00000000f77e8a58 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern bbbbbbbbbbbbbbbb
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern bbbbbbbbbbbbbbbb
  0x0000000000200000 - 0x0000000000280000 pattern 7777777777777777
  0x0000000000b62000 - 0x0000000000b65000 pattern 7777777777777777
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 7777777777777777
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 7777777777777777
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 7777777777777777
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 7777777777777777
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 7777777777777777
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 7777777777777777
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 7777777777777777
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 7777777777777777
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 7777777777777777
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 7777777777777777
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 7777777777777777
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 7777777777777777
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 7777777777777777
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 7777777777777777
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 7777777777777777
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 7777777777777777
  0x0000000000200000 - 0x0000000000280000 pattern cccccccccccccccc
  0x0000000000b62000 - 0x0000000000b65000 pattern cccccccccccccccc
  0x0000000000b7665f - 0x00000000f77e8a58 pattern cccccccccccccccc
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern cccccccccccccccc
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern cccccccccccccccc
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern cccccccccccccccc
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern cccccccccccccccc
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern cccccccccccccccc
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern cccccccccccccccc
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern cccccccccccccccc
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern cccccccccccccccc
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern cccccccccccccccc
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern cccccccccccccccc
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern cccccccccccccccc
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern cccccccccccccccc
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern cccccccccccccccc
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern cccccccccccccccc
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern cccccccccccccccc
  0x0000000000200000 - 0x0000000000280000 pattern 9999999999999999
  0x0000000000b62000 - 0x0000000000b65000 pattern 9999999999999999
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 9999999999999999
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 9999999999999999
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 9999999999999999
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 9999999999999999
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 9999999999999999
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 9999999999999999
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 9999999999999999
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 9999999999999999
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 9999999999999999
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 9999999999999999
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 9999999999999999
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 9999999999999999
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 9999999999999999
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 9999999999999999
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 9999999999999999
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 9999999999999999
  0x0000000000200000 - 0x0000000000280000 pattern 6666666666666666
  0x0000000000b62000 - 0x0000000000b65000 pattern 6666666666666666
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 6666666666666666
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 6666666666666666
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 6666666666666666
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 6666666666666666
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 6666666666666666
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 6666666666666666
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 6666666666666666
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 6666666666666666
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 6666666666666666
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 6666666666666666
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 6666666666666666
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 6666666666666666
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 6666666666666666
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 6666666666666666
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 6666666666666666
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 6666666666666666
  0x0000000000200000 - 0x0000000000280000 pattern 3333333333333333
  0x0000000000b62000 - 0x0000000000b65000 pattern 3333333333333333
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 3333333333333333
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 3333333333333333
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 3333333333333333
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 3333333333333333
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 3333333333333333
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 3333333333333333
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 3333333333333333
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 3333333333333333
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 3333333333333333
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 3333333333333333
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 3333333333333333
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 3333333333333333
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 3333333333333333
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 3333333333333333
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 3333333333333333
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 3333333333333333
  0x0000000000200000 - 0x0000000000280000 pattern 8888888888888888
  0x0000000000b62000 - 0x0000000000b65000 pattern 8888888888888888
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 8888888888888888
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 8888888888888888
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 8888888888888888
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 8888888888888888
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 8888888888888888
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 8888888888888888
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 8888888888888888
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 8888888888888888
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 8888888888888888
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 8888888888888888
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 8888888888888888
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 8888888888888888
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 8888888888888888
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 8888888888888888
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 8888888888888888
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 8888888888888888
  0x0000000000200000 - 0x0000000000280000 pattern 4444444444444444
  0x0000000000b62000 - 0x0000000000b65000 pattern 4444444444444444
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 4444444444444444
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 4444444444444444
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 4444444444444444
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 4444444444444444
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 4444444444444444
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 4444444444444444
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 4444444444444444
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 4444444444444444
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 4444444444444444
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 4444444444444444
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 4444444444444444
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 4444444444444444
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 4444444444444444
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 4444444444444444
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 4444444444444444
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 4444444444444444
  0x0000000000200000 - 0x0000000000280000 pattern 2222222222222222
  0x0000000000b62000 - 0x0000000000b65000 pattern 2222222222222222
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 2222222222222222
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 2222222222222222
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 2222222222222222
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 2222222222222222
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 2222222222222222
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 2222222222222222
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 2222222222222222
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 2222222222222222
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 2222222222222222
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 2222222222222222
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 2222222222222222
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 2222222222222222
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 2222222222222222
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 2222222222222222
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 2222222222222222
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 2222222222222222
  0x0000000000200000 - 0x0000000000280000 pattern 1111111111111111
  0x0000000000b62000 - 0x0000000000b65000 pattern 1111111111111111
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 1111111111111111
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 1111111111111111
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 1111111111111111
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 1111111111111111
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 1111111111111111
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 1111111111111111
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 1111111111111111
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 1111111111111111
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 1111111111111111
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 1111111111111111
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 1111111111111111
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 1111111111111111
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 1111111111111111
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 1111111111111111
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 1111111111111111
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 1111111111111111
  0x0000000000200000 - 0x0000000000280000 pattern aaaaaaaaaaaaaaaa
  0x0000000000b62000 - 0x0000000000b65000 pattern aaaaaaaaaaaaaaaa
  0x0000000000b7665f - 0x00000000f77e8a58 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern aaaaaaaaaaaaaaaa
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern aaaaaaaaaaaaaaaa
  0x0000000000200000 - 0x0000000000280000 pattern 5555555555555555
  0x0000000000b62000 - 0x0000000000b65000 pattern 5555555555555555
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 5555555555555555
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 5555555555555555
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 5555555555555555
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 5555555555555555
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 5555555555555555
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 5555555555555555
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 5555555555555555
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 5555555555555555
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 5555555555555555
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 5555555555555555
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 5555555555555555
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 5555555555555555
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 5555555555555555
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 5555555555555555
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 5555555555555555
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 5555555555555555
  0x0000000000200000 - 0x0000000000280000 pattern ffffffffffffffff
  0x0000000000b62000 - 0x0000000000b65000 pattern ffffffffffffffff
  0x0000000000b7665f - 0x00000000f77e8a58 pattern ffffffffffffffff
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern ffffffffffffffff
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern ffffffffffffffff
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern ffffffffffffffff
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern ffffffffffffffff
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern ffffffffffffffff
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern ffffffffffffffff
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern ffffffffffffffff
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern ffffffffffffffff
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern ffffffffffffffff
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern ffffffffffffffff
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern ffffffffffffffff
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern ffffffffffffffff
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern ffffffffffffffff
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern ffffffffffffffff
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern ffffffffffffffff
  0x0000000000200000 - 0x0000000000280000 pattern 0000000000000000
  0x0000000000b62000 - 0x0000000000b65000 pattern 0000000000000000
  0x0000000000b7665f - 0x00000000f77e8a58 pattern 0000000000000000
  0x00000000f77e8a87 - 0x00000000f77e8a88 pattern 0000000000000000
  0x00000000f77e8ab7 - 0x00000000f77e8ab8 pattern 0000000000000000
  0x00000000f77e8ae7 - 0x00000000f77e8ae8 pattern 0000000000000000
  0x00000000f77e8b17 - 0x00000000f77e8b18 pattern 0000000000000000
  0x00000000f77e8b47 - 0x00000000f77e8b48 pattern 0000000000000000
  0x00000000f77e8b74 - 0x00000000f77e8b78 pattern 0000000000000000
  0x00000000f77e8ba4 - 0x00000000f77e8ba8 pattern 0000000000000000
  0x00000000f77e8bd4 - 0x00000000f77e8bd8 pattern 0000000000000000
  0x00000000f77e8c04 - 0x00000000f77e8c08 pattern 0000000000000000
  0x00000000f77e8c34 - 0x00000000f77e8c38 pattern 0000000000000000
  0x00000000f77e8c64 - 0x00000000f77e8c68 pattern 0000000000000000
  0x00000000f77e8c94 - 0x00000000f77e8c98 pattern 0000000000000000
  0x00000000f77e8cc4 - 0x00000000f77e8cc8 pattern 0000000000000000
  0x00000000f77e8cf4 - 0x00000000f77e8cf8 pattern 0000000000000000
  0x00000000f77e8d29 - 0x00000000f77e8d30 pattern 0000000000000000
On node 0 totalpages: 1015296
  DMA32 zone: 15864 pages used for memmap
  DMA32 zone: 0 pages reserved
  DMA32 zone: 1015296 pages, LIFO batch:63
psci: probing for conduit method from DT.
psci: PSCIv1.1 detected in firmware.
psci: Using standard PSCI v0.2 function IDs
psci: MIGRATE_INFO_TYPE not supported.
psci: SMC Calling Convention v1.1
percpu: Embedded 20 pages/cpu s50200 r0 d31720 u81920
pcpu-alloc: s50200 r0 d31720 u81920 alloc=20*4096
pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 [0] 4 [0] 5 
Detected VIPT I-cache on CPU0
CPU features: detected: GIC system register CPU interface
Speculative Store Bypass Disable mitigation not required
Built 1 zonelists, mobility grouping on.  Total pages: 999432
Kernel command line:  rw root=/dev/sda2 rootwait console=tty0 
console=ttyS2,1500000n8 earlycon memtest=17
Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
Memory: 3971436K/4061184K available (6078K kernel code, 430K rwdata, 1460K 
rodata, 640K init, 407K bss, 89748K reserved, 0K cma-reserved)
random: get_random_u64 called from cache_random_seq_create+0x50/0x118 with 
crng_init=0
SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
rcu: Hierarchical RCU implementation.
rcu: 	RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=6.
rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=6
NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
GICv3: GIC: Using split EOI/Deactivate mode
GICv3: Distributor has no Range Selector support
GICv3: no VLPI support, no direct LPI support
GICv3: CPU0: found redistributor 0 region 0:0x00000000fef00000
ITS [mem 0xfee20000-0xfee3ffff]
ITS@0x00000000fee20000: allocated 65536 Devices @f2400000 (flat, esz 8, psz 
64K, shr 0)
ITS: using cache flushing for cmd queue
GICv3: using LPI property table @0x0000000000240000
GIC: using cache flushing for LPI property table
GICv3: CPU0: using allocated LPI pending table @0x0000000000250000
GICv3: GIC: PPI partition interrupt-partition-0[0] { /cpus/cpu@0[0] 
/cpus/cpu@1[1] /cpus/cpu@2[2] /cpus/cpu@3[3] }
GICv3: GIC: PPI partition interrupt-partition-1[1] { /cpus/cpu@100[4] 
/cpus/cpu@101[5] }
rockchip_mmc_get_phase: invalid clk rate
rockchip_mmc_get_phase: invalid clk rate
rockchip_mmc_get_phase: invalid clk rate
rockchip_mmc_get_phase: invalid clk rate
arch_timer: cp15 timer(s) running at 24.00MHz (phys).
clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 
0x588fe9dc0, max_idle_ns: 440795202592 ns
sched_clock: 56 bits at 24MHz, resolution 41ns, wraps every 4398046511097ns
Console: colour dummy device 240x67
printk: console [tty0] enabled
Calibrating delay loop (skipped), value calculated using timer frequency.. 
48.00 BogoMIPS (lpj=96000)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
*** VALIDATE proc ***
*** VALIDATE cgroup1 ***
*** VALIDATE cgroup2 ***
ASID allocator initialised with 32768 entries
rcu: Hierarchical SRCU implementation.
Platform MSI: interrupt-controller@fee20000 domain created
smp: Bringing up secondary CPUs ...
Detected VIPT I-cache on CPU1
GICv3: CPU1: found redistributor 1 region 0:0x00000000fef20000
GICv3: CPU1: using allocated LPI pending table @0x0000000000260000
CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
Detected VIPT I-cache on CPU2
GICv3: CPU2: found redistributor 2 region 0:0x00000000fef40000
GICv3: CPU2: using allocated LPI pending table @0x0000000000270000
CPU2: Booted secondary processor 0x0000000002 [0x410fd034]
Detected VIPT I-cache on CPU3
GICv3: CPU3: found redistributor 3 region 0:0x00000000fef60000
GICv3: CPU3: using allocated LPI pending table @0x00000000f2480000
CPU3: Booted secondary processor 0x0000000003 [0x410fd034]
CPU features: detected: EL2 vector hardening
Detected PIPT I-cache on CPU4
GICv3: CPU4: found redistributor 100 region 0:0x00000000fef80000
GICv3: CPU4: using allocated LPI pending table @0x00000000f2490000
CPU4: Booted secondary processor 0x0000000100 [0x410fd082]
Detected PIPT I-cache on CPU5
GICv3: CPU5: found redistributor 101 region 0:0x00000000fefa0000
GICv3: CPU5: using allocated LPI pending table @0x00000000f24a0000
CPU5: Booted secondary processor 0x0000000101 [0x410fd082]
smp: Brought up 1 node, 6 CPUs
SMP: Total of 6 processors activated.
CPU features: detected: 32-bit EL0 Support
CPU features: detected: CRC32 instructions
CPU: All CPU(s) started at EL2
alternatives: patching kernel code
devtmpfs: initialized
clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 
7645041785100000 ns
futex hash table entries: 2048 (order: 5, 131072 bytes)
pinctrl core: initialized pinctrl subsystem
NET: Registered protocol family 16
hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
DMA: preallocated 256 KiB pool for atomic allocations

> Regarding interrupts, the kernel before kexec has this parameter:
> irqchip.gicv3_nolpi=1
> Thanks to Marc:
> https://freenode.irclog.whitequark.org/linux-rockchip/2018-11-20#23524255
>
> The kernel messages are here:
> https://paste.debian.net/1089170/
> https://paste.debian.net/1089171/
>
> Regards,
>  Vicenç.


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2019-06-29 22:56 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-22 10:40 d_lookup: Unable to handle kernel paging request Vicente Bergas
2019-05-22 13:53 ` Al Viro
2019-05-22 15:44   ` Vicente Bergas
2019-05-22 16:29     ` Al Viro
2019-05-24 22:21       ` Vicente Bergas
2019-05-28  9:38       ` Vicente Bergas
2019-06-18 18:35         ` Al Viro
2019-06-18 18:48           ` Al Viro
2019-06-19 12:42           ` Vicente Bergas
2019-06-19 16:28             ` Al Viro
2019-06-19 16:51               ` Vicente Bergas
2019-06-19 17:06                 ` Will Deacon
2019-06-19 17:09                 ` Al Viro
2019-06-22 18:02                   ` Vicente Bergas
2019-06-24 11:47                     ` Will Deacon
2019-06-25  9:46                       ` Will Deacon
2019-06-25 10:48                         ` Vicente Bergas
2019-06-29 22:56                           ` Vicente Bergas
2019-06-19 17:04               ` Will Deacon
