* [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
@ 2022-04-05 4:44 bugzilla-daemon
2022-04-05 4:48 ` [Bug 215804] " bugzilla-daemon
` (14 more replies)
0 siblings, 15 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-05 4:44 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
Bug ID: 215804
Summary: [xfstests generic/670] Unable to handle kernel paging
request at virtual address fffffbffff000008
Product: File System
Version: 2.5
Kernel Version: xfs-5.18-merge-4
Hardware: All
OS: Linux
Tree: Mainline
Status: NEW
Severity: normal
Priority: P1
Component: XFS
Assignee: filesystem_xfs@kernel-bugs.kernel.org
Reporter: zlang@redhat.com
Regression: No
xfstests generic/670 hit a panic[1] on 64k directory block size XFS (mkfs.xfs
-n size=65536 -m rmapbt=1 -b size=1024):
The kernel version is 5.17+ (close to 5.18-rc1, and it contains the latest
xfs-5.18-merge-4 tag).
The kernel HEAD at test time was:
commit be2d3ecedd9911fbfd7e55cc9ceac5f8b79ae4cf
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date: Sat Apr 2 12:57:17 2022 -0700
Merge tag 'perf-tools-for-v5.18-2022-04-02' of
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
[1]
[37277.345917] run fstests generic/670 at 2022-04-03 17:02:54
[37278.883000] XFS (vda3): Mounting V5 Filesystem
[37278.891732] XFS (vda3): Ending clean mount
[37278.920425] XFS (vda3): Unmounting Filesystem
[37279.399805] XFS (vda3): Mounting V5 Filesystem
[37279.407734] XFS (vda3): Ending clean mount
[37280.068575] XFS (vda3): Unmounting Filesystem
[37280.399733] XFS (vda3): Mounting V5 Filesystem
[37280.410122] XFS (vda3): Ending clean mount
[37285.232165] Unable to handle kernel paging request at virtual address
fffffbffff000008
[37285.232776] KASAN: maybe wild-memory-access in range
[0x0003dffff8000040-0x0003dffff8000047]
[37285.233332] Mem abort info:
[37285.233520] ESR = 0x96000006
[37285.233725] EC = 0x25: DABT (current EL), IL = 32 bits
[37285.234077] SET = 0, FnV = 0
[37285.234281] EA = 0, S1PTW = 0
[37285.234544] FSC = 0x06: level 2 translation fault
[37285.234871] Data abort info:
[37285.235065] ISV = 0, ISS = 0x00000006
[37285.235319] CM = 0, WnR = 0
[37285.235517] swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000004574eb000
[37285.235953] [fffffbffff000008] pgd=0000000458c71003, p4d=0000000458c71003,
pud=0000000458c72003, pmd=0000000000000000
[37285.236651] Internal error: Oops: 96000006 [#1] SMP
[37285.236971] Modules linked in: overlay dm_zero dm_log_writes dm_thin_pool
dm_persistent_data dm_bio_prison sg dm_snapshot dm_bufio ext4 mbcache jbd2 loop
dm_flakey dm_mod tls rfkill sunrpc vfat fat drm fuse xfs libcrc32c crct10dif_ce
ghash_ce virtio_blk sha2_ce sha256_arm64 sha1_ce virtio_console virtio_net
net_failover failover virtio_mmio [last unloaded: scsi_debug]
[37285.239187] CPU: 3 PID: 3302514 Comm: xfs_io Kdump: loaded Tainted: G
W 5.17.0+ #1
[37285.239810] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[37285.240292] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[37285.240783] pc : __split_huge_pmd+0x1d8/0x34c
[37285.241097] lr : __split_huge_pmd+0x174/0x34c
[37285.241407] sp : ffff800023a56fe0
[37285.241642] x29: ffff800023a56fe0 x28: 0000000000000000 x27:
ffff0001c54d4060
[37285.242145] x26: 0000000000000000 x25: 0000000000000000 x24:
fffffc00056cf000
[37285.242661] x23: 1ffff0000474ae0a x22: ffff0007104fe630 x21:
ffff00014fab66b0
[37285.243175] x20: ffff800023a57080 x19: fffffbffff000000 x18:
0000000000000000
[37285.243689] x17: 0000000000000000 x16: ffffb109a2ec7e30 x15:
0000ffffd9035c10
[37285.244202] x14: 00000000f2040000 x13: 0000000000000000 x12:
ffff70000474aded
[37285.244715] x11: 1ffff0000474adec x10: ffff70000474adec x9 :
dfff800000000000
[37285.245230] x8 : ffff800023a56f63 x7 : 0000000000000001 x6 :
0000000000000003
[37285.245745] x5 : ffff800023a56f60 x4 : ffff70000474adec x3 :
1fffe000cd086e01
[37285.246257] x2 : 1fffff7fffe00001 x1 : 0000000000000000 x0 :
fffffbffff000008
[37285.246770] Call trace:
[37285.246952] __split_huge_pmd+0x1d8/0x34c
[37285.247246] split_huge_pmd_address+0x10c/0x1a0
[37285.247577] try_to_unmap_one+0xb64/0x125c
[37285.247878] rmap_walk_file+0x1dc/0x4b0
[37285.248159] try_to_unmap+0x134/0x16c
[37285.248427] split_huge_page_to_list+0x5ec/0xcbc
[37285.248763] truncate_inode_partial_folio+0x194/0x2ec
[37285.249128] truncate_inode_pages_range+0x2e8/0x870
[37285.249483] truncate_pagecache_range+0xa0/0xc0
[37285.249812] xfs_flush_unmap_range+0xc8/0x10c [xfs]
[37285.250316] xfs_reflink_remap_prep+0x2f4/0x3ac [xfs]
[37285.250822] xfs_file_remap_range+0x170/0x770 [xfs]
[37285.251314] do_clone_file_range+0x198/0x5e0
[37285.251629] vfs_clone_file_range+0xa8/0x63c
[37285.251942] ioctl_file_clone+0x5c/0xc0
[37285.252232] do_vfs_ioctl+0x10d4/0x1684
[37285.252517] __arm64_sys_ioctl+0xcc/0x18c
[37285.252813] invoke_syscall.constprop.0+0x74/0x1e0
[37285.253166] el0_svc_common.constprop.0+0x224/0x2c0
[37285.253525] do_el0_svc+0xa4/0xf0
[37285.253769] el0_svc+0x5c/0x160
[37285.254002] el0t_64_sync_handler+0x9c/0x120
[37285.254312] el0t_64_sync+0x174/0x178
[37285.254584] Code: 91002260 d343fc02 38e16841 35000b41 (f9400660)
[37285.255026] SMP: stopping secondary CPUs
[37285.292297] Starting crashdump kernel...
[37285.292706] Bye!
[ 0.000000] Booting Linux on physical CPU 0x0000000003 [0x413fd0c1]
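As a side note on reading the oops above: the faulting virtual address lies in the KASAN shadow region, and the "wild-memory-access" range KASAN prints can be recovered from it. A quick sketch of that arithmetic (this interpretation assumes generic KASAN with 8-byte granules; the shadow offset 0xdfff800000000000 matches x9 in the register dump):

```python
# Recover the range KASAN reported from the faulting shadow address.
# With generic KASAN: shadow = (addr >> 3) + OFFSET, so
# addr = (shadow - OFFSET) << 3, truncated to 64 bits.
KASAN_SHADOW_OFFSET = 0xdfff800000000000  # x9 in the register dump

def shadow_to_addr(shadow: int) -> int:
    return ((shadow - KASAN_SHADOW_OFFSET) << 3) & 0xFFFFFFFFFFFFFFFF

start = shadow_to_addr(0xfffffbffff000008)  # faulting address in the oops
print(hex(start), hex(start + 7))  # 0x3dffff8000040 0x3dffff8000047
```

This reproduces exactly the range [0x0003dffff8000040-0x0003dffff8000047] in the KASAN line above, confirming the fault is a shadow lookup for a bogus access, not a direct stray pointer.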
--
You may reply to this email to add a comment.
You are receiving this mail because:
You are watching the assignee of the bug.
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
@ 2022-04-05 4:48 ` bugzilla-daemon
2022-04-05 5:13 ` bugzilla-daemon
` (13 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-05 4:48 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #1 from Zorro Lang (zlang@redhat.com) ---
It's not related to the 64k directory block size: I just found the same panic
on XFS made with "-m crc=1,finobt=1,reflink=1,rmapbt=1,bigtime=1,inobtcount=1
-b size=1024". Hmm, maybe the combination of rmapbt and "-b size=1024" is
relevant? The other common factor is that both panicking jobs ran on aarch64
machines.
[36463.624185] run fstests generic/670 at 2022-04-03 15:46:45
[36465.162010] XFS (nvme0n1p3): Mounting V5 Filesystem
[36465.177275] XFS (nvme0n1p3): Ending clean mount
[36465.214655] XFS (nvme0n1p3): Unmounting Filesystem
[36465.852627] XFS (nvme0n1p3): Mounting V5 Filesystem
[36465.869171] XFS (nvme0n1p3): Ending clean mount
[36466.599985] XFS (nvme0n1p3): Unmounting Filesystem
[36467.052055] XFS (nvme0n1p3): Mounting V5 Filesystem
[36467.068257] XFS (nvme0n1p3): Ending clean mount
[36471.061110] Unable to handle kernel paging request at virtual address
fffffbfffe000008
[36471.069061] KASAN: maybe wild-memory-access in range
[0x0003dffff0000040-0x0003dffff0000047]
[36471.078001] Mem abort info:
[36471.080788] ESR = 0x96000006
[36471.083852] EC = 0x25: DABT (current EL), IL = 32 bits
[36471.089155] SET = 0, FnV = 0
[36471.092206] EA = 0, S1PTW = 0
[36471.095338] FSC = 0x06: level 2 translation fault
[36471.100205] Data abort info:
[36471.103083] ISV = 0, ISS = 0x00000006
[36471.106908] CM = 0, WnR = 0
[36471.109867] swapper pgtable: 4k pages, 48-bit VAs, pgdp=000008064712b000
[36471.116572] [fffffbfffe000008] pgd=00000806488b1003, p4d=00000806488b1003,
pud=00000806488b2003, pmd=0000000000000000
[36471.127190] Internal error: Oops: 96000006 [#1] SMP
[36471.132059] Modules linked in: overlay dm_zero dm_log_writes dm_thin_pool
dm_persistent_data dm_bio_prison sg dm_snapshot dm_bufio ext4 mbcache jbd2 loop
dm_flakey dm_mod arm_spe_pmu rfkill mlx5_ib ast acpi_ipmi ib_uverbs
drm_vram_helper drm_ttm_helper ipmi_ssif ttm drm_kms_helper ib_core fb_sys_fops
syscopyarea sysfillrect sysimgblt ipmi_devintf arm_dmc620_pmu arm_cmn
ipmi_msghandler arm_dsu_pmu cppc_cpufreq sunrpc vfat fat drm fuse xfs libcrc32c
mlx5_core crct10dif_ce ghash_ce sha2_ce sha256_arm64 sha1_ce sbsa_gwdt nvme igb
mlxfw nvme_core tls i2c_algo_bit psample pci_hyperv_intf
i2c_designware_platform i2c_designware_core xgene_hwmon [last unloaded:
scsi_debug]
[36471.190920] CPU: 34 PID: 559440 Comm: xfs_io Kdump: loaded Tainted: G
W 5.17.0+ #1
[36471.199781] Hardware name: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F16f (SCP:
1.06.20210615) 07/01/2021
[36471.209075] pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[36471.216025] pc : __split_huge_pmd+0x1d8/0x34c
[36471.220375] lr : __split_huge_pmd+0x174/0x34c
[36471.224720] sp : ffff8000648f6fe0
[36471.228023] x29: ffff8000648f6fe0 x28: 0000000000000000 x27:
ffff080113ae1f00
[36471.235150] x26: 0000000000000000 x25: 0000000000000000 x24:
fffffc200a6cd800
[36471.242276] x23: 1ffff0000c91ee0a x22: ffff08070c7959c8 x21:
ffff080771475b88
[36471.249402] x20: ffff8000648f7080 x19: fffffbfffe000000 x18:
0000000000000000
[36471.256529] x17: 0000000000000000 x16: ffffde5c81d07e30 x15:
0000fffff07a68c0
[36471.263654] x14: 00000000f2040000 x13: 0000000000000000 x12:
ffff70000c91eded
[36471.270781] x11: 1ffff0000c91edec x10: ffff70000c91edec x9 :
dfff800000000000
[36471.277907] x8 : ffff8000648f6f63 x7 : 0000000000000001 x6 :
0000000000000003
[36471.285032] x5 : ffff8000648f6f60 x4 : ffff70000c91edec x3 :
1fffe106cbc34401
[36471.292158] x2 : 1fffff7fffc00001 x1 : 0000000000000000 x0 :
fffffbfffe000008
[36471.299284] Call trace:
[36471.301719] __split_huge_pmd+0x1d8/0x34c
[36471.305718] split_huge_pmd_address+0x10c/0x1a0
[36471.310238] try_to_unmap_one+0xb64/0x125c
[36471.314326] rmap_walk_file+0x1dc/0x4b0
[36471.318151] try_to_unmap+0x134/0x16c
[36471.321803] split_huge_page_to_list+0x5ec/0xcbc
[36471.326409] truncate_inode_partial_folio+0x194/0x2ec
[36471.331451] truncate_inode_pages_range+0x2e8/0x870
[36471.336318] truncate_pagecache_range+0xa0/0xc0
[36471.340837] xfs_flush_unmap_range+0xc8/0x10c [xfs]
[36471.345850] xfs_reflink_remap_prep+0x2f4/0x3ac [xfs]
[36471.351025] xfs_file_remap_range+0x170/0x770 [xfs]
[36471.356025] do_clone_file_range+0x198/0x5e0
[36471.360286] vfs_clone_file_range+0xa8/0x63c
[36471.364545] ioctl_file_clone+0x5c/0xc0
[36471.368372] do_vfs_ioctl+0x10d4/0x1684
[36471.372197] __arm64_sys_ioctl+0xcc/0x18c
[36471.376196] invoke_syscall.constprop.0+0x74/0x1e0
[36471.380978] el0_svc_common.constprop.0+0x224/0x2c0
[36471.385845] do_el0_svc+0xa4/0xf0
[36471.389150] el0_svc+0x5c/0x160
[36471.392281] el0t_64_sync_handler+0x9c/0x120
[36471.396540] el0t_64_sync+0x174/0x178
[36471.400193] Code: 91002260 d343fc02 38e16841 35000b41 (f9400660)
[36471.406279] SMP: stopping secondary CPUs
[36471.411145] Starting crashdump kernel...
[36471.415057] Bye!
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
2022-04-05 4:48 ` [Bug 215804] " bugzilla-daemon
@ 2022-04-05 5:13 ` bugzilla-daemon
2022-04-05 5:26 ` [Bug 215804] New: " Dave Chinner
` (12 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-05 5:13 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #2 from Zorro Lang (zlang@redhat.com) ---
Hmm, another test job hit a similar panic, this time triggered by the trinity
fuzzer on XFS (-m reflink=1,rmapbt=1). Again rmapbt is enabled, and again it is
on aarch64:
[43380.585988] XFS (vda3): Mounting V5 Filesystem
[43380.596159] XFS (vda3): Ending clean mount
[43397.622777] futex_wake_op: trinity-c1 tries to shift op by -1; fix this
program
[43408.337391] futex_wake_op: trinity-c1 tries to shift op by 525; fix this
program
[43434.008520] restraintd[2046]: *** Current Time: Sun Apr 03 18:09:11 2022
Localwatchdog at: Tue Apr 05 00:08:11 2022
[43439.502831] Unable to handle kernel paging request at virtual address
fffffbffff000008
[43439.503774] KASAN: maybe wild-memory-access in range
[0x0003dffff8000040-0x0003dffff8000047]
[43439.504287] Mem abort info:
[43439.504461] ESR = 0x96000006
[43439.504651] EC = 0x25: DABT (current EL), IL = 32 bits
[43439.504978] SET = 0, FnV = 0
[43439.505168] EA = 0, S1PTW = 0
[43439.505364] FSC = 0x06: level 2 translation fault
[43439.505661] Data abort info:
[43439.505842] ISV = 0, ISS = 0x00000006
[43439.506081] CM = 0, WnR = 0
[43439.506267] swapper pgtable: 4k pages, 48-bit VAs, pgdp=000000072e56b000
[43439.506672] [fffffbffff000008] pgd=000000072fcf1003, p4d=000000072fcf1003,
pud=000000072fcf2003, pmd=0000000000000000
[43439.507533] Internal error: Oops: 96000006 [#1] SMP
[43439.507845] Modules linked in: can_isotp 8021q garp mrp bridge stp llc
vsock_loopback vmw_vsock_virtio_transport_common vsock af_key mpls_router
ip_tunnel qrtr can_bcm can_raw can pptp gre l2tp_ppp l2tp_netlink l2tp_core
pppoe pppox ppp_generic slhc crypto_user ib_core nfnetlink scsi_transport_iscsi
atm sctp ip6_udp_tunnel udp_tunnel tls rfkill sunrpc vfat fat drm fuse xfs
libcrc32c crct10dif_ce ghash_ce sha2_ce sha256_arm64 virtio_console virtio_blk
sha1_ce virtio_net net_failover failover virtio_mmio
[43439.510640] CPU: 6 PID: 518132 Comm: trinity-c3 Kdump: loaded Not tainted
5.17.0+ #1
[43439.511121] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[43439.511551] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[43439.511984] pc : __split_huge_pmd+0x1d8/0x34c
[43439.512265] lr : __split_huge_pmd+0x174/0x34c
[43439.512540] sp : ffff80000e267140
[43439.512750] x29: ffff80000e267140 x28: 0000000000000000 x27:
ffff000148b01780
[43439.513208] x26: 0000000000000000 x25: 0000000000000000 x24:
fffffc0005828700
[43439.513655] x23: 1ffff00001c4ce36 x22: ffff0000d7bb0108 x21:
ffff000116c94220
[43439.514103] x20: ffff80000e2671e0 x19: fffffbffff000000 x18:
1ffff00001c4cd43
[43439.514551] x17: 8f0100002d0e0000 x16: ffffb2d9f2347da0 x15:
0000000000000000
[43439.514999] x14: 0000000000000000 x13: 0000000000000000 x12:
ffff700001c4ce19
[43439.515449] x11: 1ffff00001c4ce18 x10: ffff700001c4ce18 x9 :
dfff800000000000
[43439.515897] x8 : ffff80000e2670c3 x7 : 0000000000000001 x6 :
0000000000000003
[43439.516346] x5 : ffff80000e2670c0 x4 : ffff700001c4ce18 x3 :
1fffe0019b499e01
[43439.516796] x2 : 1fffff7fffe00001 x1 : 0000000000000000 x0 :
fffffbffff000008
[43439.517244] Call trace:
[43439.517400] __split_huge_pmd+0x1d8/0x34c
[43439.517655] split_huge_pmd_address+0x10c/0x1a0
[43439.517943] try_to_unmap_one+0xb64/0x125c
[43439.518206] rmap_walk_file+0x1dc/0x4b0
[43439.518450] try_to_unmap+0x134/0x16c
[43439.518695] split_huge_page_to_list+0x5ec/0xcbc
[43439.518987] truncate_inode_partial_folio+0x194/0x2ec
[43439.519307] truncate_inode_pages_range+0x2e8/0x870
[43439.519615] truncate_pagecache+0x6c/0xa0
[43439.519869] truncate_setsize+0x50/0x90
[43439.520111] xfs_setattr_size+0x280/0x93c [xfs]
[43439.520545] xfs_vn_setattr_size+0xd4/0x124 [xfs]
[43439.520979] xfs_vn_setattr+0x100/0x24c [xfs]
[43439.521390] notify_change+0x720/0xbf0
[43439.521630] do_truncate+0xf4/0x194
[43439.521854] do_sys_ftruncate+0x1d8/0x2b4
[43439.522109] __arm64_sys_ftruncate+0x58/0x7c
[43439.522380] invoke_syscall.constprop.0+0x74/0x1e0
[43439.522685] el0_svc_common.constprop.0+0x224/0x2c0
[43439.522993] do_el0_svc+0xa4/0xf0
[43439.523212] el0_svc+0x5c/0x160
[43439.523415] el0t_64_sync_handler+0x9c/0x120
[43439.523684] el0t_64_sync+0x174/0x178
[43439.523920] Code: 91002260 d343fc02 38e16841 35000b41 (f9400660)
[43439.524304] SMP: stopping secondary CPUs
[43439.525427] Starting crashdump kernel...
[43439.525668] Bye!
[ 0.000000] Booting Linux on physical CPU 0x0000000006 [0x413fd0c1]
* Re: [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
2022-04-05 4:48 ` [Bug 215804] " bugzilla-daemon
2022-04-05 5:13 ` bugzilla-daemon
@ 2022-04-05 5:26 ` Dave Chinner
2022-04-05 5:27 ` [Bug 215804] " bugzilla-daemon
` (11 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: Dave Chinner @ 2022-04-05 5:26 UTC (permalink / raw)
To: bugzilla-daemon; +Cc: linux-xfs
Hi Zorro,
On Tue, Apr 05, 2022 at 04:44:35AM +0000, bugzilla-daemon@kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=215804
>
> Bug ID: 215804
> Summary: [xfstests generic/670] Unable to handle kernel paging
> request at virtual address fffffbffff000008
> Product: File System
> Version: 2.5
> Kernel Version: xfs-5.18-merge-4
> Hardware: All
> OS: Linux
> Tree: Mainline
> Status: NEW
> Severity: normal
> Priority: P1
> Component: XFS
> Assignee: filesystem_xfs@kernel-bugs.kernel.org
> Reporter: zlang@redhat.com
> Regression: No
>
> xfstests generic/670 hit a panic[1] on 64k directory block size XFS (mkfs.xfs
> -n size=65536 -m rmapbt=1 -b size=1024):
.....
> [37285.246770] Call trace:
> [37285.246952] __split_huge_pmd+0x1d8/0x34c
> [37285.247246] split_huge_pmd_address+0x10c/0x1a0
> [37285.247577] try_to_unmap_one+0xb64/0x125c
> [37285.247878] rmap_walk_file+0x1dc/0x4b0
> [37285.248159] try_to_unmap+0x134/0x16c
> [37285.248427] split_huge_page_to_list+0x5ec/0xcbc
> [37285.248763] truncate_inode_partial_folio+0x194/0x2ec
> [37285.249128] truncate_inode_pages_range+0x2e8/0x870
> [37285.249483] truncate_pagecache_range+0xa0/0xc0
That doesn't look like an XFS regression, more likely a bug in the
new large folios in the page cache feature. Can you revert commit
6795801366da ("xfs: Support large folios") and see if the problem
goes away?
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (2 preceding siblings ...)
2022-04-05 5:26 ` [Bug 215804] New: " Dave Chinner
@ 2022-04-05 5:27 ` bugzilla-daemon
2022-04-05 16:27 ` bugzilla-daemon
` (10 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-05 5:27 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #3 from Dave Chinner (david@fromorbit.com) ---
Hi Zorro,
On Tue, Apr 05, 2022 at 04:44:35AM +0000, bugzilla-daemon@kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=215804
>
> Bug ID: 215804
> Summary: [xfstests generic/670] Unable to handle kernel paging
> request at virtual address fffffbffff000008
> Product: File System
> Version: 2.5
> Kernel Version: xfs-5.18-merge-4
> Hardware: All
> OS: Linux
> Tree: Mainline
> Status: NEW
> Severity: normal
> Priority: P1
> Component: XFS
> Assignee: filesystem_xfs@kernel-bugs.kernel.org
> Reporter: zlang@redhat.com
> Regression: No
>
> xfstests generic/670 hit a panic[1] on 64k directory block size XFS (mkfs.xfs
> -n size=65536 -m rmapbt=1 -b size=1024):
.....
> [37285.246770] Call trace:
> [37285.246952] __split_huge_pmd+0x1d8/0x34c
> [37285.247246] split_huge_pmd_address+0x10c/0x1a0
> [37285.247577] try_to_unmap_one+0xb64/0x125c
> [37285.247878] rmap_walk_file+0x1dc/0x4b0
> [37285.248159] try_to_unmap+0x134/0x16c
> [37285.248427] split_huge_page_to_list+0x5ec/0xcbc
> [37285.248763] truncate_inode_partial_folio+0x194/0x2ec
> [37285.249128] truncate_inode_pages_range+0x2e8/0x870
> [37285.249483] truncate_pagecache_range+0xa0/0xc0
That doesn't look like an XFS regression, more likely a bug in the
new large folios in the page cache feature. Can you revert commit
6795801366da ("xfs: Support large folios") and see if the problem
goes away?
Cheers,
Dave.
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (3 preceding siblings ...)
2022-04-05 5:27 ` [Bug 215804] " bugzilla-daemon
@ 2022-04-05 16:27 ` bugzilla-daemon
2022-04-05 19:23 ` [Bug 215804] New: " Matthew Wilcox
` (9 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-05 16:27 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #4 from Zorro Lang (zlang@redhat.com) ---
(In reply to Dave Chinner from comment #3)
> Hi Zorro,
>
> On Tue, Apr 05, 2022 at 04:44:35AM +0000, bugzilla-daemon@kernel.org wrote:
> > https://bugzilla.kernel.org/show_bug.cgi?id=215804
> >
> > Bug ID: 215804
> > Summary: [xfstests generic/670] Unable to handle kernel paging
> > request at virtual address fffffbffff000008
> > Product: File System
> > Version: 2.5
> > Kernel Version: xfs-5.18-merge-4
> > Hardware: All
> > OS: Linux
> > Tree: Mainline
> > Status: NEW
> > Severity: normal
> > Priority: P1
> > Component: XFS
> > Assignee: filesystem_xfs@kernel-bugs.kernel.org
> > Reporter: zlang@redhat.com
> > Regression: No
> >
> > xfstests generic/670 hit a panic[1] on 64k directory block size XFS
> (mkfs.xfs
> > -n size=65536 -m rmapbt=1 -b size=1024):
> .....
> > [37285.246770] Call trace:
> > [37285.246952] __split_huge_pmd+0x1d8/0x34c
> > [37285.247246] split_huge_pmd_address+0x10c/0x1a0
> > [37285.247577] try_to_unmap_one+0xb64/0x125c
> > [37285.247878] rmap_walk_file+0x1dc/0x4b0
> > [37285.248159] try_to_unmap+0x134/0x16c
> > [37285.248427] split_huge_page_to_list+0x5ec/0xcbc
> > [37285.248763] truncate_inode_partial_folio+0x194/0x2ec
> > [37285.249128] truncate_inode_pages_range+0x2e8/0x870
> > [37285.249483] truncate_pagecache_range+0xa0/0xc0
>
> That doesn't look like an XFS regression, more likely a bug in the
> new large folios in the page cache feature. Can you revert commit
> 6795801366da ("xfs: Support large folios") and see if the problem
> goes away?
Sure, I'm going to test that, thanks!
>
> Cheers,
>
> Dave.
* Re: [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (4 preceding siblings ...)
2022-04-05 16:27 ` bugzilla-daemon
@ 2022-04-05 19:23 ` Matthew Wilcox
2022-04-05 20:48 ` Yang Shi
2022-04-05 19:23 ` [Bug 215804] " bugzilla-daemon
` (8 subsequent siblings)
14 siblings, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2022-04-05 19:23 UTC (permalink / raw)
To: bugzilla-daemon; +Cc: linux-xfs, linux-mm
On Tue, Apr 05, 2022 at 04:44:35AM +0000, bugzilla-daemon@kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=215804
[...]
> [37285.232165] Unable to handle kernel paging request at virtual address
> fffffbffff000008
> [37285.232776] KASAN: maybe wild-memory-access in range
> [0x0003dffff8000040-0x0003dffff8000047]
> [37285.233332] Mem abort info:
> [37285.233520] ESR = 0x96000006
> [37285.233725] EC = 0x25: DABT (current EL), IL = 32 bits
> [37285.234077] SET = 0, FnV = 0
> [37285.234281] EA = 0, S1PTW = 0
> [37285.234544] FSC = 0x06: level 2 translation fault
> [37285.234871] Data abort info:
> [37285.235065] ISV = 0, ISS = 0x00000006
> [37285.235319] CM = 0, WnR = 0
> [37285.235517] swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000004574eb000
> [37285.235953] [fffffbffff000008] pgd=0000000458c71003, p4d=0000000458c71003,
> pud=0000000458c72003, pmd=0000000000000000
> [37285.236651] Internal error: Oops: 96000006 [#1] SMP
> [37285.239187] CPU: 3 PID: 3302514 Comm: xfs_io Kdump: loaded Tainted: G W 5.17.0+ #1
> [37285.239810] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
> [37285.240292] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [37285.240783] pc : __split_huge_pmd+0x1d8/0x34c
> [37285.241097] lr : __split_huge_pmd+0x174/0x34c
> [37285.241407] sp : ffff800023a56fe0
> [37285.241642] x29: ffff800023a56fe0 x28: 0000000000000000 x27:
> ffff0001c54d4060
> [37285.242145] x26: 0000000000000000 x25: 0000000000000000 x24:
> fffffc00056cf000
> [37285.242661] x23: 1ffff0000474ae0a x22: ffff0007104fe630 x21:
> ffff00014fab66b0
> [37285.243175] x20: ffff800023a57080 x19: fffffbffff000000 x18:
> 0000000000000000
> [37285.243689] x17: 0000000000000000 x16: ffffb109a2ec7e30 x15:
> 0000ffffd9035c10
> [37285.244202] x14: 00000000f2040000 x13: 0000000000000000 x12:
> ffff70000474aded
> [37285.244715] x11: 1ffff0000474adec x10: ffff70000474adec x9 :
> dfff800000000000
> [37285.245230] x8 : ffff800023a56f63 x7 : 0000000000000001 x6 :
> 0000000000000003
> [37285.245745] x5 : ffff800023a56f60 x4 : ffff70000474adec x3 :
> 1fffe000cd086e01
> [37285.246257] x2 : 1fffff7fffe00001 x1 : 0000000000000000 x0 :
> fffffbffff000008
> [37285.246770] Call trace:
> [37285.246952] __split_huge_pmd+0x1d8/0x34c
> [37285.247246] split_huge_pmd_address+0x10c/0x1a0
> [37285.247577] try_to_unmap_one+0xb64/0x125c
> [37285.247878] rmap_walk_file+0x1dc/0x4b0
> [37285.248159] try_to_unmap+0x134/0x16c
> [37285.248427] split_huge_page_to_list+0x5ec/0xcbc
> [37285.248763] truncate_inode_partial_folio+0x194/0x2ec
Clearly this is due to my changes, but I'm wondering why it doesn't
happen with misaligned mappings and shmem today. Here's the path I
see as being problematic:
split_huge_page()
  split_huge_page_to_list()
    unmap_page()
      ttu_flags = ... TTU_SPLIT_HUGE_PMD ...
      try_to_unmap()
        try_to_unmap_one()
          split_huge_pmd_address()
            pmd = pmd_offset(pud, address);
            __split_huge_pmd(vma, pmd, address, freeze, folio);
              if (folio) {
                if (folio != page_folio(pmd_page(*pmd)))
I'm assuming it's crashing at that line. Calling pmd_page() on a
pmd that we haven't checked is pmd_trans_huge() seems like a really
bad idea. I probably compounded that problem by calling page_folio()
on something that's not necessarily a PMD that points to a page, but
I think the real sin here is that nobody checks before this that it's
trans_huge.
Here's Option A for fixing it: Only check pmd_page() after checking
pmd_trans_huge():
+++ b/mm/huge_memory.c
@@ -2145,15 +2145,14 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	 * pmd against. Otherwise we can end up replacing wrong folio.
 	 */
 	VM_BUG_ON(freeze && !folio);
-	if (folio) {
-		VM_WARN_ON_ONCE(!folio_test_locked(folio));
-		if (folio != page_folio(pmd_page(*pmd)))
-			goto out;
-	}
+	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));

 	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
-	    is_pmd_migration_entry(*pmd))
+	    is_pmd_migration_entry(*pmd)) {
+		if (folio && folio != page_folio(pmd_page(*pmd)))
+			goto out;
 		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
+	}

 out:
 	spin_unlock(ptl);
I can think of a few more ways of fixing it, but that one seems best.
Not tested in any meaningful way, more looking for feedback.
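[Editorial aside: the guard reordering in Option A can be modelled in a few lines of toy Python, purely to illustrate the control flow. Everything below is hypothetical stand-in code, not kernel code: a "pmd" is a dict, and pmd_page() stands in for the dereference that faulted when the pmd was a page-table pointer rather than a huge entry.]

```python
# Toy model of __split_huge_pmd()'s fixed guard ordering (Option A).
def pmd_page(pmd):
    # Dereferencing a pmd that is none of the three "huge-like" kinds
    # is the wild access that produced the oops in this report.
    if not (pmd["trans_huge"] or pmd["devmap"] or pmd["migration"]):
        raise RuntimeError("wild dereference")
    return pmd["page"]

def split_fixed(pmd, folio):
    # Check the pmd type FIRST; only then compare folios via pmd_page().
    if pmd["trans_huge"] or pmd["devmap"] or pmd["migration"]:
        if folio is not None and folio is not pmd_page(pmd):
            return "out"
        return "split"
    return "out"

# A pmd that points to a page table (none of the three kinds) is now
# never passed to pmd_page(), so no wild dereference occurs:
pte_table_pmd = {"trans_huge": False, "devmap": False,
                 "migration": False, "page": None}
print(split_fixed(pte_table_pmd, folio=object()))  # out
```

In the pre-fix ordering, the folio comparison ran before the type check, so the model's pmd_page() call would raise on pte_table_pmd; with Option A it is unreachable for that case.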
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (5 preceding siblings ...)
2022-04-05 19:23 ` [Bug 215804] New: " Matthew Wilcox
@ 2022-04-05 19:23 ` bugzilla-daemon
2022-04-05 20:48 ` bugzilla-daemon
` (7 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-05 19:23 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #5 from willy@infradead.org ---
On Tue, Apr 05, 2022 at 04:44:35AM +0000, bugzilla-daemon@kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=215804
[...]
> [37285.232165] Unable to handle kernel paging request at virtual address
> fffffbffff000008
> [37285.232776] KASAN: maybe wild-memory-access in range
> [0x0003dffff8000040-0x0003dffff8000047]
> [37285.233332] Mem abort info:
> [37285.233520] ESR = 0x96000006
> [37285.233725] EC = 0x25: DABT (current EL), IL = 32 bits
> [37285.234077] SET = 0, FnV = 0
> [37285.234281] EA = 0, S1PTW = 0
> [37285.234544] FSC = 0x06: level 2 translation fault
> [37285.234871] Data abort info:
> [37285.235065] ISV = 0, ISS = 0x00000006
> [37285.235319] CM = 0, WnR = 0
> [37285.235517] swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000004574eb000
> [37285.235953] [fffffbffff000008] pgd=0000000458c71003, p4d=0000000458c71003,
> pud=0000000458c72003, pmd=0000000000000000
> [37285.236651] Internal error: Oops: 96000006 [#1] SMP
> [37285.239187] CPU: 3 PID: 3302514 Comm: xfs_io Kdump: loaded Tainted: G
> W 5.17.0+ #1
> [37285.239810] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
> [37285.240292] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS
> BTYPE=--)
> [37285.240783] pc : __split_huge_pmd+0x1d8/0x34c
> [37285.241097] lr : __split_huge_pmd+0x174/0x34c
> [37285.241407] sp : ffff800023a56fe0
> [37285.241642] x29: ffff800023a56fe0 x28: 0000000000000000 x27:
> ffff0001c54d4060
> [37285.242145] x26: 0000000000000000 x25: 0000000000000000 x24:
> fffffc00056cf000
> [37285.242661] x23: 1ffff0000474ae0a x22: ffff0007104fe630 x21:
> ffff00014fab66b0
> [37285.243175] x20: ffff800023a57080 x19: fffffbffff000000 x18:
> 0000000000000000
> [37285.243689] x17: 0000000000000000 x16: ffffb109a2ec7e30 x15:
> 0000ffffd9035c10
> [37285.244202] x14: 00000000f2040000 x13: 0000000000000000 x12:
> ffff70000474aded
> [37285.244715] x11: 1ffff0000474adec x10: ffff70000474adec x9 :
> dfff800000000000
> [37285.245230] x8 : ffff800023a56f63 x7 : 0000000000000001 x6 :
> 0000000000000003
> [37285.245745] x5 : ffff800023a56f60 x4 : ffff70000474adec x3 :
> 1fffe000cd086e01
> [37285.246257] x2 : 1fffff7fffe00001 x1 : 0000000000000000 x0 :
> fffffbffff000008
> [37285.246770] Call trace:
> [37285.246952] __split_huge_pmd+0x1d8/0x34c
> [37285.247246] split_huge_pmd_address+0x10c/0x1a0
> [37285.247577] try_to_unmap_one+0xb64/0x125c
> [37285.247878] rmap_walk_file+0x1dc/0x4b0
> [37285.248159] try_to_unmap+0x134/0x16c
> [37285.248427] split_huge_page_to_list+0x5ec/0xcbc
> [37285.248763] truncate_inode_partial_folio+0x194/0x2ec
Clearly this is due to my changes, but I'm wondering why it doesn't
happen with misaligned mappings and shmem today. Here's the path I
see as being problematic:
split_huge_page()
  split_huge_page_to_list()
    unmap_page()
      ttu_flags = ... TTU_SPLIT_HUGE_PMD ...
      try_to_unmap()
        try_to_unmap_one()
          split_huge_pmd_address()
            pmd = pmd_offset(pud, address);
            __split_huge_pmd(vma, pmd, address, freeze, folio);
              if (folio) {
                if (folio != page_folio(pmd_page(*pmd)))
I'm assuming it's crashing at that line. Calling pmd_page() on a
pmd that we haven't checked is pmd_trans_huge() seems like a really
bad idea. I probably compounded that problem by calling page_folio()
on something that's not necessarily a PMD that points to a page, but
I think the real sin here is that nobody checks before this that it's
trans_huge.
Here's Option A for fixing it: Only check pmd_page() after checking
pmd_trans_huge():
+++ b/mm/huge_memory.c
@@ -2145,15 +2145,14 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t
*pmd,
* pmd against. Otherwise we can end up replacing wrong folio.
*/
VM_BUG_ON(freeze && !folio);
- if (folio) {
- VM_WARN_ON_ONCE(!folio_test_locked(folio));
- if (folio != page_folio(pmd_page(*pmd)))
- goto out;
- }
+ VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
- is_pmd_migration_entry(*pmd))
+ is_pmd_migration_entry(*pmd)) {
+ if (folio && folio != page_folio(pmd_page(*pmd)))
+ goto out;
__split_huge_pmd_locked(vma, pmd, range.start, freeze);
+ }
out:
spin_unlock(ptl);
I can think of a few more ways of fixing it, but that one seems best.
Not tested in any meaningful way, more looking for feedback.
--
You may reply to this email to add a comment.
You are receiving this mail because:
You are watching the assignee of the bug.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 19:23 ` [Bug 215804] New: " Matthew Wilcox
@ 2022-04-05 20:48 ` Yang Shi
0 siblings, 0 replies; 17+ messages in thread
From: Yang Shi @ 2022-04-05 20:48 UTC (permalink / raw)
To: Matthew Wilcox; +Cc: bugzilla-daemon, linux-xfs, Linux MM
On Tue, Apr 5, 2022 at 12:25 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Apr 05, 2022 at 04:44:35AM +0000, bugzilla-daemon@kernel.org wrote:
> > https://bugzilla.kernel.org/show_bug.cgi?id=215804
> [...]
> > [37285.232165] Unable to handle kernel paging request at virtual address
> > fffffbffff000008
> > [37285.232776] KASAN: maybe wild-memory-access in range
> > [0x0003dffff8000040-0x0003dffff8000047]
> > [37285.233332] Mem abort info:
> > [37285.233520] ESR = 0x96000006
> > [37285.233725] EC = 0x25: DABT (current EL), IL = 32 bits
> > [37285.234077] SET = 0, FnV = 0
> > [37285.234281] EA = 0, S1PTW = 0
> > [37285.234544] FSC = 0x06: level 2 translation fault
> > [37285.234871] Data abort info:
> > [37285.235065] ISV = 0, ISS = 0x00000006
> > [37285.235319] CM = 0, WnR = 0
> > [37285.235517] swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000004574eb000
> > [37285.235953] [fffffbffff000008] pgd=0000000458c71003, p4d=0000000458c71003,
> > pud=0000000458c72003, pmd=0000000000000000
> > [37285.236651] Internal error: Oops: 96000006 [#1] SMP
> > [37285.239187] CPU: 3 PID: 3302514 Comm: xfs_io Kdump: loaded Tainted: G W 5.17.0+ #1
> > [37285.239810] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
> > [37285.240292] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > [37285.240783] pc : __split_huge_pmd+0x1d8/0x34c
> > [37285.241097] lr : __split_huge_pmd+0x174/0x34c
> > [37285.241407] sp : ffff800023a56fe0
> > [37285.241642] x29: ffff800023a56fe0 x28: 0000000000000000 x27: ffff0001c54d4060
> > [37285.242145] x26: 0000000000000000 x25: 0000000000000000 x24: fffffc00056cf000
> > [37285.242661] x23: 1ffff0000474ae0a x22: ffff0007104fe630 x21: ffff00014fab66b0
> > [37285.243175] x20: ffff800023a57080 x19: fffffbffff000000 x18: 0000000000000000
> > [37285.243689] x17: 0000000000000000 x16: ffffb109a2ec7e30 x15: 0000ffffd9035c10
> > [37285.244202] x14: 00000000f2040000 x13: 0000000000000000 x12: ffff70000474aded
> > [37285.244715] x11: 1ffff0000474adec x10: ffff70000474adec x9 : dfff800000000000
> > [37285.245230] x8 : ffff800023a56f63 x7 : 0000000000000001 x6 : 0000000000000003
> > [37285.245745] x5 : ffff800023a56f60 x4 : ffff70000474adec x3 : 1fffe000cd086e01
> > [37285.246257] x2 : 1fffff7fffe00001 x1 : 0000000000000000 x0 : fffffbffff000008
> > [37285.246770] Call trace:
> > [37285.246952] __split_huge_pmd+0x1d8/0x34c
> > [37285.247246] split_huge_pmd_address+0x10c/0x1a0
> > [37285.247577] try_to_unmap_one+0xb64/0x125c
> > [37285.247878] rmap_walk_file+0x1dc/0x4b0
> > [37285.248159] try_to_unmap+0x134/0x16c
> > [37285.248427] split_huge_page_to_list+0x5ec/0xcbc
> > [37285.248763] truncate_inode_partial_folio+0x194/0x2ec
>
> Clearly this is due to my changes, but I'm wondering why it doesn't
> happen with misaligned mappings and shmem today. Here's the path I
> see as being problematic:
>
> split_huge_page()
> split_huge_page_to_list()
> unmap_page()
> ttu_flags = ... TTU_SPLIT_HUGE_PMD ...
> try_to_unmap()
> try_to_unmap_one()
> split_huge_pmd_address()
> pmd = pmd_offset(pud, address);
> __split_huge_pmd(vma, pmd, address, freeze, folio);
> if (folio) {
> if (folio != page_folio(pmd_page(*pmd)))
>
> I'm assuming it's crashing at that line. Calling pmd_page() on a
> pmd that we haven't checked is pmd_trans_huge() seems like a really
> bad idea. I probably compounded that problem by calling page_folio()
> on something that's not necessarily a PMD that points to a page, but
> I think the real sin here is that nobody checks before this that it's
> trans_huge.
>
> Here's Option A for fixing it: Only check pmd_page() after checking
> pmd_trans_huge():
>
> +++ b/mm/huge_memory.c
> @@ -2145,15 +2145,14 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> * pmd against. Otherwise we can end up replacing wrong folio.
> */
> VM_BUG_ON(freeze && !folio);
> - if (folio) {
> - VM_WARN_ON_ONCE(!folio_test_locked(folio));
> - if (folio != page_folio(pmd_page(*pmd)))
> - goto out;
> - }
> + VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
>
> if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
> - is_pmd_migration_entry(*pmd))
> + is_pmd_migration_entry(*pmd)) {
> + if (folio && folio != page_folio(pmd_page(*pmd)))
> + goto out;
> __split_huge_pmd_locked(vma, pmd, range.start, freeze);
> + }
>
> out:
> spin_unlock(ptl);
>
> I can think of a few more ways of fixing it, but that one seems best.
> Not tested in any meaningful way, more looking for feedback.
I agree with your analysis. That pmd may be a normal PMD, so its
so-called pfn is actually invalid. The fix looks fine to me.
>
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (7 preceding siblings ...)
2022-04-05 20:48 ` bugzilla-daemon
@ 2022-04-05 22:57 ` bugzilla-daemon
2022-04-06 4:32 ` bugzilla-daemon
` (5 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-05 22:57 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #7 from Zorro Lang (zlang@redhat.com) ---
I got the messages below from aarch64 with Linux v5.18-rc1 (which also reproduced
this bug):
# ./scripts/faddr2line vmlinux __split_huge_pmd+0x1d8/0x34c
__split_huge_pmd+0x1d8/0x34c:
_compound_head at /mnt/tests/kernel/distribution/upstream-kernel/install/kernel/./include/linux/page-flags.h:263
(inlined by) __split_huge_pmd at /mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/huge_memory.c:2150
# ./scripts/decode_stacktrace.sh vmlinux <crash_calltrace.log
[ 2129.736862] Unable to handle kernel paging request at virtual address fffffd1d59000008
[ 2129.780524] KASAN: maybe wild-memory-access in range [0x0003e8eac8000040-0x0003e8eac8000047]
[ 2129.783285] Mem abort info:
[ 2129.783997] ESR = 0x96000006
[ 2129.784732] EC = 0x25: DABT (current EL), IL = 32 bits
[ 2129.786221] SET = 0, FnV = 0
[ 2129.787015] EA = 0, S1PTW = 0
[ 2129.787944] FSC = 0x06: level 2 translation fault
[ 2129.789120] Data abort info:
[ 2129.789858] ISV = 0, ISS = 0x00000006
[ 2129.790801] CM = 0, WnR = 0
[ 2129.791542] swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000000fa88b000
[ 2129.793131] [fffffd1d59000008] pgd=10000001bf22e003, p4d=10000001bf22e003, pud=10000001bf22d003, pmd=0000000000000000
[ 2129.797297] Internal error: Oops: 96000006 [#1] SMP
[ 2129.798708] Modules linked in: tls rfkill sunrpc vfat fat drm fuse xfs libcrc32c crct10dif_ce ghash_ce sha2_ce virtio_console virtio_blk sha256_arm64 sha1_ce virtio_net net_failover failover virtio_mmio
[ 2129.805211] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[ 2129.806925] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 2129.808682] pc : __split_huge_pmd
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/./include/linux/page-flags.h:263
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/huge_memory.c:2150)
[ 2129.809909] lr : __split_huge_pmd
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/./arch/arm64/include/asm/pgtable.h:387
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/huge_memory.c:2150)
[ 2129.811003] sp : ffff80000e5a6fe0
[ 2129.811834] x29: ffff80000e5a6fe0 x28: 0000000000000000 x27: ffff4757455eede0
[ 2129.813645] x26: 0000000000000000 x25: 0000000000000000 x24: fffffd1d5eeb4800
[ 2129.815412] x23: 1ffff00001cb4e0a x22: ffff4757943b0a50 x21: ffff475755a56270
[ 2129.817219] x20: ffff80000e5a7080 x19: fffffd1d59000000 x18: 0000000000000000
[ 2129.819029] x17: 0000000000000000 x16: ffffb625b8e67e20 x15: 1fffe8eaf65232e9
[ 2129.820840] x14: 0000000000000000 x13: 0000000000000000 x12: ffff700001cb4ded
[ 2129.822654] x11: 1ffff00001cb4dec x10: ffff700001cb4dec x9 : dfff800000000000
[ 2129.824447] x8 : ffff80000e5a6f63 x7 : 0000000000000001 x6 : 0000000000000003
[ 2129.826256] x5 : ffff80000e5a6f60 x4 : ffff700001cb4dec x3 : 1fffe8eaf8fd6c01
[ 2129.828045] x2 : 1fffffa3ab200001 x1 : 0000000000000000 x0 : fffffd1d59000008
[ 2129.829858] Call trace:
[ 2129.830506] __split_huge_pmd
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/./include/linux/page-flags.h:263
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/huge_memory.c:2150)
[ 2129.831525] split_huge_pmd_address
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/huge_memory.c:2199)
[ 2129.832667] try_to_unmap_one
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/internal.h:504
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/rmap.c:1452)
[ 2129.833719] rmap_walk_file
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/rmap.c:2323)
[ 2129.834684] try_to_unmap
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/rmap.c:2352
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/rmap.c:1726)
[ 2129.835628] split_huge_page_to_list
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/./arch/arm64/include/asm/irqflags.h:70
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/./arch/arm64/include/asm/irqflags.h:98
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/huge_memory.c:2567)
[ 2129.836811] truncate_inode_partial_folio
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/truncate.c:243)
[ 2129.838119] truncate_inode_pages_range
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/truncate.c:381)
[ 2129.839360] truncate_pagecache_range
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/mm/truncate.c:868)
[ 2129.840518] xfs_flush_unmap_range
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/xfs/xfs_bmap_util.c:953)
xfs
[ 2129.842300] xfs_reflink_remap_prep
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/xfs/xfs_reflink.c:1372)
xfs
[ 2129.843932] xfs_file_remap_range
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/xfs/xfs_file.c:1129)
xfs
[ 2129.845495] do_clone_file_range
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/remap_range.c:383)
[ 2129.846573] vfs_clone_file_range
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/remap_range.c:401)
[ 2129.847646] ioctl_file_clone
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/ioctl.c:241)
[ 2129.848615] do_vfs_ioctl
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/ioctl.c:823)
[ 2129.849606] __arm64_sys_ioctl
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/ioctl.c:869
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/ioctl.c:856
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/fs/ioctl.c:856)
[ 2129.850630] invoke_syscall.constprop.0
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/arch/arm64/kernel/syscall.c:38
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/arch/arm64/kernel/syscall.c:52)
[ 2129.851866] el0_svc_common.constprop.0
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/arch/arm64/kernel/syscall.c:158)
[ 2129.853118] do_el0_svc
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/arch/arm64/kernel/syscall.c:182)
[ 2129.853969] el0_svc
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/arch/arm64/kernel/entry-common.c:133
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/arch/arm64/kernel/entry-common.c:142
/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/arch/arm64/kernel/entry-common.c:617)
[ 2129.854850] el0t_64_sync_handler
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/arch/arm64/kernel/entry-common.c:635)
[ 2129.855950] el0t_64_sync
(/mnt/tests/kernel/distribution/upstream-kernel/install/kernel/arch/arm64/kernel/entry.S:581)
[ 2129.856898] Code: 91002260 d343fc02 38e16841 35000b41 (f9400660)
All code
========
0: 91002260 add x0, x19, #0x8
4: d343fc02 lsr x2, x0, #3
8: 38e16841 ldrsb w1, [x2, x1]
c: 35000b41 cbnz w1, 0x174
10:* f9400660 ldr x0, [x19, #8] <-- trapping instruction
Code starting with the faulting instruction
===========================================
0: f9400660 ldr x0, [x19, #8]
[ 2129.858468] SMP: stopping secondary CPUs
[ 2129.862796] Starting crashdump kernel...
[ 2129.863839] Bye!
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (8 preceding siblings ...)
2022-04-05 22:57 ` bugzilla-daemon
@ 2022-04-06 4:32 ` bugzilla-daemon
2022-04-06 12:57 ` bugzilla-daemon
` (4 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-06 4:32 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #8 from Zorro Lang (zlang@redhat.com) ---
(In reply to Dave Chinner from comment #3)
> Hi Zorro,
>
> On Tue, Apr 05, 2022 at 04:44:35AM +0000, bugzilla-daemon@kernel.org wrote:
> > https://bugzilla.kernel.org/show_bug.cgi?id=215804
> >
> > Bug ID: 215804
> > Summary: [xfstests generic/670] Unable to handle kernel paging
> > request at virtual address fffffbffff000008
> > Product: File System
> > Version: 2.5
> > Kernel Version: xfs-5.18-merge-4
> > Hardware: All
> > OS: Linux
> > Tree: Mainline
> > Status: NEW
> > Severity: normal
> > Priority: P1
> > Component: XFS
> > Assignee: filesystem_xfs@kernel-bugs.kernel.org
> > Reporter: zlang@redhat.com
> > Regression: No
> >
> > xfstests generic/670 hit a panic[1] on 64k directory block size XFS
> (mkfs.xfs
> > -n size=65536 -m rmapbt=1 -b size=1024):
> .....
> > [37285.246770] Call trace:
> > [37285.246952] __split_huge_pmd+0x1d8/0x34c
> > [37285.247246] split_huge_pmd_address+0x10c/0x1a0
> > [37285.247577] try_to_unmap_one+0xb64/0x125c
> > [37285.247878] rmap_walk_file+0x1dc/0x4b0
> > [37285.248159] try_to_unmap+0x134/0x16c
> > [37285.248427] split_huge_page_to_list+0x5ec/0xcbc
> > [37285.248763] truncate_inode_partial_folio+0x194/0x2ec
> > [37285.249128] truncate_inode_pages_range+0x2e8/0x870
> > [37285.249483] truncate_pagecache_range+0xa0/0xc0
>
> That doesn't look like an XFS regression, more likely a bug in the
> new large folios in the page cache feature. Can you revert commit
> 6795801366da ("xfs: Support large folios") and see if the problem
> goes away?
Hi Dave,
You're right: after reverting that patch, this bug can't be reproduced anymore.
Thanks,
Zorro
>
> Cheers,
>
> Dave.
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (9 preceding siblings ...)
2022-04-06 4:32 ` bugzilla-daemon
@ 2022-04-06 12:57 ` bugzilla-daemon
2022-04-07 2:18 ` bugzilla-daemon
` (3 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-06 12:57 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
Matthew Wilcox (matthew@wil.cx) changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |matthew@wil.cx
--- Comment #9 from Matthew Wilcox (matthew@wil.cx) ---
Created attachment 300704
--> https://bugzilla.kernel.org/attachment.cgi?id=300704&action=edit
Proposed fix
Please test on arm64; generic/670 passes on x86-64 with this patch, but then it
passed before.
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (10 preceding siblings ...)
2022-04-06 12:57 ` bugzilla-daemon
@ 2022-04-07 2:18 ` bugzilla-daemon
2022-04-07 2:33 ` bugzilla-daemon
` (2 subsequent siblings)
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-07 2:18 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #10 from Zorro Lang (zlang@redhat.com) ---
(In reply to Matthew Wilcox from comment #9)
> Created attachment 300704 [details]
> Proposed fix
>
> Please test on arm64; generic/670 passes on x86-64 with this patch, but then
> it passed before.
Hi Matthew,
The reproducer for this bug passed on aarch64 with this patch. But I just hit
another panic on x86_64, shown below[1], while doing regression testing (running
trinity). As it's not 100% reproducible, I'm trying to reproduce it without your
patch. If you think it's a separate issue rather than a regression from your
patch, I'll file another bug to track it.
Thanks,
Zorro
[1]
[ 361.335242] futex_wake_op: trinity-c9 tries to shift op by -354; fix this program
[ 367.675001] futex_wake_op: trinity-c19 tries to shift op by -608; fix this program
[ 383.028587] page:00000000b6110ce7 refcount:6 mapcount:0 mapping:00000000fd87c1f3 index:0x174 pfn:0x8d6c00
[ 383.039316] head:00000000b6110ce7 order:9 compound_mapcount:0 compound_pincount:0
[ 383.047703] aops:xfs_address_space_operations [xfs] ino:a6 dentry name:"trinity-testfile2"
[ 383.057131] flags: 0x57ffffc0012005(locked|uptodate|private|head|node=1|zone=2|lastcpupid=0x1fffff)
[ 383.067258] raw: 0057ffffc0012005 0000000000000000 dead000000000122 ffff888136653410
[ 383.075925] raw: 0000000000000174 ffff88810bee5900 00000006ffffffff 0000000000000000
[ 383.084589] page dumped because: VM_BUG_ON_FOLIO(folio_nr_pages(old) != nr_pages)
[ 383.092987] ------------[ cut here ]------------
[ 383.098154] kernel BUG at mm/memcontrol.c:6857!
[ 383.103235] invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI
[ 383.109456] CPU: 16 PID: 22651 Comm: trinity-c14 Kdump: loaded Not tainted 5.18.0-rc1+ #1
[ 383.118586] Hardware name: Dell Inc. PowerEdge R430/0CN7X8, BIOS 2.8.0 05/23/2018
[ 383.126938] RIP: 0010:mem_cgroup_migrate+0x21f/0x300
[ 383.132483] Code: 48 89 ef e8 73 78 e7 ff 0f 0b 48 c7 c6 20 0a d8 94 48 89 ef e8 62 78 e7 ff 0f 0b 48 c7 c6 80 0a d8 94 48 89 ef e8 51 78 e7 ff <0f> 0b e8 9a 2b ba ff 89 de 4c 89 ef e8 c0 3c ff ff 48 89 ea 48 b8
[ 383.153442] RSP: 0018:ffffc90023f1f6f8 EFLAGS: 00010282
[ 383.159275] RAX: 0000000000000045 RBX: 0000000000000200 RCX: 0000000000000000
[ 383.167239] RDX: 0000000000000001 RSI: ffffffff94ea1540 RDI: fffff520047e3ecf
[ 383.175202] RBP: ffffea00235b0000 R08: 0000000000000045 R09: ffff8888091fda47
[ 383.183165] R10: ffffed110123fb48 R11: 0000000000000001 R12: ffffea0005f59b00
[ 383.191130] R13: 0000000000000000 R14: ffffea00235b0034 R15: ffff88810bee5900
[ 383.199094] FS: 00007fda9afb2740(0000) GS:ffff888809000000(0000) knlGS:0000000000000000
[ 383.208123] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 383.214535] CR2: 00007fda9a36c07c CR3: 0000000182c78001 CR4: 00000000003706e0
[ 383.222498] DR0: 00007fda9aecd000 DR1: 00007fda9aece000 DR2: 0000000000000000
[ 383.230461] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 00000000000f0602
[ 383.238424] Call Trace:
[ 383.241151] <TASK>
[ 383.243490] iomap_migrate_page+0xdc/0x490
[ 383.248068] move_to_new_page+0x1fa/0xdf0
[ 383.252545] ? remove_migration_ptes+0xf0/0xf0
[ 383.257497] ? try_to_migrate+0x13d/0x260
[ 383.261975] ? try_to_unmap+0x150/0x150
[ 383.266248] ? try_to_unmap_one+0x1cd0/0x1cd0
[ 383.271110] ? anon_vma_ctor+0xe0/0xe0
[ 383.275294] ? page_get_anon_vma+0x240/0x240
[ 383.280064] __unmap_and_move+0xc38/0x1090
[ 383.284638] ? unmap_and_move_huge_page+0x1210/0x1210
[ 383.290278] ? __lock_release+0x4bd/0x9f0
[ 383.294759] ? alloc_migration_target+0x267/0x8d0
[ 383.300015] unmap_and_move+0xd6/0xe50
[ 383.304209] ? migrate_page+0x250/0x250
[ 383.308496] migrate_pages+0x6c5/0x12a0
[ 383.312778] ? migrate_page+0x250/0x250
[ 383.317063] ? buffer_migrate_page_norefs+0x10/0x10
[ 383.322510] ? sched_clock_cpu+0x15/0x1b0
[ 383.326991] move_pages_and_store_status.isra.0+0xe9/0x1b0
[ 383.333117] ? migrate_pages+0x12a0/0x12a0
[ 383.337692] ? __might_fault+0xb8/0x160
[ 383.341979] do_pages_move+0x343/0x450
[ 383.346166] ? move_pages_and_store_status.isra.0+0x1b0/0x1b0
[ 383.352587] ? find_mm_struct+0x353/0x5c0
[ 383.357065] kernel_move_pages+0x13c/0x1e0
[ 383.361641] ? do_pages_move+0x450/0x450
[ 383.366024] ? ktime_get_coarse_real_ts64+0x128/0x160
[ 383.371666] ? lockdep_hardirqs_on+0x79/0x100
[ 383.376530] ? ktime_get_coarse_real_ts64+0x128/0x160
[ 383.382176] __x64_sys_move_pages+0xdc/0x1b0
[ 383.386951] ? syscall_trace_enter.constprop.0+0x179/0x250
[ 383.393081] do_syscall_64+0x3b/0x90
[ 383.397064] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 383.402706] RIP: 0033:0x7fda9ac43dfd
[ 383.406698] Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d fb 5f 1b 00 f7 d8 64 89 01 48
[ 383.427647] RSP: 002b:00007ffde3a9cb48 EFLAGS: 00000246 ORIG_RAX: 0000000000000117
[ 383.436092] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fda9ac43dfd
[ 383.444057] RDX: 00000000022b0760 RSI: 0000000000000038 RDI: 0000000000000000
[ 383.452020] RBP: 00007fda9af49000 R08: 00000000022ac6f0 R09: 0000000000000000
[ 383.459983] R10: 00000000022ac600 R11: 0000000000000246 R12: 0000000000000117
[ 383.467946] R13: 00007fda9afb26c0 R14: 00007fda9af49058 R15: 00007fda9af49000
[ 383.475917] </TASK>
[ 383.478355] Modules linked in: 8021q garp mrp bridge stp llc vsock_loopback
vmw_vsock_virtio_transport_common ieee802154_socket ieee802154
vmw_vsock_vmci_transport vsock vmw_vmci mpls_router ip_tunnel af_key qrtr hidp
bnep rfcomm bluetooth can_bcm can_raw can pptp gre l2tp_ppp l2tp_netlink
l2tp_core pppoe pppox ppp_generic slhc crypto_user ib_core nfnetlink
scsi_transport_iscsi atm sctp ip6_udp_tunnel udp_tunnel tls iTCO_wdt
iTCO_vendor_support intel_rapl_msr dell_wmi_descriptor video dcdbas
intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common sb_edac
x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm mgag200
i2c_algo_bit drm_shmem_helper drm_kms_helper irqbypass rapl intel_cstate
intel_uncore syscopyarea rfkill sysfillrect mei_me sysimgblt joydev fb_sys_fops
pcspkr ipmi_ssif mxm_wmi mei lpc_ich ipmi_si ipmi_devintf ipmi_msghandler
acpi_power_meter sunrpc drm fuse xfs libcrc32c sd_mod t10_pi
crc64_rocksoft_generic crc64_rocksoft crc64 sg crct10dif_pclmul
[ 383.478527] crc32_pclmul crc32c_intel ahci ghash_clmulni_intel libahci
libata tg3 megaraid_sas wmi
[ 383.585790] ---[ end trace 0000000000000000 ]---
[ 383.611622] RIP: 0010:mem_cgroup_migrate+0x21f/0x300
[ 383.617187] Code: 48 89 ef e8 73 78 e7 ff 0f 0b 48 c7 c6 20 0a d8 94 48 89 ef e8 62 78 e7 ff 0f 0b 48 c7 c6 80 0a d8 94 48 89 ef e8 51 78 e7 ff <0f> 0b e8 9a 2b ba ff 89 de 4c 89 ef e8 c0 3c ff ff 48 89 ea 48 b8
[ 383.638159] RSP: 0018:ffffc90023f1f6f8 EFLAGS: 00010282
[ 383.644005] RAX: 0000000000000045 RBX: 0000000000000200 RCX: 0000000000000000
[ 383.651983] RDX: 0000000000000001 RSI: ffffffff94ea1540 RDI: fffff520047e3ecf
[ 383.659955] RBP: ffffea00235b0000 R08: 0000000000000045 R09: ffff8888091fda47
[ 383.667927] R10: ffffed110123fb48 R11: 0000000000000001 R12: ffffea0005f59b00
[ 383.675901] R13: 0000000000000000 R14: ffffea00235b0034 R15: ffff88810bee5900
[ 383.683875] FS: 00007fda9afb2740(0000) GS:ffff888809000000(0000) knlGS:0000000000000000
[ 383.692917] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 383.699331] CR2: 00007fda9a36c07c CR3: 0000000182c78001 CR4: 00000000003706e0
[ 383.707303] DR0: 00007fda9aecd000 DR1: 00007fda9aece000 DR2: 0000000000000000
[ 383.715281] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 00000000000f0602
[ 385.747373] trinity-main (14692) used greatest stack depth: 20912 bytes left
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (11 preceding siblings ...)
2022-04-07 2:18 ` bugzilla-daemon
@ 2022-04-07 2:33 ` bugzilla-daemon
2022-04-07 2:54 ` bugzilla-daemon
2022-04-08 18:54 ` bugzilla-daemon
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-07 2:33 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #11 from Matthew Wilcox (matthew@wil.cx) ---
Ah, the migrate problem. I have patches already:
https://lore.kernel.org/linux-mm/20220404193006.1429250-1-willy@infradead.org/
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (12 preceding siblings ...)
2022-04-07 2:33 ` bugzilla-daemon
@ 2022-04-07 2:54 ` bugzilla-daemon
2022-04-08 18:54 ` bugzilla-daemon
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-07 2:54 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
--- Comment #12 from Zorro Lang (zlang@redhat.com) ---
(In reply to Matthew Wilcox from comment #11)
> Ah, the migrate problem. I have patches already:
>
> https://lore.kernel.org/linux-mm/20220404193006.1429250-1-willy@infradead.org/
Great! Since it's a known upstream issue, I'm going to cancel my further
testing on it.
I've reproduced this bug on 5.18-rc1, and the test then passed on 5.18-rc1 with
your patch, on the same distro version and arch. If my tier0 regression test
doesn't find an obvious regression, it's a good patch for me :)
Thanks,
Zorro
* [Bug 215804] [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
` (13 preceding siblings ...)
2022-04-07 2:54 ` bugzilla-daemon
@ 2022-04-08 18:54 ` bugzilla-daemon
14 siblings, 0 replies; 17+ messages in thread
From: bugzilla-daemon @ 2022-04-08 18:54 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=215804
Matthew Wilcox (matthew@wil.cx) changed:
           What    |Removed    |Added
----------------------------------------------------------------------------
         Status    |NEW        |RESOLVED
     Resolution    |---        |CODE_FIX
--- Comment #13 from Matthew Wilcox (matthew@wil.cx) ---
Linus pulled the fix(es) earlier today
end of thread, other threads:[~2022-04-08 18:54 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-05 4:44 [Bug 215804] New: [xfstests generic/670] Unable to handle kernel paging request at virtual address fffffbffff000008 bugzilla-daemon
2022-04-05 4:48 ` [Bug 215804] " bugzilla-daemon
2022-04-05 5:13 ` bugzilla-daemon
2022-04-05 5:26 ` [Bug 215804] New: " Dave Chinner
2022-04-05 5:27 ` [Bug 215804] " bugzilla-daemon
2022-04-05 16:27 ` bugzilla-daemon
2022-04-05 19:23 ` [Bug 215804] New: " Matthew Wilcox
2022-04-05 20:48 ` Yang Shi
2022-04-05 19:23 ` [Bug 215804] " bugzilla-daemon
2022-04-05 20:48 ` bugzilla-daemon
2022-04-05 22:57 ` bugzilla-daemon
2022-04-06 4:32 ` bugzilla-daemon
2022-04-06 12:57 ` bugzilla-daemon
2022-04-07 2:18 ` bugzilla-daemon
2022-04-07 2:33 ` bugzilla-daemon
2022-04-07 2:54 ` bugzilla-daemon
2022-04-08 18:54 ` bugzilla-daemon