linux-xfs.vger.kernel.org archive mirror
* [Bug 216566] New: [xfstests generic/648] BUG: unable to handle page fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700 [xfs]
@ 2022-10-09 17:47 bugzilla-daemon
  2022-10-09 22:47 ` Dave Chinner
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: bugzilla-daemon @ 2022-10-09 17:47 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=216566

            Bug ID: 216566
           Summary: [xfstests generic/648] BUG: unable to handle page
                    fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700
                    [xfs]
           Product: File System
           Version: 2.5
    Kernel Version: v6.1-rc0
          Hardware: All
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: XFS
          Assignee: filesystem_xfs@kernel-bugs.kernel.org
          Reporter: zlang@redhat.com
        Regression: No

xfstests generic/648 hit a kernel panic [1] on XFS with a 64k directory block
size (-n size=65536); right before the panic there was a kernel assertion
failure (not sure if it's related).

It's reproducible, but not easily. Generally I reproduced it by running
generic/648 in a loop on XFS (-n size=65536) hundreds of times.
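
A minimal sketch of that loop (assuming an xfstests checkout whose
local.config already points TEST_DEV/SCRATCH_DEV at test devices; the
path and the iteration count below are placeholders, not the exact setup):

  # Loop generic/648 with a 64k directory block size until it fails.
  cd /path/to/xfstests-dev
  export MKFS_OPTIONS="-n size=65536"   # 64k dir block size, as above
  for i in $(seq 1 500); do             # "hundreds of times"
      ./check generic/648 || break      # stop at the first failure
  done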

The last time I hit this panic on Linux, HEAD was:

commit a6afa4199d3d038fbfdff5511f7523b0e30cb774
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sat Oct 8 10:30:44 2022 -0700

    Merge tag 'mailbox-v6.1' of git://git.linaro.org/landing-teams/working/fujitsu/integration

[1]
[  397.108795] loop: Write error at byte offset 3952001024, length 4096. 
[  397.130710] loop0: writeback error on inode 1494978, offset 1630208, sector
7718752 
[  397.130778] XFS (loop0): log I/O error -5 
[  397.131327] loop: Write error at byte offset 2651811840, length 4096. 
[  397.138435] XFS (loop0): Filesystem has been shut down due to log error
(0x2). 
[  397.142446] XFS (loop0): log I/O error -5 
[  397.148884] XFS (loop0): Please unmount the filesystem and rectify the
problem(s). 
[  397.173024] XFS (loop0): Unmounting Filesystem 
[  395.005786] restraintd[7627]: *** Current Time: Sun Oct 09 10:29:31 2022 
Localwatchdog at: Tue Oct 11 10:24:31 2022 
[  398.203242] XFS (dm-0): Unmounting Filesystem 
[  398.223779] XFS (dm-0): Mounting V5 Filesystem 
[  398.364785] XFS (dm-0): Starting recovery (logdev: internal) 
[  398.987258] XFS (dm-0): Ending recovery (logdev: internal) 
[  399.000633] loop0: detected capacity change from 0 to 10346136 
[  399.735192] XFS (loop0): Mounting V5 Filesystem 
[  399.763005] XFS (loop0): Starting recovery (logdev: internal) 
[  399.816308] XFS (loop0): Bad dir block magic! 
[  399.820681] XFS: Assertion failed: 0, file: fs/xfs/xfs_buf_item_recover.c,
line: 414 
[  399.828459] ------------[ cut here ]------------ 
[  399.833080] WARNING: CPU: 97 PID: 114754 at fs/xfs/xfs_message.c:104
assfail+0x2f/0x36 [xfs] 
[  399.841633] Modules linked in: loop dm_mod rfkill intel_rapl_msr
intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common
ipmi_ssif i10nm_edac nfit x86_pkg_temp_thermal intel_powerclamp coretemp
kvm_intel kvm mlx5_ib mgag200 i2c_algo_bit drm_shmem_helper irqbypass sunrpc
ib_uverbs drm_kms_helper rapl dcdbas acpi_ipmi intel_cstate syscopyarea ipmi_si
ib_core mei_me dell_smbios sysfillrect i2c_i801 isst_if_mbox_pci isst_if_mmio
ipmi_devintf intel_uncore sysimgblt pcspkr wmi_bmof dell_wmi_descriptor
isst_if_common mei fb_sys_fops i2c_smbus intel_pch_thermal intel_vsec
ipmi_msghandler acpi_power_meter drm fuse xfs libcrc32c sd_mod t10_pi sg
mlx5_core ahci mlxfw libahci crct10dif_pclmul crc32_pclmul tls crc32c_intel
ghash_clmulni_intel libata psample megaraid_sas tg3 pci_hyperv_intf wmi 
[  399.911998] CPU: 97 PID: 114754 Comm: mount Kdump: loaded Not tainted 6.0.0+
#1 
[  399.919311] Hardware name: Dell Inc. PowerEdge R750/0PJ80M, BIOS 1.5.4
12/17/2021 
[  399.926794] RIP: 0010:assfail+0x2f/0x36 [xfs] 
[  399.931239] Code: 49 89 d0 41 89 c9 48 c7 c2 60 3e cf c0 48 89 f1 48 89 fe
48 c7 c7 6c 5e ce c0 e8 3a fe ff ff 80 3d 4a 57 0b 00 00 74 02 0f 0b <0f> 0b c3
cc cc cc cc 48 8d 45 10 48 89 e2 4c 89 e6 48 89 1c 24 48 
[  399.949991] RSP: 0018:ff7b3d2f8751b910 EFLAGS: 00010246 
[  399.955219] RAX: 0000000000000000 RBX: ff44dbd80e68de00 RCX:
000000007fffffff 
[  399.962359] RDX: 0000000000000000 RSI: 0000000000000000 RDI:
ffffffffc0ce5e6c 
[  399.969490] RBP: ff44dbd856214000 R08: 0000000000000000 R09:
000000000000000a 
[  399.976625] R10: 000000000000000a R11: f000000000000000 R12:
000000050001ddb2 
[  399.983758] R13: ff44dbd84122be00 R14: ff44dbd856214000 R15:
ff44dbd804bf5c00 
[  399.990890] FS:  00007fcd7164b800(0000) GS:ff44dbe7bfa00000(0000)
knlGS:0000000000000000 
[  399.998977] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033 
[  400.004721] CR2: 000055f074d8a3f8 CR3: 00000010c699a005 CR4:
0000000000771ee0 
[  400.011855] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000 
[  400.018991] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
0000000000000400 
[  400.026127] PKRU: 55555554 
[  400.028841] Call Trace: 
[  400.031297]  <TASK> 
[  400.033409]  xlog_recover_validate_buf_type+0x17c/0x670 [xfs] 
[  400.039235]  xlog_recover_buf_commit_pass2+0x349/0x430 [xfs] 
[  400.044979]  xlog_recover_items_pass2+0x51/0xd0 [xfs] 
[  400.050101]  xlog_recover_commit_trans+0x30f/0x360 [xfs] 
[  400.055483]  xlog_recovery_process_trans+0xf1/0x110 [xfs] 
[  400.060951]  xlog_recover_process_data+0x84/0x140 [xfs] 
[  400.066248]  xlog_do_recovery_pass+0x24f/0x6a0 [xfs] 
[  400.071282]  xlog_do_log_recovery+0x6b/0xc0 [xfs] 
[  400.076056]  xlog_do_recover+0x33/0x1f0 [xfs] 
[  400.080485]  xlog_recover+0xde/0x190 [xfs] 
[  400.084646]  xfs_log_mount+0x19f/0x340 [xfs] 
[  400.088988]  xfs_mountfs+0x44b/0x980 [xfs] 
[  400.093159]  ? xfs_filestream_get_parent+0x90/0x90 [xfs] 
[  400.098538]  xfs_fs_fill_super+0x4bc/0x900 [xfs] 
[  400.103230]  ? xfs_open_devices+0x1f0/0x1f0 [xfs] 
[  400.108011]  get_tree_bdev+0x16d/0x270 
[  400.111772]  vfs_get_tree+0x22/0xc0 
[  400.115265]  do_new_mount+0x17a/0x310 
[  400.118941]  __x64_sys_mount+0x107/0x140 
[  400.122875]  do_syscall_64+0x59/0x90 
[  400.126461]  ? syscall_exit_work+0x103/0x130 
[  400.130745]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.135543]  ? do_syscall_64+0x69/0x90 
[  400.139297]  ? do_syscall_64+0x69/0x90 
[  400.143051]  ? syscall_exit_work+0x103/0x130 
[  400.147331]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.152123]  ? do_syscall_64+0x69/0x90 
[  400.155878]  ? syscall_exit_work+0x103/0x130 
[  400.160159]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.164960]  ? do_syscall_64+0x69/0x90 
[  400.168715]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.173512]  ? do_syscall_64+0x69/0x90 
[  400.177268]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.182066]  ? do_syscall_64+0x69/0x90 
[  400.185822]  entry_SYSCALL_64_after_hwframe+0x63/0xcd 
[  400.190882] RIP: 0033:0x7fcd7143f7be 
[  400.194463] Code: 48 8b 0d 65 a6 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e
0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01
f0 ff ff 73 01 c3 48 8b 0d 32 a6 1b 00 f7 d8 64 89 01 48 
[  400.213216] RSP: 002b:00007ffdf9207bd8 EFLAGS: 00000246 ORIG_RAX:
00000000000000a5 
[  400.220791] RAX: ffffffffffffffda RBX: 0000000000000000 RCX:
00007fcd7143f7be 
[  400.227924] RDX: 000055687c0def90 RSI: 000055687c0d9630 RDI:
000055687c0ddfe0 
[  400.235065] RBP: 000055687c0d9400 R08: 0000000000000000 R09:
0000000000000001 
[  400.242198] R10: 0000000000000000 R11: 0000000000000246 R12:
0000000000000000 
[  400.249330] R13: 000055687c0def90 R14: 000055687c0ddfe0 R15:
000055687c0d9400 
[  400.256462]  </TASK> 
[  400.258654] ---[ end trace 0000000000000000 ]--- 
[  400.263417] XFS (loop0): _xfs_buf_ioapply: no buf ops on daddr 0x319a88 len
16 
[  400.270640] 00000000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
................ 
[  400.278639] 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
................ 
[  400.286640] 00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
................ 
[  400.294638] 00000030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
................ 
[  400.302639] 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
................ 
[  400.310639] 00000050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
................ 
[  400.318636] 00000060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
................ 
[  400.326636] 00000070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
................ 
[  400.334639] CPU: 97 PID: 114754 Comm: mount Kdump: loaded Tainted: G       
W          6.0.0+ #1 
[  400.343419] Hardware name: Dell Inc. PowerEdge R750/0PJ80M, BIOS 1.5.4
12/17/2021 
[  400.350897] Call Trace: 
[  400.353350]  <TASK> 
[  400.355457]  dump_stack_lvl+0x34/0x48 
[  400.359122]  _xfs_buf_ioapply+0x167/0x1b0 [xfs] 
[  400.363727]  ? wake_up_q+0x90/0x90 
[  400.367131]  __xfs_buf_submit+0x69/0x220 [xfs] 
[  400.371637]  xfs_buf_delwri_submit_buffers+0xe3/0x230 [xfs] 
[  400.377273]  xfs_buf_delwri_submit+0x36/0xc0 [xfs] 
[  400.382124]  xlog_recover_process_ophdr+0xb7/0x150 [xfs] 
[  400.387507]  xlog_recover_process_data+0x84/0x140 [xfs] 
[  400.392801]  xlog_do_recovery_pass+0x24f/0x6a0 [xfs] 
[  400.397840]  xlog_do_log_recovery+0x6b/0xc0 [xfs] 
[  400.402614]  xlog_do_recover+0x33/0x1f0 [xfs] 
[  400.407042]  xlog_recover+0xde/0x190 [xfs] 
[  400.411211]  xfs_log_mount+0x19f/0x340 [xfs] 
[  400.415553]  xfs_mountfs+0x44b/0x980 [xfs] 
[  400.419720]  ? xfs_filestream_get_parent+0x90/0x90 [xfs] 
[  400.425104]  xfs_fs_fill_super+0x4bc/0x900 [xfs] 
[  400.429790]  ? xfs_open_devices+0x1f0/0x1f0 [xfs] 
[  400.434556]  get_tree_bdev+0x16d/0x270 
[  400.438310]  vfs_get_tree+0x22/0xc0 
[  400.441804]  do_new_mount+0x17a/0x310 
[  400.445470]  __x64_sys_mount+0x107/0x140 
[  400.449395]  do_syscall_64+0x59/0x90 
[  400.452973]  ? syscall_exit_work+0x103/0x130 
[  400.457248]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.462038]  ? do_syscall_64+0x69/0x90 
[  400.465792]  ? do_syscall_64+0x69/0x90 
[  400.469546]  ? syscall_exit_work+0x103/0x130 
[  400.473817]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.478610]  ? do_syscall_64+0x69/0x90 
[  400.482363]  ? syscall_exit_work+0x103/0x130 
[  400.486634]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.491427]  ? do_syscall_64+0x69/0x90 
[  400.495182]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.499974]  ? do_syscall_64+0x69/0x90 
[  400.503727]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.508518]  ? do_syscall_64+0x69/0x90 
[  400.512272]  entry_SYSCALL_64_after_hwframe+0x63/0xcd 
[  400.517323] RIP: 0033:0x7fcd7143f7be 
[  400.520904] Code: 48 8b 0d 65 a6 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e
0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01
f0 ff ff 73 01 c3 48 8b 0d 32 a6 1b 00 f7 d8 64 89 01 48 
[  400.539648] RSP: 002b:00007ffdf9207bd8 EFLAGS: 00000246 ORIG_RAX:
00000000000000a5 
[  400.547215] RAX: ffffffffffffffda RBX: 0000000000000000 RCX:
00007fcd7143f7be 
[  400.554349] RDX: 000055687c0def90 RSI: 000055687c0d9630 RDI:
000055687c0ddfe0 
[  400.561482] RBP: 000055687c0d9400 R08: 0000000000000000 R09:
0000000000000001 
[  400.568612] R10: 0000000000000000 R11: 0000000000000246 R12:
0000000000000000 
[  400.575747] R13: 000055687c0def90 R14: 000055687c0ddfe0 R15:
000055687c0d9400 
[  400.582881]  </TASK> 
[  400.585095] BUG: unable to handle page fault for address: ff7b3d2f8f3efff8 
[  400.591971] #PF: supervisor read access in kernel mode 
[  400.597109] #PF: error_code(0x0000) - not-present page 
[  400.602248] PGD 100000067 P4D 1001b9067 PUD 1001ba067 PMD 139973067 PTE 0 
[  400.609034] Oops: 0000 [#1] PREEMPT SMP NOPTI 
[  400.613395] CPU: 97 PID: 114754 Comm: mount Kdump: loaded Tainted: G       
W          6.0.0+ #1 
[  400.622174] Hardware name: Dell Inc. PowerEdge R750/0PJ80M, BIOS 1.5.4
12/17/2021 
[  400.629652] RIP: 0010:__xfs_dir3_data_check+0x171/0x700 [xfs] 
[  400.635458] Code: c3 c0 e9 04 ff ff ff 3d 58 44 32 44 0f 84 55 ff ff ff 48
c7 c0 c9 71 c3 c0 e9 ed fe ff ff 41 8b 45 00 4c 8d 54 05 f8 4c 29 f0 <41> 8b 12
48 83 e8 08 48 c1 e8 03 0f ca 39 c2 73 3f 89 d2 4c 89 d0 
[  400.654206] RSP: 0018:ff7b3d2f8751b860 EFLAGS: 00010206 
[  400.659430] RAX: 000000000000ffc0 RBX: ff44dbd80f84d980 RCX:
d14cad4d1bf69f29 
[  400.666563] RDX: 0000000000000006 RSI: ff44dbd80f84d980 RDI:
0000000000000000 
[  400.673697] RBP: ff7b3d2f8f3e0000 R08: ff7b3d2f8751b9f0 R09:
ff7b3d2f8751b9f0 
[  400.680831] R10: ff7b3d2f8f3efff8 R11: 0000000000001000 R12:
ff44dbd856214000 
[  400.687961] R13: ff44dbd807616c00 R14: 0000000000000040 R15:
ff44dbd80e68de70 
[  400.695093] FS:  00007fcd7164b800(0000) GS:ff44dbe7bfa00000(0000)
knlGS:0000000000000000 
[  400.703182] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033 
[  400.708928] CR2: ff7b3d2f8f3efff8 CR3: 00000010c699a005 CR4:
0000000000771ee0 
[  400.716060] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000 
[  400.723191] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
0000000000000400 
[  400.730324] PKRU: 55555554 
[  400.733037] Call Trace: 
[  400.735492]  <TASK> 
[  400.737596]  xfs_dir3_block_write_verify+0x28/0x90 [xfs] 
[  400.742971]  _xfs_buf_ioapply+0x4a/0x1b0 [xfs] 
[  400.747485]  ? wake_up_q+0x90/0x90 
[  400.750892]  __xfs_buf_submit+0x69/0x220 [xfs] 
[  400.755397]  xfs_buf_delwri_submit_buffers+0xe3/0x230 [xfs] 
[  400.761022]  xfs_buf_delwri_submit+0x36/0xc0 [xfs] 
[  400.765876]  xlog_recover_process_ophdr+0xb7/0x150 [xfs] 
[  400.771259]  xlog_recover_process_data+0x84/0x140 [xfs] 
[  400.776554]  xlog_do_recovery_pass+0x24f/0x6a0 [xfs] 
[  400.781589]  xlog_do_log_recovery+0x6b/0xc0 [xfs] 
[  400.786356]  xlog_do_recover+0x33/0x1f0 [xfs] 
[  400.790775]  xlog_recover+0xde/0x190 [xfs] 
[  400.794936]  xfs_log_mount+0x19f/0x340 [xfs] 
[  400.799278]  xfs_mountfs+0x44b/0x980 [xfs] 
[  400.803445]  ? xfs_filestream_get_parent+0x90/0x90 [xfs] 
[  400.808827]  xfs_fs_fill_super+0x4bc/0x900 [xfs] 
[  400.813508]  ? xfs_open_devices+0x1f0/0x1f0 [xfs] 
[  400.818274]  get_tree_bdev+0x16d/0x270 
[  400.822028]  vfs_get_tree+0x22/0xc0 
[  400.825519]  do_new_mount+0x17a/0x310 
[  400.829186]  __x64_sys_mount+0x107/0x140 
[  400.833112]  do_syscall_64+0x59/0x90 
[  400.836690]  ? syscall_exit_work+0x103/0x130 
[  400.840962]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.845755]  ? do_syscall_64+0x69/0x90 
[  400.849508]  ? do_syscall_64+0x69/0x90 
[  400.853262]  ? syscall_exit_work+0x103/0x130 
[  400.857533]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.862327]  ? do_syscall_64+0x69/0x90 
[  400.866078]  ? syscall_exit_work+0x103/0x130 
[  400.870352]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.875144]  ? do_syscall_64+0x69/0x90 
[  400.878897]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.883691]  ? do_syscall_64+0x69/0x90 
[  400.887441]  ? syscall_exit_to_user_mode+0x12/0x30 
[  400.892237]  ? do_syscall_64+0x69/0x90 
[  400.895987]  entry_SYSCALL_64_after_hwframe+0x63/0xcd 
[  400.901039] RIP: 0033:0x7fcd7143f7be 
[  400.904619] Code: 48 8b 0d 65 a6 1b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e
0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01
f0 ff ff 73 01 c3 48 8b 0d 32 a6 1b 00 f7 d8 64 89 01 48 
[  400.923367] RSP: 002b:00007ffdf9207bd8 EFLAGS: 00000246 ORIG_RAX:
00000000000000a5 
[  400.930931] RAX: ffffffffffffffda RBX: 0000000000000000 RCX:
00007fcd7143f7be 
[  400.938063] RDX: 000055687c0def90 RSI: 000055687c0d9630 RDI:
000055687c0ddfe0 
[  400.945196] RBP: 000055687c0d9400 R08: 0000000000000000 R09:
0000000000000001 
[  400.952329] R10: 0000000000000000 R11: 0000000000000246 R12:
0000000000000000 
[  400.959464] R13: 000055687c0def90 R14: 000055687c0ddfe0 R15:
000055687c0d9400 
[  400.966594]  </TASK> 
[  400.968789] Modules linked in: loop dm_mod rfkill intel_rapl_msr
intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common
ipmi_ssif i10nm_edac nfit x86_pkg_temp_thermal intel_powerclamp coretemp
kvm_intel kvm mlx5_ib mgag200 i2c_algo_bit drm_shmem_helper irqbypass sunrpc
ib_uverbs drm_kms_helper rapl dcdbas acpi_ipmi intel_cstate syscopyarea ipmi_si
ib_core mei_me dell_smbios sysfillrect i2c_i801 isst_if_mbox_pci isst_if_mmio
ipmi_devintf intel_uncore sysimgblt pcspkr wmi_bmof dell_wmi_descriptor
isst_if_common mei fb_sys_fops i2c_smbus intel_pch_thermal intel_vsec
ipmi_msghandler acpi_power_meter drm fuse xfs libcrc32c sd_mod t10_pi sg
mlx5_core ahci mlxfw libahci crct10dif_pclmul crc32_pclmul tls crc32c_intel
ghash_clmulni_intel libata psample megaraid_sas tg3 pci_hyperv_intf wmi 
[  401.039126] CR2: ff7b3d2f8f3efff8

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.

* Re: [Bug 216566] New: [xfstests generic/648] BUG: unable to handle page fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700 [xfs]
  2022-10-09 17:47 [Bug 216566] New: [xfstests generic/648] BUG: unable to handle page fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700 [xfs] bugzilla-daemon
@ 2022-10-09 22:47 ` Dave Chinner
  2022-10-09 22:47 ` [Bug 216566] " bugzilla-daemon
  2022-10-10  1:31 ` bugzilla-daemon
  2 siblings, 0 replies; 4+ messages in thread
From: Dave Chinner @ 2022-10-09 22:47 UTC (permalink / raw)
  To: bugzilla-daemon; +Cc: linux-xfs

On Sun, Oct 09, 2022 at 05:47:49PM +0000, bugzilla-daemon@kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=216566
> 
>             Bug ID: 216566
>            Summary: [xfstests generic/648] BUG: unable to handle page
>                     fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700
>                     [xfs]
>            Product: File System
>            Version: 2.5
>     Kernel Version: v6.1-rc0
>           Hardware: All
>                 OS: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: XFS
>           Assignee: filesystem_xfs@kernel-bugs.kernel.org
>           Reporter: zlang@redhat.com
>         Regression: No
> 
> xfstests generic/648 hit a kernel panic [1] on XFS with a 64k directory
> block size (-n size=65536); right before the panic there was a kernel
> assertion failure (not sure if it's related).
>
> It's reproducible, but not easily. Generally I reproduced it by running
> generic/648 in a loop on XFS (-n size=65536) hundreds of times.
>
> The last time I hit this panic on Linux, HEAD was:

Given that there have been no changes to XFS committed in v6.1-rc0
at this point in time, this won't be an XFS regression unless you
can reproduce it on 6.0 or 5.19 kernels, too. Regardless, I'd suggest
bisection is in order to find where the problem was introduced.
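
Purely as a sketch of that suggestion (assumptions: a6afa4199d3d is the
merge-window HEAD quoted above, v6.0 is taken as the good point and should
be verified first, and repro.sh is a stand-in for a script that loops
generic/648 until it fails):

  git bisect start
  git bisect bad a6afa4199d3d       # HEAD from the report
  git bisect good v6.0              # assumed good; verify before relying on it
  # at each step: build and boot the bisected kernel, run the reproducer,
  # then mark the result and repeat until git names the first bad commit
  sh repro.sh && git bisect good || git bisect bad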

-Dave.
-- 
Dave Chinner
david@fromorbit.com

* [Bug 216566] [xfstests generic/648] BUG: unable to handle page fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700 [xfs]
  2022-10-09 17:47 [Bug 216566] New: [xfstests generic/648] BUG: unable to handle page fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700 [xfs] bugzilla-daemon
  2022-10-09 22:47 ` Dave Chinner
@ 2022-10-09 22:47 ` bugzilla-daemon
  2022-10-10  1:31 ` bugzilla-daemon
  2 siblings, 0 replies; 4+ messages in thread
From: bugzilla-daemon @ 2022-10-09 22:47 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=216566

--- Comment #1 from Dave Chinner (david@fromorbit.com) ---
On Sun, Oct 09, 2022 at 05:47:49PM +0000, bugzilla-daemon@kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=216566
> 
>             Bug ID: 216566
>            Summary: [xfstests generic/648] BUG: unable to handle page
>                     fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700
>                     [xfs]
>            Product: File System
>            Version: 2.5
>     Kernel Version: v6.1-rc0
>           Hardware: All
>                 OS: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: XFS
>           Assignee: filesystem_xfs@kernel-bugs.kernel.org
>           Reporter: zlang@redhat.com
>         Regression: No
> 
> xfstests generic/648 hit a kernel panic [1] on XFS with a 64k directory
> block size (-n size=65536); right before the panic there was a kernel
> assertion failure (not sure if it's related).
>
> It's reproducible, but not easily. Generally I reproduced it by running
> generic/648 in a loop on XFS (-n size=65536) hundreds of times.
>
> The last time I hit this panic on Linux, HEAD was:

Given that there have been no changes to XFS committed in v6.1-rc0
at this point in time, this won't be an XFS regression unless you
can reproduce it on 6.0 or 5.19 kernels, too. Regardless, I'd suggest
bisection is in order to find where the problem was introduced.

-Dave.

* [Bug 216566] [xfstests generic/648] BUG: unable to handle page fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700 [xfs]
  2022-10-09 17:47 [Bug 216566] New: [xfstests generic/648] BUG: unable to handle page fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700 [xfs] bugzilla-daemon
  2022-10-09 22:47 ` Dave Chinner
  2022-10-09 22:47 ` [Bug 216566] " bugzilla-daemon
@ 2022-10-10  1:31 ` bugzilla-daemon
  2 siblings, 0 replies; 4+ messages in thread
From: bugzilla-daemon @ 2022-10-10  1:31 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=216566

--- Comment #2 from Zorro Lang (zlang@redhat.com) ---
(In reply to Dave Chinner from comment #1)
> On Sun, Oct 09, 2022 at 05:47:49PM +0000, bugzilla-daemon@kernel.org wrote:
> > https://bugzilla.kernel.org/show_bug.cgi?id=216566
> > 
> >             Bug ID: 216566
> >            Summary: [xfstests generic/648] BUG: unable to handle page
> >                     fault, RIP: 0010:__xfs_dir3_data_check+0x171/0x700
> >                     [xfs]
> >            Product: File System
> >            Version: 2.5
> >     Kernel Version: v6.1-rc0
> >           Hardware: All
> >                 OS: Linux
> >               Tree: Mainline
> >             Status: NEW
> >           Severity: normal
> >           Priority: P1
> >          Component: XFS
> >           Assignee: filesystem_xfs@kernel-bugs.kernel.org
> >           Reporter: zlang@redhat.com
> >         Regression: No
> > 
> > xfstests generic/648 hit a kernel panic [1] on XFS with a 64k directory
> > block size (-n size=65536); right before the panic there was a kernel
> > assertion failure (not sure if it's related).
> >
> > It's reproducible, but not easily. Generally I reproduced it by running
> > generic/648 in a loop on XFS (-n size=65536) hundreds of times.
> >
> > The last time I hit this panic on Linux, HEAD was:
> 
> Given that there have been no changes to XFS committed in v6.1-rc0
> at this point in time, this won't be an XFS regression unless you
> can reproduce it on 6.0 or 5.19 kernels, too. Regardless, I'd suggest
> bisection is in order to find where the problem was introduced.

It's not a recent regression; I can even reproduce it on RHEL-9 (which is
based on the 5.14 kernel).

> 
> -Dave.

