* [syzbot] [kernfs?] possible deadlock in kernfs_seq_start
@ 2024-04-27 11:09 syzbot
  2024-05-08  5:36 ` syzbot
  2024-05-08 15:13 ` syzbot
  0 siblings, 2 replies; 9+ messages in thread
From: syzbot @ 2024-04-27 11:09 UTC
  To: gregkh, linux-fsdevel, linux-kernel, syzkaller-bugs, tj

Hello,

syzbot found the following issue on:

HEAD commit:    71b1543c83d6 Merge tag '6.9-rc5-ksmbd-fixes' of git://git...
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=132f3973180000
kernel config:  https://syzkaller.appspot.com/x/.config?x=5a05c230e142f2bc
dashboard link: https://syzkaller.appspot.com/bug?extid=4c493dcd5a68168a94b2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/c2ffdc70f4f4/disk-71b1543c.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3f55cd9df875/vmlinux-71b1543c.xz
kernel image: https://storage.googleapis.com/syzbot-assets/c9547057857d/bzImage-71b1543c.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4c493dcd5a68168a94b2@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.9.0-rc5-syzkaller-00031-g71b1543c83d6 #0 Not tainted
------------------------------------------------------
syz-executor.1/17062 is trying to acquire lock:
ffff88806b638488 (&of->mutex){+.+.}-{3:3}, at: kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154

but task is already holding lock:
ffff88806fe359e0 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb7/0xd60 fs/seq_file.c:182

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (&p->lock){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       seq_read_iter+0xb7/0xd60 fs/seq_file.c:182
       call_read_iter include/linux/fs.h:2104 [inline]
       copy_splice_read+0x662/0xb60 fs/splice.c:365
       do_splice_read fs/splice.c:985 [inline]
       splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
       do_sendfile+0x515/0xdc0 fs/read_write.c:1301
       __do_sys_sendfile64 fs/read_write.c:1362 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (&pipe->mutex){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       iter_file_splice_write+0x335/0x14e0 fs/splice.c:687
       do_splice_from fs/splice.c:941 [inline]
       do_splice+0xd77/0x1880 fs/splice.c:1354
       __do_splice fs/splice.c:1436 [inline]
       __do_sys_splice fs/splice.c:1652 [inline]
       __se_sys_splice+0x331/0x4a0 fs/splice.c:1634
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (sb_writers#4){.+.+}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write include/linux/fs.h:1664 [inline]
       sb_start_write+0x4d/0x1c0 include/linux/fs.h:1800
       mnt_want_write+0x3f/0x90 fs/namespace.c:409
       ovl_create_object+0x13b/0x370 fs/overlayfs/dir.c:629
       lookup_open fs/namei.c:3497 [inline]
       open_last_lookups fs/namei.c:3566 [inline]
       path_openat+0x1425/0x3240 fs/namei.c:3796
       do_filp_open+0x235/0x490 fs/namei.c:3826
       do_sys_openat2+0x13e/0x1d0 fs/open.c:1406
       do_sys_open fs/open.c:1421 [inline]
       __do_sys_openat fs/open.c:1437 [inline]
       __se_sys_openat fs/open.c:1432 [inline]
       __x64_sys_openat+0x247/0x2a0 fs/open.c:1432
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1526
       inode_lock_shared include/linux/fs.h:805 [inline]
       lookup_slow+0x45/0x70 fs/namei.c:1708
       walk_component+0x2e1/0x410 fs/namei.c:2004
       lookup_last fs/namei.c:2461 [inline]
       path_lookupat+0x16f/0x450 fs/namei.c:2485
       filename_lookup+0x256/0x610 fs/namei.c:2514
       kern_path+0x35/0x50 fs/namei.c:2622
       lookup_bdev+0xc5/0x290 block/bdev.c:1136
       resume_store+0x1a0/0x710 kernel/power/hibernate.c:1235
       kernfs_fop_write_iter+0x3a1/0x500 fs/kernfs/file.c:334
       call_write_iter include/linux/fs.h:2110 [inline]
       new_sync_write fs/read_write.c:497 [inline]
       vfs_write+0xa84/0xcb0 fs/read_write.c:590
       ksys_write+0x1a0/0x2c0 fs/read_write.c:643
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&of->mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
       __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
       seq_read_iter+0x3d0/0xd60 fs/seq_file.c:225
       call_read_iter include/linux/fs.h:2104 [inline]
       copy_splice_read+0x662/0xb60 fs/splice.c:365
       do_splice_read fs/splice.c:985 [inline]
       splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
       do_sendfile+0x515/0xdc0 fs/read_write.c:1301
       __do_sys_sendfile64 fs/read_write.c:1362 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &of->mutex --> &pipe->mutex --> &p->lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(&pipe->mutex);
                               lock(&p->lock);
  lock(&of->mutex);

 *** DEADLOCK ***

2 locks held by syz-executor.1/17062:
 #0: ffff88802d049468 (&pipe->mutex){+.+.}-{3:3}, at: splice_file_to_pipe+0x2e/0x500 fs/splice.c:1292
 #1: ffff88806fe359e0 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb7/0xd60 fs/seq_file.c:182

stack backtrace:
CPU: 0 PID: 17062 Comm: syz-executor.1 Not tainted 6.9.0-rc5-syzkaller-00031-g71b1543c83d6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
 __mutex_lock_common kernel/locking/mutex.c:608 [inline]
 __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
 kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
 seq_read_iter+0x3d0/0xd60 fs/seq_file.c:225
 call_read_iter include/linux/fs.h:2104 [inline]
 copy_splice_read+0x662/0xb60 fs/splice.c:365
 do_splice_read fs/splice.c:985 [inline]
 splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
 do_sendfile+0x515/0xdc0 fs/read_write.c:1301
 __do_sys_sendfile64 fs/read_write.c:1362 [inline]
 __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f50be67dea9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f50bf41a0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f50be7ac1f0 RCX: 00007f50be67dea9
RDX: 0000000000000000 RSI: 0000000000000007 RDI: 0000000000000005
RBP: 00007f50be6ca4a4 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000004 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f50be7ac1f0 R15: 00007ffe78344d68
 </TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

* Re: [syzbot] [kernfs?] possible deadlock in kernfs_seq_start
  2024-04-27 11:09 [syzbot] [kernfs?] possible deadlock in kernfs_seq_start syzbot
@ 2024-05-08  5:36 ` syzbot
  2024-05-08 23:19   ` Hillf Danton
  2024-05-08 15:13 ` syzbot
  1 sibling, 1 reply; 9+ messages in thread
From: syzbot @ 2024-05-08  5:36 UTC
  To: gregkh, linux-fsdevel, linux-kernel, syzkaller-bugs, tj

syzbot has found a reproducer for the following issue on:

HEAD commit:    dccb07f2914c Merge tag 'for-6.9-rc7-tag' of git://git.kern..
git tree:       upstream
console+strace: https://syzkaller.appspot.com/x/log.txt?x=137daa6c980000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9d7ea7de0cb32587
dashboard link: https://syzkaller.appspot.com/bug?extid=4c493dcd5a68168a94b2
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1134f3c0980000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1367a504980000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/ea1961ce01fe/disk-dccb07f2.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/445a00347402/vmlinux-dccb07f2.xz
kernel image: https://storage.googleapis.com/syzbot-assets/461aed7c4df3/bzImage-dccb07f2.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+4c493dcd5a68168a94b2@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.9.0-rc7-syzkaller-00012-gdccb07f2914c #0 Not tainted
------------------------------------------------------
syz-executor149/5078 is trying to acquire lock:
ffff88802a978888 (&of->mutex){+.+.}-{3:3}, at: kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154

but task is already holding lock:
ffff88802d80b540 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb7/0xd60 fs/seq_file.c:182

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (&p->lock){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       seq_read_iter+0xb7/0xd60 fs/seq_file.c:182
       call_read_iter include/linux/fs.h:2104 [inline]
       copy_splice_read+0x662/0xb60 fs/splice.c:365
       do_splice_read fs/splice.c:985 [inline]
       splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
       do_sendfile+0x515/0xdc0 fs/read_write.c:1301
       __do_sys_sendfile64 fs/read_write.c:1362 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (&pipe->mutex){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       iter_file_splice_write+0x335/0x14e0 fs/splice.c:687
       backing_file_splice_write+0x2bc/0x4c0 fs/backing-file.c:289
       ovl_splice_write+0x3cf/0x500 fs/overlayfs/file.c:379
       do_splice_from fs/splice.c:941 [inline]
       do_splice+0xd77/0x1880 fs/splice.c:1354
       __do_splice fs/splice.c:1436 [inline]
       __do_sys_splice fs/splice.c:1652 [inline]
       __se_sys_splice+0x331/0x4a0 fs/splice.c:1634
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (sb_writers#4){.+.+}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write include/linux/fs.h:1664 [inline]
       sb_start_write+0x4d/0x1c0 include/linux/fs.h:1800
       mnt_want_write+0x3f/0x90 fs/namespace.c:409
       ovl_create_object+0x13b/0x370 fs/overlayfs/dir.c:629
       lookup_open fs/namei.c:3497 [inline]
       open_last_lookups fs/namei.c:3566 [inline]
       path_openat+0x1425/0x3240 fs/namei.c:3796
       do_filp_open+0x235/0x490 fs/namei.c:3826
       do_sys_openat2+0x13e/0x1d0 fs/open.c:1406
       do_sys_open fs/open.c:1421 [inline]
       __do_sys_open fs/open.c:1429 [inline]
       __se_sys_open fs/open.c:1425 [inline]
       __x64_sys_open+0x225/0x270 fs/open.c:1425
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1526
       inode_lock_shared include/linux/fs.h:805 [inline]
       lookup_slow+0x45/0x70 fs/namei.c:1708
       walk_component+0x2e1/0x410 fs/namei.c:2004
       lookup_last fs/namei.c:2461 [inline]
       path_lookupat+0x16f/0x450 fs/namei.c:2485
       filename_lookup+0x256/0x610 fs/namei.c:2514
       kern_path+0x35/0x50 fs/namei.c:2622
       lookup_bdev+0xc5/0x290 block/bdev.c:1136
       resume_store+0x1a0/0x710 kernel/power/hibernate.c:1235
       kernfs_fop_write_iter+0x3a1/0x500 fs/kernfs/file.c:334
       call_write_iter include/linux/fs.h:2110 [inline]
       new_sync_write fs/read_write.c:497 [inline]
       vfs_write+0xa84/0xcb0 fs/read_write.c:590
       ksys_write+0x1a0/0x2c0 fs/read_write.c:643
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&of->mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
       __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
       traverse+0x14f/0x550 fs/seq_file.c:106
       seq_read_iter+0xc5e/0xd60 fs/seq_file.c:195
       call_read_iter include/linux/fs.h:2104 [inline]
       copy_splice_read+0x662/0xb60 fs/splice.c:365
       do_splice_read fs/splice.c:985 [inline]
       splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
       do_sendfile+0x515/0xdc0 fs/read_write.c:1301
       __do_sys_sendfile64 fs/read_write.c:1362 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &of->mutex --> &pipe->mutex --> &p->lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(&pipe->mutex);
                               lock(&p->lock);
  lock(&of->mutex);

 *** DEADLOCK ***

2 locks held by syz-executor149/5078:
 #0: ffff88807de89868 (&pipe->mutex){+.+.}-{3:3}, at: splice_file_to_pipe+0x2e/0x500 fs/splice.c:1292
 #1: ffff88802d80b540 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb7/0xd60 fs/seq_file.c:182

stack backtrace:
CPU: 0 PID: 5078 Comm: syz-executor149 Not tainted 6.9.0-rc7-syzkaller-00012-gdccb07f2914c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
 __mutex_lock_common kernel/locking/mutex.c:608 [inline]
 __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
 kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
 traverse+0x14f/0x550 fs/seq_file.c:106
 seq_read_iter+0xc5e/0xd60 fs/seq_file.c:195
 call_read_iter include/linux/fs.h:2104 [inline]
 copy_splice_read+0x662/0xb60 fs/splice.c:365
 do_splice_read fs/splice.c:985 [inline]
 splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
 do_sendfile+0x515/0xdc0 fs/read_write.c:1301
 __do_sys_sendfile64 fs/read_write.c:1362 [inline]
 __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7d8d33f8e9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 51 18 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7d8d2b8218 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f7d8d3c9428 RCX: 00007f7d8d33f8e9
RDX: 0000000000000000 RSI: 0000000000000007 RDI: 0000000000000006
RBP: 00007f7d8d3c9420 R08: 00007ffdc82d7c57 R09: 0000000000000000
R10: 0000000000000004 R11: 0000000000000246 R12: 00007f7d8d39606c
R13: 0030656c69662f2e R14: 0079616c7265766f R15: 00007f7d8d39603b
 </TASK>


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
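
For readers without the repro open, the traces above suggest a trigger
pattern roughly like the sketch below. This is illustrative only: the
file names, buffer sizes, and exact interleaving are assumptions; the
real thing is at the "C reproducer" link above.

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/sendfile.h>

	/* Sketch, not the actual reproducer.  One ingredient per group of
	 * links in the dependency chain above; in practice these race
	 * from separate tasks. */
	static void trigger(int ovl_fd, int sysfs_fd, int pfd[2])
	{
		/* (1) resume_store() -> lookup_bdev(): a path walk over an
		 * overlayfs directory while kernfs holds of->mutex (links
		 * #1/#0).  "./file0" is assumed to live on overlayfs. */
		int res = open("/sys/power/resume", O_WRONLY);
		write(res, "./file0", 7);

		/* (2) splice into an overlayfs file: sb_writers is taken
		 * under pipe->mutex (links #3/#2). */
		splice(pfd[0], NULL, ovl_fd, NULL, 4096, 0);

		/* (3) sendfile() from a sysfs seq_file into a pipe:
		 * pipe->mutex, then p->lock, then of->mutex (#4/#0). */
		sendfile(pfd[1], sysfs_fd, NULL, 4096);

		close(res);
	}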

* Re: [syzbot] [kernfs?] possible deadlock in kernfs_seq_start
  2024-04-27 11:09 [syzbot] [kernfs?] possible deadlock in kernfs_seq_start syzbot
  2024-05-08  5:36 ` syzbot
@ 2024-05-08 15:13 ` syzbot
  1 sibling, 0 replies; 9+ messages in thread
From: syzbot @ 2024-05-08 15:13 UTC
  To: axboe, gregkh, hch, linux-fsdevel, linux-kernel, linux-unionfs,
	miklos, rafael, syzkaller-bugs, tj

syzbot has bisected this issue to:

commit 1e8c813b083c4122dfeaa5c3b11028331026e85d
Author: Christoph Hellwig <hch@lst.de>
Date:   Wed May 31 12:55:32 2023 +0000

    PM: hibernate: don't use early_lookup_bdev in resume_store

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=1380072f180000
start commit:   dccb07f2914c Merge tag 'for-6.9-rc7-tag' of git://git.kern..
git tree:       upstream
final oops:     https://syzkaller.appspot.com/x/report.txt?x=1040072f180000
console output: https://syzkaller.appspot.com/x/log.txt?x=1780072f180000
kernel config:  https://syzkaller.appspot.com/x/.config?x=9d7ea7de0cb32587
dashboard link: https://syzkaller.appspot.com/bug?extid=4c493dcd5a68168a94b2
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1134f3c0980000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1367a504980000

Reported-by: syzbot+4c493dcd5a68168a94b2@syzkaller.appspotmail.com
Fixes: 1e8c813b083c ("PM: hibernate: don't use early_lookup_bdev in resume_store")

For information about bisection process see: https://goo.gl/tpsmEJ#bisection
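
For orientation, the bisected commit changed resume_store() along these
lines (a paraphrase reconstructed from the commit title and the #1 trace
above, not the verbatim diff):

	/* Before: an early-boot helper that resolves a device name without
	 * taking VFS locks.  After: a regular path lookup, which is where
	 * the inode_lock_shared() under of->mutex now comes from. */
	-	error = early_lookup_bdev(name, &dev);
	+	error = lookup_bdev(name, &dev);	/* kern_path() -> inode lock */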

* Re: [syzbot] [kernfs?] possible deadlock in kernfs_seq_start
  2024-05-08  5:36 ` syzbot
@ 2024-05-08 23:19   ` Hillf Danton
  2024-05-09  6:37     ` Amir Goldstein
  0 siblings, 1 reply; 9+ messages in thread
From: Hillf Danton @ 2024-05-08 23:19 UTC
  To: syzbot
  Cc: linux-fsdevel, Amir Goldstein, Al Viro, linux-kernel, syzkaller-bugs

On Tue, 07 May 2024 22:36:18 -0700
> syzbot has found a reproducer for the following issue on:
> 
> [...]
> 
> other info that might help us debug this:
> 
> Chain exists of:
>   &of->mutex --> &pipe->mutex --> &p->lock
> 
>  Possible unsafe locking scenario:
> 
>        CPU0                    CPU1
>        ----                    ----
>   lock(&p->lock);
>                                lock(&pipe->mutex);
>                                lock(&p->lock);
>   lock(&of->mutex);
> 
>  *** DEADLOCK ***

This shows 16b52bbee482 ("kernfs: annotate different lockdep class for
of->mutex of writable files") is a bandaid.
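
For reference, the annotation that commit added amounts to roughly the
following (paraphrased sketch, placed somewhere in kernfs' open path;
the key name is made up here, see the commit itself for the real code):

	/* Hypothetical paraphrase of 16b52bbee482: give of->mutex of
	 * writable kernfs files its own lockdep class, so plain sysfs
	 * readers stop sharing lock chains with write handlers such as
	 * resume_store(). */
	static struct lock_class_key kernfs_writable_of_mutex_key;

	if (file->f_mode & FMODE_WRITE)
		lockdep_set_class(&of->mutex, &kernfs_writable_of_mutex_key);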

* Re: [syzbot] [kernfs?] possible deadlock in kernfs_seq_start
  2024-05-08 23:19   ` Hillf Danton
@ 2024-05-09  6:37     ` Amir Goldstein
  2024-05-09 10:48       ` Hillf Danton
  0 siblings, 1 reply; 9+ messages in thread
From: Amir Goldstein @ 2024-05-09  6:37 UTC
  To: Hillf Danton
  Cc: syzbot, linux-fsdevel, Al Viro, linux-kernel, syzkaller-bugs,
	Rafael J. Wysocki, Pavel Machek, linux-pm

CC: linux-pm

On Thu, May 9, 2024 at 2:19 AM Hillf Danton <hdanton@sina.com> wrote:
>
> On Tue, 07 May 2024 22:36:18 -0700
> > syzbot has found a reproducer for the following issue on:
> >
> > [...]
> >
> > other info that might help us debug this:
> >
> > Chain exists of:
> >   &of->mutex --> &pipe->mutex --> &p->lock
> >
> >  Possible unsafe locking scenario:
> >
> >        CPU0                    CPU1
> >        ----                    ----
> >   lock(&p->lock);
> >                                lock(&pipe->mutex);
> >                                lock(&p->lock);
> >   lock(&of->mutex);
> >
> >  *** DEADLOCK ***
>
> This shows 16b52bbee482 ("kernfs: annotate different lockdep class for
> of->mutex of writable files") is a bandaid.

Well, nobody said that it fixes the root cause.
But the annotation fix is correct, because the former report was
really a false positive.

The root cause is resume_store() doing a vfs path lookup.
If we could deprecate this allegedly unneeded UAPI, we should.
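
For context, the nesting that makes this write path dangerous, condensed
from the #1 trace (paraphrased, not verbatim kernfs code):

	/* fs/kernfs/file.c:kernfs_fop_write_iter(), condensed: the sysfs
	 * ->write() handler runs under of->mutex, so resume_store()'s
	 * path walk nests inode locks inside it. */
	mutex_lock(&of->mutex);
	len = ops->write(of, buf, len, iocb->ki_pos);
			/* -> resume_store() -> lookup_bdev()
			 * -> kern_path() -> inode_lock_shared() */
	mutex_unlock(&of->mutex);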

That said, all those lockdep warnings indicate a possible deadlock
if someone tries to hibernate into an overlayfs file.

If root tries to do that, then this is either an attack or stupidity.
Either way, the news flash from this report is "root may be able
to deadlock the kernel on purpose".
Not very exciting, and not likely to happen in the real world.

The remaining question is what to do about the lockdep reports.

Questions to the PM maintainers:
Any chance to deprecate writing a path to /sys/power/resume?
Userspace should have no problem getting the same done
by writing a device number.
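
Illustratively (the MAJ:MIN form below is an assumption about the syntax
resume_store() accepts; see the source for the exact parsing):

	/* Both writes are meant to select the same resume device; only
	 * the first one forces the VFS path walk discussed above. */
	int fd = open("/sys/power/resume", O_WRONLY);
	write(fd, "/dev/sda2", 9);	/* by path: lookup_bdev() */
	write(fd, "8:2", 3);		/* by MAJ:MIN: parsed, no path walk */
	close(fd);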

Thanks,
Amir.

* Re: [syzbot] [kernfs?] possible deadlock in kernfs_seq_start
  2024-05-09  6:37     ` Amir Goldstein
@ 2024-05-09 10:48       ` Hillf Danton
  2024-05-09 14:52         ` Amir Goldstein
  0 siblings, 1 reply; 9+ messages in thread
From: Hillf Danton @ 2024-05-09 10:48 UTC
  To: Amir Goldstein
  Cc: syzbot, linux-fsdevel, Al Viro, linux-kernel, syzkaller-bugs,
	Rafael J. Wysocki, Pavel Machek, linux-pm

On Thu, 9 May 2024 09:37:24 +0300 Amir Goldstein <amir73il@gmail.com>
> On Thu, May 9, 2024 at 2:19 AM Hillf Danton <hdanton@sina.com> wrote:
> > On Tue, 07 May 2024 22:36:18 -0700
> > > syzbot has found a reproducer for the following issue on:
> > >
> > > [...]
> > >
> > > -> #3 (&pipe->mutex){+.+.}-{3:3}:
> > >        lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
> > >        __mutex_lock_common kernel/locking/mutex.c:608 [inline]
> > >        __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
> > >        iter_file_splice_write+0x335/0x14e0 fs/splice.c:687
> > >        backing_file_splice_write+0x2bc/0x4c0 fs/backing-file.c:289
> > >        ovl_splice_write+0x3cf/0x500 fs/overlayfs/file.c:379
> > >        do_splice_from fs/splice.c:941 [inline]
> > >        do_splice+0xd77/0x1880 fs/splice.c:1354

		file_start_write(out);
		ret = do_splice_from(ipipe, out, &offset, len, flags);
		file_end_write(out);

The correct locking order is

		sb_writers
		inode lock

> > >        __do_splice fs/splice.c:1436 [inline]
> > >        __do_sys_splice fs/splice.c:1652 [inline]
> > >        __se_sys_splice+0x331/0x4a0 fs/splice.c:1634
> > >        do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> > >        do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
> > >        entry_SYSCALL_64_after_hwframe+0x77/0x7f
> > >
> > > -> #2 (sb_writers#4){.+.+}-{0:0}:
> > >        lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
> > >        percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
> > >        __sb_start_write include/linux/fs.h:1664 [inline]
> > >        sb_start_write+0x4d/0x1c0 include/linux/fs.h:1800
> > >        mnt_want_write+0x3f/0x90 fs/namespace.c:409

but the inverse order occurs here.

> > >        ovl_create_object+0x13b/0x370 fs/overlayfs/dir.c:629
> > >        lookup_open fs/namei.c:3497 [inline]
> > >        open_last_lookups fs/namei.c:3566 [inline]
> > >        path_openat+0x1425/0x3240 fs/namei.c:3796
> > > [...]
> >
> > This shows 16b52bbee482 ("kernfs: annotate different lockdep class for
> > of->mutex of writable files") is a bandaid.
> 
> Well, nobody said that it fixes the root cause.
> But the annotation fix is correct, because the former report was
> really false positive one.
> 
> The root cause is resume_store() doing a vfs path lookup.

resume_store() looks innocent until the locking order above is explained.

* Re: [syzbot] [kernfs?] possible deadlock in kernfs_seq_start
  2024-05-09 10:48       ` Hillf Danton
@ 2024-05-09 14:52         ` Amir Goldstein
  2024-05-09 23:26           ` Hillf Danton
  0 siblings, 1 reply; 9+ messages in thread
From: Amir Goldstein @ 2024-05-09 14:52 UTC
  To: Hillf Danton
  Cc: syzbot, linux-fsdevel, Al Viro, linux-kernel, syzkaller-bugs,
	Rafael J. Wysocki, Pavel Machek, linux-pm

On Thu, May 9, 2024 at 1:49 PM Hillf Danton <hdanton@sina.com> wrote:
>
> On Thu, 9 May 2024 09:37:24 +0300 Amir Goldstein <amir73il@gmail.com>
> > On Thu, May 9, 2024 at 2:19 AM Hillf Danton <hdanton@sina.com> wrote:
> > > On Tue, 07 May 2024 22:36:18 -0700
> > > > syzbot has found a reproducer for the following issue on:
> > > >
> > > > [...]
> > > >
> > > > -> #3 (&pipe->mutex){+.+.}-{3:3}:
> > > >        lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
> > > >        __mutex_lock_common kernel/locking/mutex.c:608 [inline]
> > > >        __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
> > > >        iter_file_splice_write+0x335/0x14e0 fs/splice.c:687
> > > >        backing_file_splice_write+0x2bc/0x4c0 fs/backing-file.c:289
> > > >        ovl_splice_write+0x3cf/0x500 fs/overlayfs/file.c:379
> > > >        do_splice_from fs/splice.c:941 [inline]
> > > >        do_splice+0xd77/0x1880 fs/splice.c:1354
>
>                 file_start_write(out);
>                 ret = do_splice_from(ipipe, out, &offset, len, flags);
>                 file_end_write(out);
>
> The correct locking order is
>
>                 sb_writers

This is sb of overlayfs

>                 inode lock

This is real inode

See comment above ovl_lockdep_annotate_inode_mutex_key()
for more details.

Thanks,
Amir.

* Re: [syzbot] [kernfs?] possible deadlock in kernfs_seq_start
  2024-05-09 14:52         ` Amir Goldstein
@ 2024-05-09 23:26           ` Hillf Danton
  2024-05-10 11:33             ` Hillf Danton
  0 siblings, 1 reply; 9+ messages in thread
From: Hillf Danton @ 2024-05-09 23:26 UTC
  To: Amir Goldstein
  Cc: syzbot, linux-fsdevel, Al Viro, linux-kernel, syzkaller-bugs,
	Rafael J. Wysocki, Pavel Machek, linux-pm

On Thu, 9 May 2024 17:52:21 +0300 Amir Goldstein <amir73il@gmail.com>
> On Thu, May 9, 2024 at 1:49 PM Hillf Danton <hdanton@sina.com> wrote:
> >
> > The correct locking order is
> >
> >                 sb_writers
> 
> This is sb of overlayfs
> 
> >                 inode lock
> 
> This is real inode
> 
WRT sb_writers the order

	lock inode parent
	lock inode kid

becomes
	lock inode kid
	sb_writers
	lock inode parent 

given call trace

> -> #2 (sb_writers#4){.+.+}-{0:0}:
>        lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
>        percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
>        __sb_start_write include/linux/fs.h:1664 [inline]
>        sb_start_write+0x4d/0x1c0 include/linux/fs.h:1800
>        mnt_want_write+0x3f/0x90 fs/namespace.c:409
>        ovl_create_object+0x13b/0x370 fs/overlayfs/dir.c:629
>        lookup_open fs/namei.c:3497 [inline]
>        open_last_lookups fs/namei.c:3566 [inline]

and code snippet [1]

	if (open_flag & O_CREAT)
		inode_lock(dir->d_inode);
	else
		inode_lock_shared(dir->d_inode);
	dentry = lookup_open(nd, file, op, got_write);

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/namei.c?id=dccb07f2914c#n3566

* Re: [syzbot] [kernfs?] possible deadlock in kernfs_seq_start
  2024-05-09 23:26           ` Hillf Danton
@ 2024-05-10 11:33             ` Hillf Danton
  0 siblings, 0 replies; 9+ messages in thread
From: Hillf Danton @ 2024-05-10 11:33 UTC
  To: Amir Goldstein
  Cc: syzbot, linux-fsdevel, Al Viro, linux-kernel, syzkaller-bugs,
	Rafael J. Wysocki, Pavel Machek, linux-pm

On Fri, 10 May 2024 07:26:13 +0800 Hillf Danton <hdanton@sina.com> wrote:
> On Thu, 9 May 2024 17:52:21 +0300 Amir Goldstein <amir73il@gmail.com>
> > On Thu, May 9, 2024 at 1:49 PM Hillf Danton <hdanton@sina.com> wrote:
> > >
> > > The correct locking order is
> > >
> > >                 sb_writers
> > 
> > This is sb of overlayfs
> > 
> > >                 inode lock
> > 
> > This is real inode
> > 
> WRT sb_writers the order
> 
> 	lock inode parent
> 	lock inode kid
> 
> becomes
> 	lock inode kid
> 	sb_writers
> 	lock inode parent 
> 
> [...]

JFYI, simply cutting off mnt_want_write() in ovl_create_object() survived
the syzbot repro [2], so acquiring sb_writers with the inode locked, at
least in the lookup path, makes trouble.

[2] https://lore.kernel.org/lkml/000000000000975906061817416b@google.com/
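
The experiment presumably amounted to something like this hypothetical
sketch against fs/overlayfs/dir.c (not a proposed fix; it drops the
write/freeze protection on the upper layer):

	/* Hypothetical: skip taking sb_writers in ovl_create_object(),
	 * which removes link #2 from the chain above. */
	-	err = ovl_want_write(dentry);	/* -> mnt_want_write() -> sb_start_write() */
	-	if (err)
	-		return err;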
