* [syzbot] possible deadlock in freeze_super (2)
@ 2022-10-10  7:37 syzbot
  2022-12-29 15:58 ` [syzbot] [gfs2?] " syzbot
  2023-03-24 14:43 ` [syzbot] [cluster?] " syzbot
  0 siblings, 2 replies; 4+ messages in thread
From: syzbot @ 2022-10-10  7:37 UTC (permalink / raw)
  To: agruenba, cluster-devel, linux-kernel, rpeterso, syzkaller-bugs

Hello,

syzbot found the following issue on:

HEAD commit:    bbed346d5a96 Merge branch 'for-next/core' into for-kernelci
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
console output: https://syzkaller.appspot.com/x/log.txt?x=16b0403a880000
kernel config:  https://syzkaller.appspot.com/x/.config?x=3a4a45d2d827c1e
dashboard link: https://syzkaller.appspot.com/bug?extid=be899d4f10b2a9522dce
compiler:       Debian clang version 13.0.1-++20220126092033+75e33f71c2da-1~exp1~20220126212112.63, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/e8e91bc79312/disk-bbed346d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/c1cb3fb3b77e/vmlinux-bbed346d.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+be899d4f10b2a9522dce@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.0.0-rc7-syzkaller-18095-gbbed346d5a96 #0 Not tainted
------------------------------------------------------
kworker/1:1H/76 is trying to acquire lock:
ffff000122d770e0 (&type->s_umount_key#113){+.+.}-{3:3}, at: freeze_super+0x40/0x1f0 fs/super.c:1696

but task is already holding lock:
ffff80000fb63d80 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}, at: process_one_work+0x29c/0x504 kernel/workqueue.c:2264

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}:
       process_one_work+0x2c4/0x504 kernel/workqueue.c:2265
       worker_thread+0x340/0x610 kernel/workqueue.c:2436
       kthread+0x12c/0x158 kernel/kthread.c:376
       ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:860

-> #1 ((wq_completion)glock_workqueue){+.+.}-{0:0}:
       __flush_workqueue+0xb8/0x6dc kernel/workqueue.c:2809
       gfs2_gl_hash_clear+0x4c/0x1b0 fs/gfs2/glock.c:2207
       gfs2_put_super+0x318/0x390 fs/gfs2/super.c:619
       generic_shutdown_super+0x8c/0x190 fs/super.c:491
       kill_block_super+0x30/0x78 fs/super.c:1427
       gfs2_kill_sb+0x68/0x78
       deactivate_locked_super+0x70/0xe8 fs/super.c:332
       deactivate_super+0xd0/0xd4 fs/super.c:363
       cleanup_mnt+0x1f8/0x234 fs/namespace.c:1186
       __cleanup_mnt+0x20/0x30 fs/namespace.c:1193
       task_work_run+0xc4/0x14c kernel/task_work.c:177
       resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
       do_notify_resume+0x174/0x1f0 arch/arm64/kernel/signal.c:1127
       prepare_exit_to_user_mode arch/arm64/kernel/entry-common.c:137 [inline]
       exit_to_user_mode arch/arm64/kernel/entry-common.c:142 [inline]
       el0_svc+0x9c/0x150 arch/arm64/kernel/entry-common.c:637
       el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:654
       el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:581

-> #0 (&type->s_umount_key#113){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3095 [inline]
       check_prevs_add kernel/locking/lockdep.c:3214 [inline]
       validate_chain kernel/locking/lockdep.c:3829 [inline]
       __lock_acquire+0x1530/0x30a4 kernel/locking/lockdep.c:5053
       lock_acquire+0x100/0x1f8 kernel/locking/lockdep.c:5666
       down_write+0x5c/0xcc kernel/locking/rwsem.c:1552
       freeze_super+0x40/0x1f0 fs/super.c:1696
       freeze_go_sync+0x84/0x1a8 fs/gfs2/glops.c:573
       do_xmote+0x180/0x954 fs/gfs2/glock.c:769
       run_queue+0x294/0x3c4 fs/gfs2/glock.c:893
       glock_work_func+0x190/0x288 fs/gfs2/glock.c:1059
       process_one_work+0x2d8/0x504 kernel/workqueue.c:2289
       worker_thread+0x340/0x610 kernel/workqueue.c:2436
       kthread+0x12c/0x158 kernel/kthread.c:376
       ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:860

other info that might help us debug this:

Chain exists of:
  &type->s_umount_key#113 --> (wq_completion)glock_workqueue --> (work_completion)(&(&gl->gl_work)->work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&(&gl->gl_work)->work));
                               lock((wq_completion)glock_workqueue);
                               lock((work_completion)(&(&gl->gl_work)->work));
  lock(&type->s_umount_key#113);

 *** DEADLOCK ***

2 locks held by kworker/1:1H/76:
 #0: ffff0000c0de2f38 ((wq_completion)glock_workqueue){+.+.}-{0:0}, at: process_one_work+0x270/0x504 kernel/workqueue.c:2262
 #1: ffff80000fb63d80 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}, at: process_one_work+0x29c/0x504 kernel/workqueue.c:2264

stack backtrace:
CPU: 1 PID: 76 Comm: kworker/1:1H Not tainted 6.0.0-rc7-syzkaller-18095-gbbed346d5a96 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/30/2022
Workqueue: glock_workqueue glock_work_func
Call trace:
 dump_backtrace+0x1c4/0x1f0 arch/arm64/kernel/stacktrace.c:156
 show_stack+0x2c/0x54 arch/arm64/kernel/stacktrace.c:163
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x104/0x16c lib/dump_stack.c:106
 dump_stack+0x1c/0x58 lib/dump_stack.c:113
 print_circular_bug+0x2c4/0x2c8 kernel/locking/lockdep.c:2053
 check_noncircular+0x14c/0x154 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3095 [inline]
 check_prevs_add kernel/locking/lockdep.c:3214 [inline]
 validate_chain kernel/locking/lockdep.c:3829 [inline]
 __lock_acquire+0x1530/0x30a4 kernel/locking/lockdep.c:5053
 lock_acquire+0x100/0x1f8 kernel/locking/lockdep.c:5666
 down_write+0x5c/0xcc kernel/locking/rwsem.c:1552
 freeze_super+0x40/0x1f0 fs/super.c:1696
 freeze_go_sync+0x84/0x1a8 fs/gfs2/glops.c:573
 do_xmote+0x180/0x954 fs/gfs2/glock.c:769
 run_queue+0x294/0x3c4 fs/gfs2/glock.c:893
 glock_work_func+0x190/0x288 fs/gfs2/glock.c:1059
 process_one_work+0x2d8/0x504 kernel/workqueue.c:2289
 worker_thread+0x340/0x610 kernel/workqueue.c:2436
 kthread+0x12c/0x158 kernel/kthread.c:376
 ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:860
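
In plain terms, the three numbered chain entries above describe one cycle: the unmount path holds s_umount (deactivate_locked_super) while gfs2_put_super -> gfs2_gl_hash_clear flushes glock_workqueue, and a glock work item on that same workqueue can call freeze_go_sync -> freeze_super, which itself needs s_umount. The userspace sketch below only models that ordering: a mutex stands in for s_umount, a pthread_join stands in for the workqueue flush, and the names are labels borrowed from the trace rather than kernel code. Built with cc -pthread, it hangs with both threads blocked, which is the scenario lockdep is warning about.

/*
 * Minimal userspace sketch of the reported cycle (an analogy, not
 * kernel code): the "umount" side takes the lock standing in for
 * s_umount and then waits for the outstanding "glock work" to finish
 * (pthread_join here stands in for flush_workqueue), while that work
 * is itself waiting for the same lock, as freeze_super would.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t s_umount = PTHREAD_MUTEX_INITIALIZER;

/* stands in for glock_work_func -> freeze_go_sync -> freeze_super */
static void *glock_work(void *arg)
{
	(void)arg;
	sleep(1);			/* let the umount side take the lock first */
	pthread_mutex_lock(&s_umount);	/* blocks forever: the umount side holds it */
	puts("work: filesystem frozen");
	pthread_mutex_unlock(&s_umount);
	return NULL;
}

int main(void)
{
	pthread_t work;

	pthread_create(&work, NULL, glock_work, NULL);

	/* stands in for deactivate_locked_super: s_umount is held ... */
	pthread_mutex_lock(&s_umount);
	/* ... while gfs2_gl_hash_clear waits for the glock workqueue */
	pthread_join(work, NULL);	/* never returns: glock_work() is stuck */
	pthread_mutex_unlock(&s_umount);

	puts("never reached: the two threads deadlock above");
	return 0;
}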


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.


* Re: [syzbot] [gfs2?] possible deadlock in freeze_super (2)
  2022-10-10  7:37 [syzbot] possible deadlock in freeze_super (2) syzbot
@ 2022-12-29 15:58 ` syzbot
  2023-03-24 14:43 ` [syzbot] [cluster?] " syzbot
  1 sibling, 0 replies; 4+ messages in thread
From: syzbot @ 2022-12-29 15:58 UTC (permalink / raw)
  To: agruenba, cluster-devel, linux-kernel, rpeterso, syzkaller-bugs

syzbot has found a reproducer for the following issue on:

HEAD commit:    1b929c02afd3 Linux 6.2-rc1
git tree:       upstream
console+strace: https://syzkaller.appspot.com/x/log.txt?x=11447312480000
kernel config:  https://syzkaller.appspot.com/x/.config?x=68e0be42c8ee4bb4
dashboard link: https://syzkaller.appspot.com/bug?extid=be899d4f10b2a9522dce
compiler:       Debian clang version 13.0.1-++20220126092033+75e33f71c2da-1~exp1~20220126212112.63, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=14b638c0480000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=11b17270480000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2d8c5072480f/disk-1b929c02.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/46687f1395db/vmlinux-1b929c02.xz
kernel image: https://storage.googleapis.com/syzbot-assets/26f1afa5ec00/bzImage-1b929c02.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/952580c084c8/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+be899d4f10b2a9522dce@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.2.0-rc1-syzkaller #0 Not tainted
------------------------------------------------------
kworker/0:1H/52 is trying to acquire lock:
ffff8880277440e0 (&type->s_umount_key#44){+.+.}-{3:3}, at: freeze_super+0x45/0x420 fs/super.c:1655

but task is already holding lock:
ffffc90000bd7d00 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}, at: process_one_work+0x831/0xdb0 kernel/workqueue.c:2264

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}:
       lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
       process_one_work+0x852/0xdb0 kernel/workqueue.c:2265
       worker_thread+0xb14/0x1330 kernel/workqueue.c:2436
       kthread+0x266/0x300 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

-> #1 ((wq_completion)glock_workqueue){+.+.}-{0:0}:
       lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
       __flush_workqueue+0x178/0x1680 kernel/workqueue.c:2809
       gfs2_gl_hash_clear+0xa3/0x300 fs/gfs2/glock.c:2191
       gfs2_put_super+0x862/0x8d0 fs/gfs2/super.c:627
       generic_shutdown_super+0x130/0x310 fs/super.c:492
       kill_block_super+0x79/0xd0 fs/super.c:1386
       deactivate_locked_super+0xa7/0xf0 fs/super.c:332
       cleanup_mnt+0x494/0x520 fs/namespace.c:1291
       task_work_run+0x243/0x300 kernel/task_work.c:179
       ptrace_notify+0x29a/0x340 kernel/signal.c:2354
       ptrace_report_syscall include/linux/ptrace.h:411 [inline]
       ptrace_report_syscall_exit include/linux/ptrace.h:473 [inline]
       syscall_exit_work+0x8c/0xe0 kernel/entry/common.c:251
       syscall_exit_to_user_mode_prepare+0x63/0xc0 kernel/entry/common.c:278
       __syscall_exit_to_user_mode_work kernel/entry/common.c:283 [inline]
       syscall_exit_to_user_mode+0xa/0x60 kernel/entry/common.c:296
       do_syscall_64+0x49/0xb0 arch/x86/entry/common.c:86
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&type->s_umount_key#44){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3097 [inline]
       check_prevs_add kernel/locking/lockdep.c:3216 [inline]
       validate_chain+0x1898/0x6ae0 kernel/locking/lockdep.c:3831
       __lock_acquire+0x1292/0x1f60 kernel/locking/lockdep.c:5055
       lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
       down_write+0x9c/0x270 kernel/locking/rwsem.c:1562
       freeze_super+0x45/0x420 fs/super.c:1655
       freeze_go_sync+0x178/0x340 fs/gfs2/glops.c:577
       do_xmote+0x34d/0x13d0 fs/gfs2/glock.c:708
       glock_work_func+0x2c2/0x450 fs/gfs2/glock.c:1056
       process_one_work+0x877/0xdb0 kernel/workqueue.c:2289
       worker_thread+0xb14/0x1330 kernel/workqueue.c:2436
       kthread+0x266/0x300 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

other info that might help us debug this:

Chain exists of:
  &type->s_umount_key#44 --> (wq_completion)glock_workqueue --> (work_completion)(&(&gl->gl_work)->work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&(&gl->gl_work)->work));
                               lock((wq_completion)glock_workqueue);
                               lock((work_completion)(&(&gl->gl_work)->work));
  lock(&type->s_umount_key#44);

 *** DEADLOCK ***

2 locks held by kworker/0:1H/52:
 #0: ffff888018293938 ((wq_completion)glock_workqueue){+.+.}-{0:0}, at: process_one_work+0x7f2/0xdb0
 #1: ffffc90000bd7d00 ((work_completion)(&(&gl->gl_work)->work)){+.+.}-{0:0}, at: process_one_work+0x831/0xdb0 kernel/workqueue.c:2264

stack backtrace:
CPU: 0 PID: 52 Comm: kworker/0:1H Not tainted 6.2.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Workqueue: glock_workqueue glock_work_func
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1b1/0x290 lib/dump_stack.c:106
 check_noncircular+0x2cc/0x390 kernel/locking/lockdep.c:2177
 check_prev_add kernel/locking/lockdep.c:3097 [inline]
 check_prevs_add kernel/locking/lockdep.c:3216 [inline]
 validate_chain+0x1898/0x6ae0 kernel/locking/lockdep.c:3831
 __lock_acquire+0x1292/0x1f60 kernel/locking/lockdep.c:5055
 lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
 down_write+0x9c/0x270 kernel/locking/rwsem.c:1562
 freeze_super+0x45/0x420 fs/super.c:1655
 freeze_go_sync+0x178/0x340 fs/gfs2/glops.c:577
 do_xmote+0x34d/0x13d0 fs/gfs2/glock.c:708
 glock_work_func+0x2c2/0x450 fs/gfs2/glock.c:1056
 process_one_work+0x877/0xdb0 kernel/workqueue.c:2289
 worker_thread+0xb14/0x1330 kernel/workqueue.c:2436
 kthread+0x266/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>



* Re: [syzbot] [cluster?] possible deadlock in freeze_super (2)
  2022-10-10  7:37 [syzbot] possible deadlock in freeze_super (2) syzbot
  2022-12-29 15:58 ` [syzbot] [gfs2?] " syzbot
@ 2023-03-24 14:43 ` syzbot
  2023-04-03  6:23   ` Dmitry Vyukov
  1 sibling, 1 reply; 4+ messages in thread
From: syzbot @ 2023-03-24 14:43 UTC (permalink / raw)
  To: agruenba, cluster-devel, linux-fsdevel, linux-kernel, rpeterso,
	syzkaller-bugs

syzbot suspects this issue was fixed by commit:

commit b66f723bb552ad59c2acb5d45ea45c890f84498b
Author: Andreas Gruenbacher <agruenba@redhat.com>
Date:   Tue Jan 31 14:06:53 2023 +0000

    gfs2: Improve gfs2_make_fs_rw error handling

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=117e2e29c80000
start commit:   4a7d37e824f5 Merge tag 'hardening-v6.3-rc1' of git://git.k..
git tree:       upstream
kernel config:  https://syzkaller.appspot.com/x/.config?x=8b969c5af147d31c
dashboard link: https://syzkaller.appspot.com/bug?extid=be899d4f10b2a9522dce
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=11484328c80000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=127093a0c80000

If the result looks correct, please mark the issue as fixed by replying with:

#syz fix: gfs2: Improve gfs2_make_fs_rw error handling

For information about bisection process see: https://goo.gl/tpsmEJ#bisection


* Re: [syzbot] [cluster?] possible deadlock in freeze_super (2)
  2023-03-24 14:43 ` [syzbot] [cluster?] " syzbot
@ 2023-04-03  6:23   ` Dmitry Vyukov
  0 siblings, 0 replies; 4+ messages in thread
From: Dmitry Vyukov @ 2023-04-03  6:23 UTC (permalink / raw)
  To: syzbot
  Cc: agruenba, cluster-devel, linux-fsdevel, linux-kernel, rpeterso,
	syzkaller-bugs

On Fri, 24 Mar 2023 at 15:43, syzbot
<syzbot+be899d4f10b2a9522dce@syzkaller.appspotmail.com> wrote:
>
> syzbot suspects this issue was fixed by commit:
>
> commit b66f723bb552ad59c2acb5d45ea45c890f84498b
> Author: Andreas Gruenbacher <agruenba@redhat.com>
> Date:   Tue Jan 31 14:06:53 2023 +0000
>
>     gfs2: Improve gfs2_make_fs_rw error handling
>
> bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=117e2e29c80000
> start commit:   4a7d37e824f5 Merge tag 'hardening-v6.3-rc1' of git://git.k..
> git tree:       upstream
> kernel config:  https://syzkaller.appspot.com/x/.config?x=8b969c5af147d31c
> dashboard link: https://syzkaller.appspot.com/bug?extid=be899d4f10b2a9522dce
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=11484328c80000
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=127093a0c80000
>
> If the result looks correct, please mark the issue as fixed by replying with:
>
> #syz fix: gfs2: Improve gfs2_make_fs_rw error handling
>
> For information about bisection process see: https://goo.gl/tpsmEJ#bisection

Looks reasonable:

#syz fix: gfs2: Improve gfs2_make_fs_rw error handling

