* [syzbot] [fs?] possible deadlock in sys_quotactl_fd
@ 2023-04-11 6:53 ` syzbot
2023-04-11 10:11 ` [syzbot] [fs?] possible deadlock in quotactl_fd Christian Brauner
2023-04-29 17:36 ` [syzbot] [ext4?] [fat?] possible deadlock in sys_quotactl_fd syzbot
0 siblings, 2 replies; 10+ messages in thread
From: syzbot @ 2023-04-11 6:53 UTC (permalink / raw)
To: jack, linux-fsdevel, linux-kernel, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: 76f598ba7d8e Merge tag 'for-linus' of git://git.kernel.org..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13965b21c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=5666fa6aca264e42
dashboard link: https://syzkaller.appspot.com/bug?extid=aacb82fca60873422114
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/1f01c9748997/disk-76f598ba.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b3afb4fc86b9/vmlinux-76f598ba.xz
kernel image: https://storage.googleapis.com/syzbot-assets/8908040d7a31/bzImage-76f598ba.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+aacb82fca60873422114@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.3.0-rc5-syzkaller-00022-g76f598ba7d8e #0 Not tainted
------------------------------------------------------
syz-executor.0/17940 is trying to acquire lock:
ffff88802a89e0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
ffff88802a89e0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __se_sys_quotactl_fd+0x2fb/0x440 fs/quota/quota.c:972
but task is already holding lock:
ffff88802a89e460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:394
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (sb_writers#4){.+.+}-{0:0}:
lock_acquire+0x1e1/0x520 kernel/locking/lockdep.c:5669
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1477 [inline]
sb_start_write include/linux/fs.h:1552 [inline]
write_mmp_block+0xe5/0x930 fs/ext4/mmp.c:50
ext4_multi_mount_protect+0x364/0x990 fs/ext4/mmp.c:343
__ext4_remount fs/ext4/super.c:6543 [inline]
ext4_reconfigure+0x29a8/0x3280 fs/ext4/super.c:6642
reconfigure_super+0x3c9/0x7c0 fs/super.c:956
vfs_fsconfig_locked fs/fsopen.c:254 [inline]
__do_sys_fsconfig fs/fsopen.c:439 [inline]
__se_sys_fsconfig+0xa29/0xf70 fs/fsopen.c:314
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
-> #0 (&type->s_umount_key#31){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain+0x166b/0x58e0 kernel/locking/lockdep.c:3832
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
lock_acquire+0x1e1/0x520 kernel/locking/lockdep.c:5669
down_read+0x3d/0x50 kernel/locking/rwsem.c:1520
__do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
__se_sys_quotactl_fd+0x2fb/0x440 fs/quota/quota.c:972
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(sb_writers#4);
lock(&type->s_umount_key#31);
lock(sb_writers#4);
lock(&type->s_umount_key#31);
*** DEADLOCK ***
1 lock held by syz-executor.0/17940:
#0: ffff88802a89e460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:394
stack backtrace:
CPU: 0 PID: 17940 Comm: syz-executor.0 Not tainted 6.3.0-rc5-syzkaller-00022-g76f598ba7d8e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
check_noncircular+0x2fe/0x3b0 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain+0x166b/0x58e0 kernel/locking/lockdep.c:3832
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
lock_acquire+0x1e1/0x520 kernel/locking/lockdep.c:5669
down_read+0x3d/0x50 kernel/locking/rwsem.c:1520
__do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
__se_sys_quotactl_fd+0x2fb/0x440 fs/quota/quota.c:972
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f3c2aa8c169
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f3c2b826168 EFLAGS: 00000246 ORIG_RAX: 00000000000001bb
RAX: ffffffffffffffda RBX: 00007f3c2ababf80 RCX: 00007f3c2aa8c169
RDX: ffffffffffffffff RSI: ffffffff80000601 RDI: 0000000000000003
RBP: 00007f3c2aae7ca1 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000200024c0 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffd71f38adf R14: 00007f3c2b826300 R15: 0000000000022000
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
* [syzbot] [fs?] possible deadlock in quotactl_fd
@ 2023-04-11 6:53 syzbot
2023-04-11 6:53 ` [syzbot] [fs?] possible deadlock in sys_quotactl_fd syzbot
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: syzbot @ 2023-04-11 6:53 UTC (permalink / raw)
To: jack, linux-fsdevel, linux-kernel, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: 0d3eb744aed4 Merge tag 'urgent-rcu.2023.04.07a' of git://g..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=11798e4bc80000
kernel config: https://syzkaller.appspot.com/x/.config?x=c21559e740385326
dashboard link: https://syzkaller.appspot.com/bug?extid=cdcd444e4d3a256ada13
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/a02928003efa/disk-0d3eb744.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/7839447005a4/vmlinux-0d3eb744.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d26ab3184148/bzImage-0d3eb744.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+cdcd444e4d3a256ada13@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.3.0-rc6-syzkaller-00016-g0d3eb744aed4 #0 Not tainted
------------------------------------------------------
syz-executor.3/11858 is trying to acquire lock:
ffff88802a3bc0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
but task is already holding lock:
ffff88802a3bc460 (sb_writers#4){.+.+}-{0:0}, at: __do_sys_quotactl_fd+0xd3/0x3f0 fs/quota/quota.c:990
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (sb_writers#4){.+.+}-{0:0}:
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1477 [inline]
sb_start_write include/linux/fs.h:1552 [inline]
write_mmp_block+0xc4/0x820 fs/ext4/mmp.c:50
ext4_multi_mount_protect+0x50d/0xac0 fs/ext4/mmp.c:343
__ext4_remount fs/ext4/super.c:6543 [inline]
ext4_reconfigure+0x242b/0x2b60 fs/ext4/super.c:6642
reconfigure_super+0x40c/0xa30 fs/super.c:956
vfs_fsconfig_locked fs/fsopen.c:254 [inline]
__do_sys_fsconfig+0xa3a/0xc20 fs/fsopen.c:439
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
-> #0 (&type->s_umount_key#31){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain kernel/locking/lockdep.c:3832 [inline]
__lock_acquire+0x2ec7/0x5d40 kernel/locking/lockdep.c:5056
lock_acquire kernel/locking/lockdep.c:5669 [inline]
lock_acquire+0x1af/0x520 kernel/locking/lockdep.c:5634
down_write+0x92/0x200 kernel/locking/rwsem.c:1573
__do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(sb_writers#4);
lock(&type->s_umount_key#31);
lock(sb_writers#4);
lock(&type->s_umount_key#31);
*** DEADLOCK ***
1 lock held by syz-executor.3/11858:
#0: ffff88802a3bc460 (sb_writers#4){.+.+}-{0:0}, at: __do_sys_quotactl_fd+0xd3/0x3f0 fs/quota/quota.c:990
stack backtrace:
CPU: 0 PID: 11858 Comm: syz-executor.3 Not tainted 6.3.0-rc6-syzkaller-00016-g0d3eb744aed4 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain kernel/locking/lockdep.c:3832 [inline]
__lock_acquire+0x2ec7/0x5d40 kernel/locking/lockdep.c:5056
lock_acquire kernel/locking/lockdep.c:5669 [inline]
lock_acquire+0x1af/0x520 kernel/locking/lockdep.c:5634
down_write+0x92/0x200 kernel/locking/rwsem.c:1573
__do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f81d2e8c169
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f81d3b29168 EFLAGS: 00000246 ORIG_RAX: 00000000000001bb
RAX: ffffffffffffffda RBX: 00007f81d2fabf80 RCX: 00007f81d2e8c169
RDX: 0000000000000000 RSI: ffffffff80000300 RDI: 0000000000000003
RBP: 00007f81d2ee7ca1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fffeeb18d7f R14: 00007f81d3b29300 R15: 0000000000022000
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
* Re: [syzbot] [fs?] possible deadlock in quotactl_fd
2023-04-11 6:53 ` [syzbot] [fs?] possible deadlock in sys_quotactl_fd syzbot
@ 2023-04-11 10:11 ` Christian Brauner
2023-04-11 10:55 ` Jan Kara
2023-04-29 17:36 ` [syzbot] [ext4?] [fat?] possible deadlock in sys_quotactl_fd syzbot
1 sibling, 1 reply; 10+ messages in thread
From: Christian Brauner @ 2023-04-11 10:11 UTC (permalink / raw)
To: jack; +Cc: linux-fsdevel, linux-kernel, syzkaller-bugs, syzbot, syzbot
On Mon, Apr 10, 2023 at 11:53:46PM -0700, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 0d3eb744aed4 Merge tag 'urgent-rcu.2023.04.07a' of git://g..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=11798e4bc80000
> kernel config: https://syzkaller.appspot.com/x/.config?x=c21559e740385326
> dashboard link: https://syzkaller.appspot.com/bug?extid=cdcd444e4d3a256ada13
> compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/a02928003efa/disk-0d3eb744.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/7839447005a4/vmlinux-0d3eb744.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/d26ab3184148/bzImage-0d3eb744.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+cdcd444e4d3a256ada13@syzkaller.appspotmail.com
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.3.0-rc6-syzkaller-00016-g0d3eb744aed4 #0 Not tainted
> ------------------------------------------------------
> syz-executor.3/11858 is trying to acquire lock:
> ffff88802a3bc0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
>
> but task is already holding lock:
> ffff88802a3bc460 (sb_writers#4){.+.+}-{0:0}, at: __do_sys_quotactl_fd+0xd3/0x3f0 fs/quota/quota.c:990
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (sb_writers#4){.+.+}-{0:0}:
> percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
> __sb_start_write include/linux/fs.h:1477 [inline]
> sb_start_write include/linux/fs.h:1552 [inline]
> write_mmp_block+0xc4/0x820 fs/ext4/mmp.c:50
> ext4_multi_mount_protect+0x50d/0xac0 fs/ext4/mmp.c:343
> __ext4_remount fs/ext4/super.c:6543 [inline]
> ext4_reconfigure+0x242b/0x2b60 fs/ext4/super.c:6642
> reconfigure_super+0x40c/0xa30 fs/super.c:956
> vfs_fsconfig_locked fs/fsopen.c:254 [inline]
> __do_sys_fsconfig+0xa3a/0xc20 fs/fsopen.c:439
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> -> #0 (&type->s_umount_key#31){++++}-{3:3}:
> check_prev_add kernel/locking/lockdep.c:3098 [inline]
> check_prevs_add kernel/locking/lockdep.c:3217 [inline]
> validate_chain kernel/locking/lockdep.c:3832 [inline]
> __lock_acquire+0x2ec7/0x5d40 kernel/locking/lockdep.c:5056
> lock_acquire kernel/locking/lockdep.c:5669 [inline]
> lock_acquire+0x1af/0x520 kernel/locking/lockdep.c:5634
> down_write+0x92/0x200 kernel/locking/rwsem.c:1573
> __do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> lock(sb_writers#4);
> lock(&type->s_umount_key#31);
> lock(sb_writers#4);
> lock(&type->s_umount_key#31);
>
> *** DEADLOCK ***
Hmkay, I understand how this happens, I think:
fsconfig(FSCONFIG_CMD_RECONFIGURE):
-> down_write(&sb->s_umount);
-> reconfigure_super(fc);
   -> ext4_multi_mount_protect()
      -> __sb_start_write(sb, SB_FREEZE_WRITE)
-> up_write(&sb->s_umount);

quotactl_fd(Q_QUOTAON/Q_QUOTAOFF/Q_XQUOTAON/Q_XQUOTAOFF):
-> mnt_want_write(f.file->f_path.mnt);
   -> __sb_start_write(sb, SB_FREEZE_WRITE)
-> down_write(&sb->s_umount);
I have to step away from the computer now for a bit, but naively it seems
that the locking order for quotactl_fd() should be the other way around.
But while I'm here, why does quotactl_fd() take mnt_want_write() but
quotactl() doesn't? It seems that if one needs to take it, both need to
take it.
>
> 1 lock held by syz-executor.3/11858:
> #0: ffff88802a3bc460 (sb_writers#4){.+.+}-{0:0}, at: __do_sys_quotactl_fd+0xd3/0x3f0 fs/quota/quota.c:990
>
> stack backtrace:
> CPU: 0 PID: 11858 Comm: syz-executor.3 Not tainted 6.3.0-rc6-syzkaller-00016-g0d3eb744aed4 #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
> Call Trace:
> <TASK>
> __dump_stack lib/dump_stack.c:88 [inline]
> dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
> check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2178
> check_prev_add kernel/locking/lockdep.c:3098 [inline]
> check_prevs_add kernel/locking/lockdep.c:3217 [inline]
> validate_chain kernel/locking/lockdep.c:3832 [inline]
> __lock_acquire+0x2ec7/0x5d40 kernel/locking/lockdep.c:5056
> lock_acquire kernel/locking/lockdep.c:5669 [inline]
> lock_acquire+0x1af/0x520 kernel/locking/lockdep.c:5634
> down_write+0x92/0x200 kernel/locking/rwsem.c:1573
> __do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
> RIP: 0033:0x7f81d2e8c169
> Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007f81d3b29168 EFLAGS: 00000246 ORIG_RAX: 00000000000001bb
> RAX: ffffffffffffffda RBX: 00007f81d2fabf80 RCX: 00007f81d2e8c169
> RDX: 0000000000000000 RSI: ffffffff80000300 RDI: 0000000000000003
> RBP: 00007f81d2ee7ca1 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> R13: 00007fffeeb18d7f R14: 00007f81d3b29300 R15: 0000000000022000
> </TASK>
>
>
> ---
> This report is generated by a bot. It may contain errors.
> See https://goo.gl/tpsmEJ for more information about syzbot.
> syzbot engineers can be reached at syzkaller@googlegroups.com.
>
> syzbot will keep track of this issue. See:
> https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
On Mon, Apr 10, 2023 at 11:53:46PM -0700, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 76f598ba7d8e Merge tag 'for-linus' of git://git.kernel.org..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=13965b21c80000
> kernel config: https://syzkaller.appspot.com/x/.config?x=5666fa6aca264e42
> dashboard link: https://syzkaller.appspot.com/bug?extid=aacb82fca60873422114
> compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/1f01c9748997/disk-76f598ba.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/b3afb4fc86b9/vmlinux-76f598ba.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/8908040d7a31/bzImage-76f598ba.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+aacb82fca60873422114@syzkaller.appspotmail.com
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.3.0-rc5-syzkaller-00022-g76f598ba7d8e #0 Not tainted
> ------------------------------------------------------
> syz-executor.0/17940 is trying to acquire lock:
> ffff88802a89e0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
> ffff88802a89e0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __se_sys_quotactl_fd+0x2fb/0x440 fs/quota/quota.c:972
>
> but task is already holding lock:
> ffff88802a89e460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:394
>
> which lock already depends on the new lock.
>
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (sb_writers#4){.+.+}-{0:0}:
> lock_acquire+0x1e1/0x520 kernel/locking/lockdep.c:5669
> percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
> __sb_start_write include/linux/fs.h:1477 [inline]
> sb_start_write include/linux/fs.h:1552 [inline]
> write_mmp_block+0xe5/0x930 fs/ext4/mmp.c:50
> ext4_multi_mount_protect+0x364/0x990 fs/ext4/mmp.c:343
> __ext4_remount fs/ext4/super.c:6543 [inline]
> ext4_reconfigure+0x29a8/0x3280 fs/ext4/super.c:6642
> reconfigure_super+0x3c9/0x7c0 fs/super.c:956
> vfs_fsconfig_locked fs/fsopen.c:254 [inline]
> __do_sys_fsconfig fs/fsopen.c:439 [inline]
> __se_sys_fsconfig+0xa29/0xf70 fs/fsopen.c:314
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> -> #0 (&type->s_umount_key#31){++++}-{3:3}:
> check_prev_add kernel/locking/lockdep.c:3098 [inline]
> check_prevs_add kernel/locking/lockdep.c:3217 [inline]
> validate_chain+0x166b/0x58e0 kernel/locking/lockdep.c:3832
> __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
> lock_acquire+0x1e1/0x520 kernel/locking/lockdep.c:5669
> down_read+0x3d/0x50 kernel/locking/rwsem.c:1520
> __do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
> __se_sys_quotactl_fd+0x2fb/0x440 fs/quota/quota.c:972
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> other info that might help us debug this:
>
> Possible unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> lock(sb_writers#4);
> lock(&type->s_umount_key#31);
> lock(sb_writers#4);
> lock(&type->s_umount_key#31);
>
> *** DEADLOCK ***
>
> 1 lock held by syz-executor.0/17940:
> #0: ffff88802a89e460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:394
>
> stack backtrace:
> CPU: 0 PID: 17940 Comm: syz-executor.0 Not tainted 6.3.0-rc5-syzkaller-00022-g76f598ba7d8e #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
> Call Trace:
> <TASK>
> __dump_stack lib/dump_stack.c:88 [inline]
> dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
> check_noncircular+0x2fe/0x3b0 kernel/locking/lockdep.c:2178
> check_prev_add kernel/locking/lockdep.c:3098 [inline]
> check_prevs_add kernel/locking/lockdep.c:3217 [inline]
> validate_chain+0x166b/0x58e0 kernel/locking/lockdep.c:3832
> __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
> lock_acquire+0x1e1/0x520 kernel/locking/lockdep.c:5669
> down_read+0x3d/0x50 kernel/locking/rwsem.c:1520
> __do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
> __se_sys_quotactl_fd+0x2fb/0x440 fs/quota/quota.c:972
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
> RIP: 0033:0x7f3c2aa8c169
> Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007f3c2b826168 EFLAGS: 00000246 ORIG_RAX: 00000000000001bb
> RAX: ffffffffffffffda RBX: 00007f3c2ababf80 RCX: 00007f3c2aa8c169
> RDX: ffffffffffffffff RSI: ffffffff80000601 RDI: 0000000000000003
> RBP: 00007f3c2aae7ca1 R08: 0000000000000000 R09: 0000000000000000
> R10: 00000000200024c0 R11: 0000000000000246 R12: 0000000000000000
> R13: 00007ffd71f38adf R14: 00007f3c2b826300 R15: 0000000000022000
> </TASK>
>
>
> ---
> This report is generated by a bot. It may contain errors.
> See https://goo.gl/tpsmEJ for more information about syzbot.
> syzbot engineers can be reached at syzkaller@googlegroups.com.
>
> syzbot will keep track of this issue. See:
> https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
* Re: [syzbot] [fs?] possible deadlock in quotactl_fd
2023-04-11 10:11 ` [syzbot] [fs?] possible deadlock in quotactl_fd Christian Brauner
@ 2023-04-11 10:55 ` Jan Kara
2023-04-11 13:40 ` Christian Brauner
0 siblings, 1 reply; 10+ messages in thread
From: Jan Kara @ 2023-04-11 10:55 UTC (permalink / raw)
To: Christian Brauner
Cc: jack, linux-fsdevel, linux-ext4, linux-kernel, syzkaller-bugs,
syzbot, syzbot
On Tue 11-04-23 12:11:52, Christian Brauner wrote:
> On Mon, Apr 10, 2023 at 11:53:46PM -0700, syzbot wrote:
> > Hello,
> >
> > syzbot found the following issue on:
> >
> > HEAD commit: 0d3eb744aed4 Merge tag 'urgent-rcu.2023.04.07a' of git://g..
> > git tree: upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=11798e4bc80000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=c21559e740385326
> > dashboard link: https://syzkaller.appspot.com/bug?extid=cdcd444e4d3a256ada13
> > compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
> >
> > Unfortunately, I don't have any reproducer for this issue yet.
> >
> > Downloadable assets:
> > disk image: https://storage.googleapis.com/syzbot-assets/a02928003efa/disk-0d3eb744.raw.xz
> > vmlinux: https://storage.googleapis.com/syzbot-assets/7839447005a4/vmlinux-0d3eb744.xz
> > kernel image: https://storage.googleapis.com/syzbot-assets/d26ab3184148/bzImage-0d3eb744.xz
> >
> > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > Reported-by: syzbot+cdcd444e4d3a256ada13@syzkaller.appspotmail.com
> >
> > ======================================================
> > WARNING: possible circular locking dependency detected
> > 6.3.0-rc6-syzkaller-00016-g0d3eb744aed4 #0 Not tainted
> > ------------------------------------------------------
> > syz-executor.3/11858 is trying to acquire lock:
> > ffff88802a3bc0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
> >
> > but task is already holding lock:
> > ffff88802a3bc460 (sb_writers#4){.+.+}-{0:0}, at: __do_sys_quotactl_fd+0xd3/0x3f0 fs/quota/quota.c:990
> >
> > which lock already depends on the new lock.
> >
> >
> > the existing dependency chain (in reverse order) is:
> >
> > -> #1 (sb_writers#4){.+.+}-{0:0}:
> > percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
> > __sb_start_write include/linux/fs.h:1477 [inline]
> > sb_start_write include/linux/fs.h:1552 [inline]
> > write_mmp_block+0xc4/0x820 fs/ext4/mmp.c:50
> > ext4_multi_mount_protect+0x50d/0xac0 fs/ext4/mmp.c:343
> > __ext4_remount fs/ext4/super.c:6543 [inline]
> > ext4_reconfigure+0x242b/0x2b60 fs/ext4/super.c:6642
> > reconfigure_super+0x40c/0xa30 fs/super.c:956
> > vfs_fsconfig_locked fs/fsopen.c:254 [inline]
> > __do_sys_fsconfig+0xa3a/0xc20 fs/fsopen.c:439
> > do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
> > entry_SYSCALL_64_after_hwframe+0x63/0xcd
> >
> > -> #0 (&type->s_umount_key#31){++++}-{3:3}:
> > check_prev_add kernel/locking/lockdep.c:3098 [inline]
> > check_prevs_add kernel/locking/lockdep.c:3217 [inline]
> > validate_chain kernel/locking/lockdep.c:3832 [inline]
> > __lock_acquire+0x2ec7/0x5d40 kernel/locking/lockdep.c:5056
> > lock_acquire kernel/locking/lockdep.c:5669 [inline]
> > lock_acquire+0x1af/0x520 kernel/locking/lockdep.c:5634
> > down_write+0x92/0x200 kernel/locking/rwsem.c:1573
> > __do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
> > do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
> > entry_SYSCALL_64_after_hwframe+0x63/0xcd
> >
> > other info that might help us debug this:
> >
> > Possible unsafe locking scenario:
> >
> > CPU0 CPU1
> > ---- ----
> > lock(sb_writers#4);
> > lock(&type->s_umount_key#31);
> > lock(sb_writers#4);
> > lock(&type->s_umount_key#31);
> >
> > *** DEADLOCK ***
>
> Hmkay, I understand how this happens, I think:
>
> fsconfig(FSCONFIG_CMD_RECONFIGURE):
> -> down_write(&sb->s_umount);
> -> reconfigure_super(fc);
>    -> ext4_multi_mount_protect()
>       -> __sb_start_write(sb, SB_FREEZE_WRITE)
> -> up_write(&sb->s_umount);
>
> quotactl_fd(Q_QUOTAON/Q_QUOTAOFF/Q_XQUOTAON/Q_XQUOTAOFF):
> -> mnt_want_write(f.file->f_path.mnt);
>    -> __sb_start_write(sb, SB_FREEZE_WRITE)
> -> down_write(&sb->s_umount);
Thanks for having a look!
> I have to step away from the computer now for a bit, but naively it seems
> that the locking order for quotactl_fd() should be the other way around.
>
> But while I'm here, why does quotactl_fd() take mnt_want_write() but
> quotactl() doesn't? It seems that if one needs to take it, both need to
> take it.
Couple of notes here:
1) quotactl() handles the filesystem freezing by grabbing the s_umount
semaphore, checking the superblock freeze state (it cannot change while
s_umount is held) and proceeding if fs is not frozen. This logic is hidden
in quotactl_block().
2) The proper lock ordering is indeed freeze-protection -> s_umount because
that is implicitly dictated by how filesystem freezing works. If you grab
s_umount and then try to grab freeze protection, you may effectively wait
for the fs to get unfrozen, which cannot happen while s_umount is held, as
thaw_super() needs to grab it.
3) Hence this could be viewed as an ext4 bug: it tries to grab freeze
protection from the remount path. *But* reconfigure_super() actually has:

	if (sb->s_writers.frozen != SB_UNFROZEN)
		return -EBUSY;

so even ext4 is in fact safe, because the filesystem is guaranteed not to be
frozen during remount. But we should still probably tweak the ext4 code to
avoid this lockdep warning...
Honza
> > 1 lock held by syz-executor.3/11858:
> > #0: ffff88802a3bc460 (sb_writers#4){.+.+}-{0:0}, at: __do_sys_quotactl_fd+0xd3/0x3f0 fs/quota/quota.c:990
> >
> > stack backtrace:
> > CPU: 0 PID: 11858 Comm: syz-executor.3 Not tainted 6.3.0-rc6-syzkaller-00016-g0d3eb744aed4 #0
> > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
> > Call Trace:
> > <TASK>
> > __dump_stack lib/dump_stack.c:88 [inline]
> > dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
> > check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2178
> > check_prev_add kernel/locking/lockdep.c:3098 [inline]
> > check_prevs_add kernel/locking/lockdep.c:3217 [inline]
> > validate_chain kernel/locking/lockdep.c:3832 [inline]
> > __lock_acquire+0x2ec7/0x5d40 kernel/locking/lockdep.c:5056
> > lock_acquire kernel/locking/lockdep.c:5669 [inline]
> > lock_acquire+0x1af/0x520 kernel/locking/lockdep.c:5634
> > down_write+0x92/0x200 kernel/locking/rwsem.c:1573
> > __do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
> > do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
> > entry_SYSCALL_64_after_hwframe+0x63/0xcd
> > RIP: 0033:0x7f81d2e8c169
> > Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> > RSP: 002b:00007f81d3b29168 EFLAGS: 00000246 ORIG_RAX: 00000000000001bb
> > RAX: ffffffffffffffda RBX: 00007f81d2fabf80 RCX: 00007f81d2e8c169
> > RDX: 0000000000000000 RSI: ffffffff80000300 RDI: 0000000000000003
> > RBP: 00007f81d2ee7ca1 R08: 0000000000000000 R09: 0000000000000000
> > R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> > R13: 00007fffeeb18d7f R14: 00007f81d3b29300 R15: 0000000000022000
> > </TASK>
> >
> >
> > ---
> > This report is generated by a bot. It may contain errors.
> > See https://goo.gl/tpsmEJ for more information about syzbot.
> > syzbot engineers can be reached at syzkaller@googlegroups.com.
> >
> > syzbot will keep track of this issue. See:
> > https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
>
> On Mon, Apr 10, 2023 at 11:53:46PM -0700, syzbot wrote:
> > Hello,
> >
> > syzbot found the following issue on:
> >
> > HEAD commit: 76f598ba7d8e Merge tag 'for-linus' of git://git.kernel.org..
> > git tree: upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=13965b21c80000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=5666fa6aca264e42
> > dashboard link: https://syzkaller.appspot.com/bug?extid=aacb82fca60873422114
> > compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
> >
> > Unfortunately, I don't have any reproducer for this issue yet.
> >
> > Downloadable assets:
> > disk image: https://storage.googleapis.com/syzbot-assets/1f01c9748997/disk-76f598ba.raw.xz
> > vmlinux: https://storage.googleapis.com/syzbot-assets/b3afb4fc86b9/vmlinux-76f598ba.xz
> > kernel image: https://storage.googleapis.com/syzbot-assets/8908040d7a31/bzImage-76f598ba.xz
> >
> > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > Reported-by: syzbot+aacb82fca60873422114@syzkaller.appspotmail.com
> >
> > ======================================================
> > WARNING: possible circular locking dependency detected
> > 6.3.0-rc5-syzkaller-00022-g76f598ba7d8e #0 Not tainted
> > ------------------------------------------------------
> > syz-executor.0/17940 is trying to acquire lock:
> > ffff88802a89e0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
> > ffff88802a89e0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __se_sys_quotactl_fd+0x2fb/0x440 fs/quota/quota.c:972
> >
> > but task is already holding lock:
> > ffff88802a89e460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:394
> >
> > which lock already depends on the new lock.
> >
> >
> > the existing dependency chain (in reverse order) is:
> >
> > -> #1 (sb_writers#4){.+.+}-{0:0}:
> > lock_acquire+0x1e1/0x520 kernel/locking/lockdep.c:5669
> > percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
> > __sb_start_write include/linux/fs.h:1477 [inline]
> > sb_start_write include/linux/fs.h:1552 [inline]
> > write_mmp_block+0xe5/0x930 fs/ext4/mmp.c:50
> > ext4_multi_mount_protect+0x364/0x990 fs/ext4/mmp.c:343
> > __ext4_remount fs/ext4/super.c:6543 [inline]
> > ext4_reconfigure+0x29a8/0x3280 fs/ext4/super.c:6642
> > reconfigure_super+0x3c9/0x7c0 fs/super.c:956
> > vfs_fsconfig_locked fs/fsopen.c:254 [inline]
> > __do_sys_fsconfig fs/fsopen.c:439 [inline]
> > __se_sys_fsconfig+0xa29/0xf70 fs/fsopen.c:314
> > do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
> > entry_SYSCALL_64_after_hwframe+0x63/0xcd
> >
> > -> #0 (&type->s_umount_key#31){++++}-{3:3}:
> > check_prev_add kernel/locking/lockdep.c:3098 [inline]
> > check_prevs_add kernel/locking/lockdep.c:3217 [inline]
> > validate_chain+0x166b/0x58e0 kernel/locking/lockdep.c:3832
> > __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
> > lock_acquire+0x1e1/0x520 kernel/locking/lockdep.c:5669
> > down_read+0x3d/0x50 kernel/locking/rwsem.c:1520
> > __do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
> > __se_sys_quotactl_fd+0x2fb/0x440 fs/quota/quota.c:972
> > do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
> > entry_SYSCALL_64_after_hwframe+0x63/0xcd
> >
> > other info that might help us debug this:
> >
> > Possible unsafe locking scenario:
> >
> > CPU0 CPU1
> > ---- ----
> > lock(sb_writers#4);
> > lock(&type->s_umount_key#31);
> > lock(sb_writers#4);
> > lock(&type->s_umount_key#31);
> >
> > *** DEADLOCK ***
> >
> > 1 lock held by syz-executor.0/17940:
> > #0: ffff88802a89e460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:394
> >
> > stack backtrace:
> > CPU: 0 PID: 17940 Comm: syz-executor.0 Not tainted 6.3.0-rc5-syzkaller-00022-g76f598ba7d8e #0
> > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
> > Call Trace:
> > <TASK>
> > __dump_stack lib/dump_stack.c:88 [inline]
> > dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
> > check_noncircular+0x2fe/0x3b0 kernel/locking/lockdep.c:2178
> > check_prev_add kernel/locking/lockdep.c:3098 [inline]
> > check_prevs_add kernel/locking/lockdep.c:3217 [inline]
> > validate_chain+0x166b/0x58e0 kernel/locking/lockdep.c:3832
> > __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
> > lock_acquire+0x1e1/0x520 kernel/locking/lockdep.c:5669
> > down_read+0x3d/0x50 kernel/locking/rwsem.c:1520
> > __do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
> > __se_sys_quotactl_fd+0x2fb/0x440 fs/quota/quota.c:972
> > do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
> > entry_SYSCALL_64_after_hwframe+0x63/0xcd
> > RIP: 0033:0x7f3c2aa8c169
> > Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> > RSP: 002b:00007f3c2b826168 EFLAGS: 00000246 ORIG_RAX: 00000000000001bb
> > RAX: ffffffffffffffda RBX: 00007f3c2ababf80 RCX: 00007f3c2aa8c169
> > RDX: ffffffffffffffff RSI: ffffffff80000601 RDI: 0000000000000003
> > RBP: 00007f3c2aae7ca1 R08: 0000000000000000 R09: 0000000000000000
> > R10: 00000000200024c0 R11: 0000000000000246 R12: 0000000000000000
> > R13: 00007ffd71f38adf R14: 00007f3c2b826300 R15: 0000000000022000
> > </TASK>
> >
> >
> > ---
> > This report is generated by a bot. It may contain errors.
> > See https://goo.gl/tpsmEJ for more information about syzbot.
> > syzbot engineers can be reached at syzkaller@googlegroups.com.
> >
> > syzbot will keep track of this issue. See:
> > https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [syzbot] [fs?] possible deadlock in quotactl_fd
2023-04-11 10:55 ` Jan Kara
@ 2023-04-11 13:40 ` Christian Brauner
2023-04-11 14:01 ` Christian Brauner
0 siblings, 1 reply; 10+ messages in thread
From: Christian Brauner @ 2023-04-11 13:40 UTC (permalink / raw)
To: Jan Kara
Cc: jack, linux-fsdevel, linux-ext4, linux-kernel, syzkaller-bugs,
syzbot, syzbot
On Tue, Apr 11, 2023 at 12:55:42PM +0200, Jan Kara wrote:
> On Tue 11-04-23 12:11:52, Christian Brauner wrote:
> > On Mon, Apr 10, 2023 at 11:53:46PM -0700, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following issue on:
> > >
> > > HEAD commit: 0d3eb744aed4 Merge tag 'urgent-rcu.2023.04.07a' of git://g..
> > > git tree: upstream
> > > console output: https://syzkaller.appspot.com/x/log.txt?x=11798e4bc80000
> > > kernel config: https://syzkaller.appspot.com/x/.config?x=c21559e740385326
> > > dashboard link: https://syzkaller.appspot.com/bug?extid=cdcd444e4d3a256ada13
> > > compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
> > >
> > > Unfortunately, I don't have any reproducer for this issue yet.
> > >
> > > Downloadable assets:
> > > disk image: https://storage.googleapis.com/syzbot-assets/a02928003efa/disk-0d3eb744.raw.xz
> > > vmlinux: https://storage.googleapis.com/syzbot-assets/7839447005a4/vmlinux-0d3eb744.xz
> > > kernel image: https://storage.googleapis.com/syzbot-assets/d26ab3184148/bzImage-0d3eb744.xz
> > >
> > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > Reported-by: syzbot+cdcd444e4d3a256ada13@syzkaller.appspotmail.com
> > >
> > > ======================================================
> > > WARNING: possible circular locking dependency detected
> > > 6.3.0-rc6-syzkaller-00016-g0d3eb744aed4 #0 Not tainted
> > > ------------------------------------------------------
> > > syz-executor.3/11858 is trying to acquire lock:
> > > ffff88802a3bc0e0 (&type->s_umount_key#31){++++}-{3:3}, at: __do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
> > >
> > > but task is already holding lock:
> > > ffff88802a3bc460 (sb_writers#4){.+.+}-{0:0}, at: __do_sys_quotactl_fd+0xd3/0x3f0 fs/quota/quota.c:990
> > >
> > > which lock already depends on the new lock.
> > >
> > >
> > > the existing dependency chain (in reverse order) is:
> > >
> > > -> #1 (sb_writers#4){.+.+}-{0:0}:
> > > percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
> > > __sb_start_write include/linux/fs.h:1477 [inline]
> > > sb_start_write include/linux/fs.h:1552 [inline]
> > > write_mmp_block+0xc4/0x820 fs/ext4/mmp.c:50
> > > ext4_multi_mount_protect+0x50d/0xac0 fs/ext4/mmp.c:343
> > > __ext4_remount fs/ext4/super.c:6543 [inline]
> > > ext4_reconfigure+0x242b/0x2b60 fs/ext4/super.c:6642
> > > reconfigure_super+0x40c/0xa30 fs/super.c:956
> > > vfs_fsconfig_locked fs/fsopen.c:254 [inline]
> > > __do_sys_fsconfig+0xa3a/0xc20 fs/fsopen.c:439
> > > do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > > do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
> > > entry_SYSCALL_64_after_hwframe+0x63/0xcd
> > >
> > > -> #0 (&type->s_umount_key#31){++++}-{3:3}:
> > > check_prev_add kernel/locking/lockdep.c:3098 [inline]
> > > check_prevs_add kernel/locking/lockdep.c:3217 [inline]
> > > validate_chain kernel/locking/lockdep.c:3832 [inline]
> > > __lock_acquire+0x2ec7/0x5d40 kernel/locking/lockdep.c:5056
> > > lock_acquire kernel/locking/lockdep.c:5669 [inline]
> > > lock_acquire+0x1af/0x520 kernel/locking/lockdep.c:5634
> > > down_write+0x92/0x200 kernel/locking/rwsem.c:1573
> > > __do_sys_quotactl_fd+0x174/0x3f0 fs/quota/quota.c:997
> > > do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > > do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
> > > entry_SYSCALL_64_after_hwframe+0x63/0xcd
> > >
> > > other info that might help us debug this:
> > >
> > > Possible unsafe locking scenario:
> > >
> > > CPU0 CPU1
> > > ---- ----
> > > lock(sb_writers#4);
> > > lock(&type->s_umount_key#31);
> > > lock(sb_writers#4);
> > > lock(&type->s_umount_key#31);
> > >
> > > *** DEADLOCK ***
> >
> > Hmkay, I understand how this happens, I think:
> >
> > fsconfig(FSCONFIG_CMD_RECONFIGURE) quotactl_fd(Q_QUOTAON/Q_QUOTAOFF/Q_XQUOTAON/Q_XQUOTAOFF)
> > -> mnt_want_write(f.file->f_path.mnt);
> > -> down_write(&sb->s_umount); -> __sb_start_write(sb, SB_FREEZE_WRITE)
> > -> reconfigure_super(fc);
> > -> ext4_multi_mount_protect()
> > -> __sb_start_write(sb, SB_FREEZE_WRITE) -> down_write(&sb->s_umount);
> > -> up_write(&sb->s_umount);
>
> Thanks for having a look!
>
> > I have to step away from the computer now for a bit, but naively it seems
> > that the locking order for quotactl_fd() should be the other way around.
> >
> > But while I'm here, why does quotactl_fd() take mnt_want_write() but
> > quotactl() doesn't? It seems that if one needs to take it both need to
> > take it.
>
> Couple of notes here:
>
> 1) quotactl() handles the filesystem freezing by grabbing the s_umount
> semaphore, checking the superblock freeze state (it cannot change while
> s_umount is held) and proceeding if fs is not frozen. This logic is hidden
> in quotactl_block().
>
> 2) The proper lock ordering is indeed freeze-protection -> s_umount because
> that is implicitly dictated by how filesystem freezing works. If you grab
Yep.
> s_umount and then try to grab freeze protection, you may effectively wait
> for fs to get unfrozen which cannot happen while s_umount is held as
> thaw_super() needs to grab it.
Yep.
>
> 3) Hence this could be viewed as ext4 bug that it tries to grab freeze
> protection from remount path. *But* reconfigure_super() actually has:
>
> if (sb->s_writers.frozen != SB_UNFROZEN)
> return -EBUSY;
And user_get_super() grabs sb->s_umount which means it's not racy to
check for SB_UNFROZEN. I missed that rushing out the door. :)
>
> so even ext4 is in fact safe because the filesystem is guaranteed to not be
> frozen during remount. But still we should probably tweak the ext4 code to
> avoid this lockdep warning...
Thanks for that!
Christian
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [syzbot] [fs?] possible deadlock in quotactl_fd
2023-04-11 13:40 ` Christian Brauner
@ 2023-04-11 14:01 ` Christian Brauner
2023-04-12 10:18 ` Jan Kara
0 siblings, 1 reply; 10+ messages in thread
From: Christian Brauner @ 2023-04-11 14:01 UTC (permalink / raw)
To: Jan Kara
Cc: jack, linux-fsdevel, linux-ext4, linux-kernel, syzkaller-bugs,
syzbot, syzbot
On Tue, Apr 11, 2023 at 03:40:25PM +0200, Christian Brauner wrote:
> On Tue, Apr 11, 2023 at 12:55:42PM +0200, Jan Kara wrote:
> > On Tue 11-04-23 12:11:52, Christian Brauner wrote:
> > > On Mon, Apr 10, 2023 at 11:53:46PM -0700, syzbot wrote:
> > > > [...]
> > >
> > > Hmkay, I understand how this happens, I think:
> > >
> > > fsconfig(FSCONFIG_CMD_RECONFIGURE) quotactl_fd(Q_QUOTAON/Q_QUOTAOFF/Q_XQUOTAON/Q_XQUOTAOFF)
> > > -> mnt_want_write(f.file->f_path.mnt);
> > > -> down_write(&sb->s_umount); -> __sb_start_write(sb, SB_FREEZE_WRITE)
> > > -> reconfigure_super(fc);
> > > -> ext4_multi_mount_protect()
> > > -> __sb_start_write(sb, SB_FREEZE_WRITE) -> down_write(&sb->s_umount);
> > > -> up_write(&sb->s_umount);
> >
> > Thanks for having a look!
> >
> > > I have to step away from the computer now for a bit, but naively it seems
> > > that the locking order for quotactl_fd() should be the other way around.
> > >
> > > But while I'm here, why does quotactl_fd() take mnt_want_write() but
> > > quotactl() doesn't? It seems that if one needs to take it both need to
> > > take it.
> >
> > Couple of notes here:
> >
> > 1) quotactl() handles the filesystem freezing by grabbing the s_umount
> > semaphore, checking the superblock freeze state (it cannot change while
> > s_umount is held) and proceeding if fs is not frozen. This logic is hidden
> > in quotactl_block().
> >
> > 2) The proper lock ordering is indeed freeze-protection -> s_umount because
> > that is implicitly dictated by how filesystem freezing works. If you grab
>
> Yep.
One final thought about this: quotactl() and quotactl_fd() could do the
same thing though, right? quotactl() could just be made to use the same
locking scheme as quotactl_fd(). Not saying it has to, but the code
would probably be easier to understand and maintain if both used the same scheme.
Christian
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [syzbot] [fs?] possible deadlock in quotactl_fd
2023-04-11 14:01 ` Christian Brauner
@ 2023-04-12 10:18 ` Jan Kara
0 siblings, 0 replies; 10+ messages in thread
From: Jan Kara @ 2023-04-12 10:18 UTC (permalink / raw)
To: Christian Brauner
Cc: Jan Kara, jack, linux-fsdevel, linux-ext4, linux-kernel,
syzkaller-bugs, syzbot, syzbot
On Tue 11-04-23 16:01:16, Christian Brauner wrote:
> On Tue, Apr 11, 2023 at 03:40:25PM +0200, Christian Brauner wrote:
> > On Tue, Apr 11, 2023 at 12:55:42PM +0200, Jan Kara wrote:
> > > On Tue 11-04-23 12:11:52, Christian Brauner wrote:
> > > > On Mon, Apr 10, 2023 at 11:53:46PM -0700, syzbot wrote:
> > > > > [...]
> > > >
> > > > Hmkay, I understand how this happens, I think:
> > > >
> > > > fsconfig(FSCONFIG_CMD_RECONFIGURE) quotactl_fd(Q_QUOTAON/Q_QUOTAOFF/Q_XQUOTAON/Q_XQUOTAOFF)
> > > > -> mnt_want_write(f.file->f_path.mnt);
> > > > -> down_write(&sb->s_umount); -> __sb_start_write(sb, SB_FREEZE_WRITE)
> > > > -> reconfigure_super(fc);
> > > > -> ext4_multi_mount_protect()
> > > > -> __sb_start_write(sb, SB_FREEZE_WRITE) -> down_write(&sb->s_umount);
> > > > -> up_write(&sb->s_umount);
> > >
> > > Thanks for having a look!
> > >
> > > > I have to step away from the computer now for a bit, but naively it seems
> > > > that the locking order for quotactl_fd() should be the other way around.
> > > >
> > > > But while I'm here, why does quotactl_fd() take mnt_want_write() but
> > > > quotactl() doesn't? It seems that if one needs to take it both need to
> > > > take it.
> > >
> > > Couple of notes here:
> > >
> > > 1) quotactl() handles the filesystem freezing by grabbing the s_umount
> > > semaphore, checking the superblock freeze state (it cannot change while
> > > s_umount is held) and proceeding if fs is not frozen. This logic is hidden
> > > in quotactl_block().
> > >
> > > 2) The proper lock ordering is indeed freeze-protection -> s_umount because
> > > that is implicitly dictated by how filesystem freezing works. If you grab
> >
> > Yep.
>
> One final thought about this: quotactl() and quotactl_fd() could do the
> same thing though, right? quotactl() could just be made to use the same
> locking scheme as quotactl_fd(). Not saying it has to, but the code
> would probably be easier to understand and maintain if both used the same scheme.
Yes, that would be nice. But quotactl(2) gets a block device as an
argument and needs to translate that to a superblock (user_get_super())
before it could use sb_start_write() to protect against fs freezing - but
at that point we already hold s_umount from user_get_super(), so we can't
take freeze protection due to lock ordering. That's why handling the
freeze protection is so contrived in quotactl(2). We used to have a
variant of user_get_super() that guaranteed returning a thawed
superblock, but Christoph didn't like it and only the quota code was
using it, so the logic got open-coded in the quota code instead (see
commit 60b498852bf2 ("fs: remove get_super_thawed and
get_super_exclusive_thawed")).
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [syzbot] [ext4?] [fat?] possible deadlock in sys_quotactl_fd
2023-04-11 6:53 ` [syzbot] [fs?] possible deadlock in sys_quotactl_fd syzbot
2023-04-11 10:11 ` [syzbot] [fs?] possible deadlock in quotactl_fd Christian Brauner
@ 2023-04-29 17:36 ` syzbot
1 sibling, 0 replies; 10+ messages in thread
From: syzbot @ 2023-04-29 17:36 UTC (permalink / raw)
To: adilger.kernel, brauner, jack, jack, linkinjeon, linux-ext4,
linux-fsdevel, linux-kernel, sj1557.seo, syzkaller-bugs, tytso
syzbot has found a reproducer for the following issue on:
HEAD commit: 14f8db1c0f9a Merge branch 'for-next/core' into for-kernelci
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
console output: https://syzkaller.appspot.com/x/log.txt?x=115bdcf8280000
kernel config: https://syzkaller.appspot.com/x/.config?x=a837a8ba7e88bb45
dashboard link: https://syzkaller.appspot.com/bug?extid=aacb82fca60873422114
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1405a2b4280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=17bd276fc80000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/ad6ce516eed3/disk-14f8db1c.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1f38c2cc7667/vmlinux-14f8db1c.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d795115eee39/Image-14f8db1c.gz.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/b9312932adf4/mount_0.gz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+aacb82fca60873422114@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.3.0-rc7-syzkaller-g14f8db1c0f9a #0 Not tainted
------------------------------------------------------
syz-executor396/5961 is trying to acquire lock:
ffff0000d9e5c0e0 (&type->s_umount_key#29){++++}-{3:3}, at: __do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
ffff0000d9e5c0e0 (&type->s_umount_key#29){++++}-{3:3}, at: __se_sys_quotactl_fd fs/quota/quota.c:972 [inline]
ffff0000d9e5c0e0 (&type->s_umount_key#29){++++}-{3:3}, at: __arm64_sys_quotactl_fd+0x30c/0x4a4 fs/quota/quota.c:972
but task is already holding lock:
ffff0000d9e5c460 (sb_writers#3){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:394
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (sb_writers#3){.+.+}-{0:0}:
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1477 [inline]
sb_start_write include/linux/fs.h:1552 [inline]
write_mmp_block+0xe4/0xb70 fs/ext4/mmp.c:50
ext4_multi_mount_protect+0x2f8/0x8c8 fs/ext4/mmp.c:343
__ext4_remount fs/ext4/super.c:6543 [inline]
ext4_reconfigure+0x2180/0x2928 fs/ext4/super.c:6642
reconfigure_super+0x328/0x738 fs/super.c:956
do_remount fs/namespace.c:2704 [inline]
path_mount+0xc0c/0xe04 fs/namespace.c:3364
do_mount fs/namespace.c:3385 [inline]
__do_sys_mount fs/namespace.c:3594 [inline]
__se_sys_mount fs/namespace.c:3571 [inline]
__arm64_sys_mount+0x45c/0x594 fs/namespace.c:3571
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x198 arch/arm64/kernel/syscall.c:193
el0_svc+0x4c/0x15c arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:591
-> #0 (&type->s_umount_key#29){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain kernel/locking/lockdep.c:3832 [inline]
__lock_acquire+0x3338/0x764c kernel/locking/lockdep.c:5056
lock_acquire+0x238/0x718 kernel/locking/lockdep.c:5669
down_read+0x50/0x6c kernel/locking/rwsem.c:1520
__do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
__se_sys_quotactl_fd fs/quota/quota.c:972 [inline]
__arm64_sys_quotactl_fd+0x30c/0x4a4 fs/quota/quota.c:972
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x198 arch/arm64/kernel/syscall.c:193
el0_svc+0x4c/0x15c arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:591
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(sb_writers#3);
lock(&type->s_umount_key#29);
lock(sb_writers#3);
lock(&type->s_umount_key#29);
*** DEADLOCK ***
1 lock held by syz-executor396/5961:
#0: ffff0000d9e5c460 (sb_writers#3){.+.+}-{0:0}, at: mnt_want_write+0x44/0x9c fs/namespace.c:394
stack backtrace:
CPU: 0 PID: 5961 Comm: syz-executor396 Not tainted 6.3.0-rc7-syzkaller-g14f8db1c0f9a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Call trace:
dump_backtrace+0x1b8/0x1e4 arch/arm64/kernel/stacktrace.c:233
show_stack+0x2c/0x44 arch/arm64/kernel/stacktrace.c:240
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xd0/0x124 lib/dump_stack.c:106
dump_stack+0x1c/0x28 lib/dump_stack.c:113
print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2056
check_noncircular+0x2cc/0x378 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3098 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain kernel/locking/lockdep.c:3832 [inline]
__lock_acquire+0x3338/0x764c kernel/locking/lockdep.c:5056
lock_acquire+0x238/0x718 kernel/locking/lockdep.c:5669
down_read+0x50/0x6c kernel/locking/rwsem.c:1520
__do_sys_quotactl_fd fs/quota/quota.c:999 [inline]
__se_sys_quotactl_fd fs/quota/quota.c:972 [inline]
__arm64_sys_quotactl_fd+0x30c/0x4a4 fs/quota/quota.c:972
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x198 arch/arm64/kernel/syscall.c:193
el0_svc+0x4c/0x15c arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:591
---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [syzbot] [ext4?] possible deadlock in quotactl_fd
2023-04-11 6:53 [syzbot] [fs?] possible deadlock in quotactl_fd syzbot
2023-04-11 6:53 ` [syzbot] [fs?] possible deadlock in sys_quotactl_fd syzbot
@ 2023-05-10 7:31 ` syzbot
2023-08-16 3:22 ` syzbot
2 siblings, 0 replies; 10+ messages in thread
From: syzbot @ 2023-05-10 7:31 UTC (permalink / raw)
To: adilger.kernel, brauner, jack, jack, linux-ext4, linux-fsdevel,
linux-kernel, syzkaller-bugs, tytso
syzbot has found a reproducer for the following issue on:
HEAD commit: 1dc3731daf1f Merge tag 'for-6.4-rc1-tag' of git://git.kern..
git tree: upstream
console+strace: https://syzkaller.appspot.com/x/log.txt?x=13ef9566280000
kernel config: https://syzkaller.appspot.com/x/.config?x=8bc832f563d8bf38
dashboard link: https://syzkaller.appspot.com/bug?extid=cdcd444e4d3a256ada13
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=12cc2a92280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10dc5fa6280000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/c41d6364878c/disk-1dc3731d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ed2d9614f1c1/vmlinux-1dc3731d.xz
kernel image: https://storage.googleapis.com/syzbot-assets/903dc319e88d/bzImage-1dc3731d.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/46ea6ec4210f/mount_0.gz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+cdcd444e4d3a256ada13@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.4.0-rc1-syzkaller-00011-g1dc3731daf1f #0 Not tainted
------------------------------------------------------
syz-executor197/5038 is trying to acquire lock:
ffff88802b6260e0 (&type->s_umount_key#32){++++}-{3:3}, at: __do_sys_quotactl_fd+0x27e/0x3f0 fs/quota/quota.c:999
but task is already holding lock:
ffff88802b626460 (sb_writers#4){.+.+}-{0:0}, at: __do_sys_quotactl_fd+0xd3/0x3f0 fs/quota/quota.c:990
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (sb_writers#4){.+.+}-{0:0}:
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write include/linux/fs.h:1494 [inline]
sb_start_write include/linux/fs.h:1569 [inline]
write_mmp_block+0xc4/0x820 fs/ext4/mmp.c:50
ext4_multi_mount_protect+0x50d/0xac0 fs/ext4/mmp.c:347
__ext4_remount fs/ext4/super.c:6578 [inline]
ext4_reconfigure+0x242b/0x2b60 fs/ext4/super.c:6677
reconfigure_super+0x40c/0xa30 fs/super.c:956
vfs_fsconfig_locked fs/fsopen.c:254 [inline]
__do_sys_fsconfig+0xa5e/0xc50 fs/fsopen.c:439
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
-> #0 (&type->s_umount_key#32){++++}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3108 [inline]
check_prevs_add kernel/locking/lockdep.c:3227 [inline]
validate_chain kernel/locking/lockdep.c:3842 [inline]
__lock_acquire+0x2f21/0x5df0 kernel/locking/lockdep.c:5074
lock_acquire kernel/locking/lockdep.c:5691 [inline]
lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5656
down_read+0x3d/0x50 kernel/locking/rwsem.c:1520
__do_sys_quotactl_fd+0x27e/0x3f0 fs/quota/quota.c:999
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  rlock(sb_writers#4);
                               lock(&type->s_umount_key#32);
                               lock(sb_writers#4);
  rlock(&type->s_umount_key#32);
*** DEADLOCK ***
1 lock held by syz-executor197/5038:
#0: ffff88802b626460 (sb_writers#4){.+.+}-{0:0}, at: __do_sys_quotactl_fd+0xd3/0x3f0 fs/quota/quota.c:990
stack backtrace:
CPU: 1 PID: 5038 Comm: syz-executor197 Not tainted 6.4.0-rc1-syzkaller-00011-g1dc3731daf1f #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2188
check_prev_add kernel/locking/lockdep.c:3108 [inline]
check_prevs_add kernel/locking/lockdep.c:3227 [inline]
validate_chain kernel/locking/lockdep.c:3842 [inline]
__lock_acquire+0x2f21/0x5df0 kernel/locking/lockdep.c:5074
lock_acquire kernel/locking/lockdep.c:5691 [inline]
lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5656
down_read+0x3d/0x50 kernel/locking/rwsem.c:1520
__do_sys_quotactl_fd+0x27e/0x3f0 fs/quota/quota.c:999
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fc4ee1d7359
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 41 15 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fc4ee1832f8 EFLAGS: 00000246 ORIG_RAX: 00000000000001bb
RAX: ffffffffffffffda RBX: 00007fc4ee25b7a0 RCX: 00007fc4ee1d7359
RDX: 00000000ffffffff RSI: ffffffff80000802 RDI: 0000000000000003
RBP: 00007fc4ee22858c R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 7272655f61746164
R13: 6974797a616c6f6e R14: 0030656c69662f2e R15: 00007fc4ee25b7a8
</TASK>
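The two-path inversion above is exactly the kind of cycle lockdep's dependency graph flags: the remount path records "s_umount held, then sb_writers taken", while quotactl_fd tries the reverse order. A minimal, purely illustrative sketch of that bookkeeping (not kernel code; the class and method names here are hypothetical):

```python
from collections import defaultdict

class OrderChecker:
    """Toy model of lockdep-style lock-order tracking."""

    def __init__(self):
        # lock class -> set of lock classes observed taken while it was held
        self.edges = defaultdict(set)

    def acquire(self, held, new):
        """Record that `new` was taken while `held` was held.

        Returns a warning string if this edge closes a cycle in the
        dependency graph (a possible ABBA deadlock), else None.
        """
        self.edges[held].add(new)
        if self._reaches(new, held):
            return f"possible deadlock: {held} -> {new} closes a cycle"
        return None

    def _reaches(self, src, dst, seen=None):
        # Depth-first search: is `dst` reachable from `src`?
        seen = seen or set()
        if src == dst:
            return True
        seen.add(src)
        return any(self._reaches(n, dst, seen)
                   for n in self.edges[src] - seen)

chk = OrderChecker()
# ext4 remount path (-> #1): s_umount held, then sb_writers taken
print(chk.acquire("s_umount", "sb_writers"))   # first edge, no cycle yet
# quotactl_fd path (-> #0): sb_writers held, then s_umount taken
print(chk.acquire("sb_writers", "s_umount"))   # reports the cycle
```

The real lockdep works on lock *classes* rather than instances and validates read/write acquisitions separately, but the cycle check over recorded orderings is the same idea.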
---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
* Re: [syzbot] [ext4?] possible deadlock in quotactl_fd
2023-04-11 6:53 [syzbot] [fs?] possible deadlock in quotactl_fd syzbot
2023-04-11 6:53 ` [syzbot] [fs?] possible deadlock in sys_quotactl_fd syzbot
2023-05-10 7:31 ` [syzbot] [ext4?] possible deadlock in quotactl_fd syzbot
@ 2023-08-16 3:22 ` syzbot
2 siblings, 0 replies; 10+ messages in thread
From: syzbot @ 2023-08-16 3:22 UTC (permalink / raw)
To: adilger.kernel, brauner, jack, jack, linux-ext4, linux-fsdevel,
linux-kernel, syzkaller-bugs, tytso
syzbot suspects this issue was fixed by commit:
commit 949f95ff39bf188e594e7ecd8e29b82eb108f5bf
Author: Jan Kara <jack@suse.cz>
Date: Tue Apr 11 12:10:19 2023 +0000
ext4: fix lockdep warning when enabling MMP
bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=13a06a65a80000
start commit: 1dc3731daf1f Merge tag 'for-6.4-rc1-tag' of git://git.kern..
git tree: upstream
kernel config: https://syzkaller.appspot.com/x/.config?x=8bc832f563d8bf38
dashboard link: https://syzkaller.appspot.com/bug?extid=cdcd444e4d3a256ada13
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=12cc2a92280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10dc5fa6280000
If the result looks correct, please mark the issue as fixed by replying with:
#syz fix: ext4: fix lockdep warning when enabling MMP
For information about bisection process see: https://goo.gl/tpsmEJ#bisection
Thread overview: 10+ messages
2023-04-11 6:53 [syzbot] [fs?] possible deadlock in quotactl_fd syzbot
2023-04-11 6:53 ` [syzbot] [fs?] possible deadlock in sys_quotactl_fd syzbot
2023-04-11 10:11 ` [syzbot] [fs?] possible deadlock in quotactl_fd Christian Brauner
2023-04-11 10:55 ` Jan Kara
2023-04-11 13:40 ` Christian Brauner
2023-04-11 14:01 ` Christian Brauner
2023-04-12 10:18 ` Jan Kara
2023-04-29 17:36 ` [syzbot] [ext4?] [fat?] possible deadlock in sys_quotactl_fd syzbot
2023-05-10 7:31 ` [syzbot] [ext4?] possible deadlock in quotactl_fd syzbot
2023-08-16 3:22 ` syzbot