* [syzbot] [cgroups?] possible deadlock in static_key_slow_inc (3)
@ 2023-06-08 16:56 ` syzbot
0 siblings, 0 replies; 6+ messages in thread
From: syzbot @ 2023-06-08 16:56 UTC (permalink / raw)
To: cgroups, hannes, linux-kernel, lizefan.x, syzkaller-bugs, tj
Hello,
syzbot found the following issue on:
HEAD commit: 5f63595ebd82 Merge tag 'input-for-v6.4-rc5' of git://git.k..
git tree: upstream
console+strace: https://syzkaller.appspot.com/x/log.txt?x=153fcc63280000
kernel config: https://syzkaller.appspot.com/x/.config?x=7474de833c217bf4
dashboard link: https://syzkaller.appspot.com/bug?extid=2ab700fe1829880a2ec6
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1321e2fd280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=11946afd280000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/d12b9e46ffe8/disk-5f63595e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/c9044ded7edd/vmlinux-5f63595e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/09f0fd3926e8/bzImage-5f63595e.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+2ab700fe1829880a2ec6@syzkaller.appspotmail.com
======================================================
WARNING: possible circular locking dependency detected
6.4.0-rc5-syzkaller-00024-g5f63595ebd82 #0 Not tainted
------------------------------------------------------
syz-executor324/4995 is trying to acquire lock:
ffffffff8cdc3ff0 (cpu_hotplug_lock){++++}-{0:0}, at: static_key_slow_inc+0x12/0x30 kernel/jump_label.c:185
but task is already holding lock:
ffffffff8cf5a688 (freezer_mutex){+.+.}-{3:3}, at: freezer_css_online+0x4f/0x150 kernel/cgroup/legacy_freezer.c:111
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (freezer_mutex){+.+.}-{3:3}:
lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5705
__mutex_lock_common+0x1d8/0x2530 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x1b/0x20 kernel/locking/mutex.c:799
freezer_change_state kernel/cgroup/legacy_freezer.c:389 [inline]
freezer_write+0xa8/0x3f0 kernel/cgroup/legacy_freezer.c:429
cgroup_file_write+0x2ca/0x6a0 kernel/cgroup/cgroup.c:4071
kernfs_fop_write_iter+0x3a6/0x4f0 fs/kernfs/file.c:334
call_write_iter include/linux/fs.h:1868 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x790/0xb20 fs/read_write.c:584
ksys_write+0x1a0/0x2c0 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
-> #0 (cpu_hotplug_lock){++++}-{0:0}:
check_prev_add kernel/locking/lockdep.c:3113 [inline]
check_prevs_add kernel/locking/lockdep.c:3232 [inline]
validate_chain+0x166b/0x58f0 kernel/locking/lockdep.c:3847
__lock_acquire+0x1316/0x2070 kernel/locking/lockdep.c:5088
lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5705
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
cpus_read_lock+0x42/0x150 kernel/cpu.c:310
static_key_slow_inc+0x12/0x30 kernel/jump_label.c:185
freezer_css_online+0xc6/0x150 kernel/cgroup/legacy_freezer.c:117
online_css+0xba/0x260 kernel/cgroup/cgroup.c:5491
css_create kernel/cgroup/cgroup.c:5562 [inline]
cgroup_apply_control_enable+0x7d1/0xae0 kernel/cgroup/cgroup.c:3249
cgroup_mkdir+0xd8f/0x1000 kernel/cgroup/cgroup.c:5758
kernfs_iop_mkdir+0x279/0x400 fs/kernfs/dir.c:1219
vfs_mkdir+0x29d/0x450 fs/namei.c:4115
do_mkdirat+0x264/0x520 fs/namei.c:4138
__do_sys_mkdirat fs/namei.c:4153 [inline]
__se_sys_mkdirat fs/namei.c:4151 [inline]
__x64_sys_mkdirat+0x89/0xa0 fs/namei.c:4151
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(freezer_mutex);
                               lock(cpu_hotplug_lock);
                               lock(freezer_mutex);
  rlock(cpu_hotplug_lock);
*** DEADLOCK ***
4 locks held by syz-executor324/4995:
#0: ffff88807da6c460 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:394
#1: ffff888075c6eee0 (&type->i_mutex_dir_key#7/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
#1: ffff888075c6eee0 (&type->i_mutex_dir_key#7/1){+.+.}-{3:3}, at: filename_create+0x260/0x530 fs/namei.c:3884
#2: ffffffff8cf4f768 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_lock include/linux/cgroup.h:370 [inline]
#2: ffffffff8cf4f768 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0xe9/0x290 kernel/cgroup/cgroup.c:1683
#3: ffffffff8cf5a688 (freezer_mutex){+.+.}-{3:3}, at: freezer_css_online+0x4f/0x150 kernel/cgroup/legacy_freezer.c:111
stack backtrace:
CPU: 0 PID: 4995 Comm: syz-executor324 Not tainted 6.4.0-rc5-syzkaller-00024-g5f63595ebd82 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
check_noncircular+0x2fe/0x3b0 kernel/locking/lockdep.c:2188
check_prev_add kernel/locking/lockdep.c:3113 [inline]
check_prevs_add kernel/locking/lockdep.c:3232 [inline]
validate_chain+0x166b/0x58f0 kernel/locking/lockdep.c:3847
__lock_acquire+0x1316/0x2070 kernel/locking/lockdep.c:5088
lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5705
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
cpus_read_lock+0x42/0x150 kernel/cpu.c:310
static_key_slow_inc+0x12/0x30 kernel/jump_label.c:185
freezer_css_online+0xc6/0x150 kernel/cgroup/legacy_freezer.c:117
online_css+0xba/0x260 kernel/cgroup/cgroup.c:5491
css_create kernel/cgroup/cgroup.c:5562 [inline]
cgroup_apply_control_enable+0x7d1/0xae0 kernel/cgroup/cgroup.c:3249
cgroup_mkdir+0xd8f/0x1000 kernel/cgroup/cgroup.c:5758
kernfs_iop_mkdir+0x279/0x400 fs/kernfs/dir.c:1219
vfs_mkdir+0x29d/0x450 fs/namei.c:4115
do_mkdirat+0x264/0x520 fs/namei.c:4138
__do_sys_mkdirat fs/namei.c:4153 [inline]
__se_sys_mkdirat fs/namei.c:4151 [inline]
__x64_sys_mkdirat+0x89/0xa0 fs/namei.c:4151
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f7e55624e09
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 51 15 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff7236ac98 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f7e55624e09
RDX: 00000000000001ff RSI: 0000000020000180 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007fff7236acc0 R09: 00007fff7236acc0
R10: 00007fff7236acc0 R11: 0000000000000246 R12: 00007fff7236acbc
R13: 00007fff7236acd0 R14: 00007fff7236ad10 R15: 0000000000000000
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
* Re: [syzbot] [cgroups?] possible deadlock in static_key_slow_inc (3)
@ 2023-06-09 3:17 ` syzbot
0 siblings, 0 replies; 6+ messages in thread
From: syzbot @ 2023-06-09 3:17 UTC (permalink / raw)
To: cgroups, hannes, hverkuil-cisco, hverkuil, linux-kernel,
lizefan.x, mchehab, syzkaller-bugs, tj
syzbot has bisected this issue to:
commit eb1d969203eb8212741751f88dcf5cb56bb11830
Author: Hans Verkuil <hverkuil@xs4all.nl>
Date: Fri Oct 21 12:21:25 2022 +0000
media: vivid: fix control handler mutex deadlock
bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=16af9d95280000
start commit: 5f63595ebd82 Merge tag 'input-for-v6.4-rc5' of git://git.k..
git tree: upstream
final oops: https://syzkaller.appspot.com/x/report.txt?x=15af9d95280000
console output: https://syzkaller.appspot.com/x/log.txt?x=11af9d95280000
kernel config: https://syzkaller.appspot.com/x/.config?x=3c980bfe8b399968
dashboard link: https://syzkaller.appspot.com/bug?extid=2ab700fe1829880a2ec6
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=142e0463280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10a32add280000
Reported-by: syzbot+2ab700fe1829880a2ec6@syzkaller.appspotmail.com
Fixes: eb1d969203eb ("media: vivid: fix control handler mutex deadlock")
For information about bisection process see: https://goo.gl/tpsmEJ#bisection
* [PATCH] cgroup,freezer: hold cpu_hotplug_lock before freezer_mutex in freezer_css_{online,offline}()
[not found] ` <000000000000d1565005fda9cef1@google.com>
@ 2023-06-11 13:48 ` Tetsuo Handa
[not found] ` <69ab449f-1981-2d53-79fb-b2ac91ea9cef@I-love.SAKURA.ne.jp>
0 siblings, 1 reply; 6+ messages in thread
From: Tetsuo Handa @ 2023-06-11 13:48 UTC (permalink / raw)
To: Tejun Heo, Zefan Li, Johannes Weiner, Cgroups
syzbot is again reporting a circular locking dependency between
cpu_hotplug_lock and freezer_mutex. Fix it the same way as
commit 57dcd64c7e036299 ("cgroup,freezer: hold cpu_hotplug_lock
before freezer_mutex").
Reported-by: syzbot <syzbot+2ab700fe1829880a2ec6@syzkaller.appspotmail.com>
Closes: https://syzkaller.appspot.com/bug?extid=2ab700fe1829880a2ec6
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Tested-by: syzbot <syzbot+2ab700fe1829880a2ec6@syzkaller.appspotmail.com>
---
kernel/cgroup/legacy_freezer.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/kernel/cgroup/legacy_freezer.c b/kernel/cgroup/legacy_freezer.c
index 936473203a6b..122dacb3a443 100644
--- a/kernel/cgroup/legacy_freezer.c
+++ b/kernel/cgroup/legacy_freezer.c
@@ -108,16 +108,18 @@ static int freezer_css_online(struct cgroup_subsys_state *css)
 	struct freezer *freezer = css_freezer(css);
 	struct freezer *parent = parent_freezer(freezer);
 
+	cpus_read_lock();
 	mutex_lock(&freezer_mutex);
 
 	freezer->state |= CGROUP_FREEZER_ONLINE;
 
 	if (parent && (parent->state & CGROUP_FREEZING)) {
 		freezer->state |= CGROUP_FREEZING_PARENT | CGROUP_FROZEN;
-		static_branch_inc(&freezer_active);
+		static_branch_inc_cpuslocked(&freezer_active);
 	}
 
 	mutex_unlock(&freezer_mutex);
+	cpus_read_unlock();
 	return 0;
 }
 
@@ -132,14 +134,16 @@ static void freezer_css_offline(struct cgroup_subsys_state *css)
 {
 	struct freezer *freezer = css_freezer(css);
 
+	cpus_read_lock();
 	mutex_lock(&freezer_mutex);
 
 	if (freezer->state & CGROUP_FREEZING)
-		static_branch_dec(&freezer_active);
+		static_branch_dec_cpuslocked(&freezer_active);
 
 	freezer->state = 0;
 
 	mutex_unlock(&freezer_mutex);
+	cpus_read_unlock();
 }
 
 static void freezer_css_free(struct cgroup_subsys_state *css)
--
2.18.4
* Re: [PATCH] cgroup,freezer: hold cpu_hotplug_lock before freezer_mutex in freezer_css_{online,offline}()
[not found] ` <69ab449f-1981-2d53-79fb-b2ac91ea9cef@I-love.SAKURA.ne.jp>
@ 2023-06-12 16:43 ` Tejun Heo
0 siblings, 0 replies; 6+ messages in thread
From: Tejun Heo @ 2023-06-12 16:43 UTC (permalink / raw)
To: Tetsuo Handa; +Cc: Zefan Li, Johannes Weiner, Cgroups
On Sun, Jun 11, 2023 at 10:48:12PM +0900, Tetsuo Handa wrote:
> syzbot is again reporting a circular locking dependency between
> cpu_hotplug_lock and freezer_mutex. Fix it the same way as
> commit 57dcd64c7e036299 ("cgroup,freezer: hold cpu_hotplug_lock
> before freezer_mutex").
>
> Reported-by: syzbot <syzbot+2ab700fe1829880a2ec6@syzkaller.appspotmail.com>
> Closes: https://syzkaller.appspot.com/bug?extid=2ab700fe1829880a2ec6
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Tested-by: syzbot <syzbot+2ab700fe1829880a2ec6@syzkaller.appspotmail.com>
Tagged w/ the same stable version as the earlier patch and applied to
cgroup/for-6.4-fixes.
Thanks.
--
tejun