* Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()
@ 2022-01-14 16:20 ` Tejun Heo
0 siblings, 0 replies; 21+ messages in thread
From: Tejun Heo @ 2022-01-14 16:20 UTC (permalink / raw)
To: Zhang Qiao
Cc: lizefan.x, hannes, cgroups, linux-kernel, Waiman Long, Michal Koutný
(cc'ing Waiman and Michal and quoting whole body)
Seems sane to me but let's hear what Waiman and Michal think.
On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>
> Hello everyone,
>
> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
> another while also performing a cpu hotplug operation, and got the following call trace.
>
> This may lead to an inconsistency between the affinity of the task and cpuset.cpus of the
> destination cpuset, but the task can still be successfully migrated to the destination cpuset cgroup.
>
> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
> doesn't fail, as follows:
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index d0e163a02099..2535d23d2c51 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
> guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>
> cgroup_taskset_for_each(task, css, tset) {
> + cpus_read_lock();
> if (cs != &top_cpuset)
> guarantee_online_cpus(task, cpus_attach);
> else
> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
> * fail. TODO: have a better way to handle failure here
> */
> WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
> + cpus_read_unlock();
>
>
> Is there a better solution?
>
> Thanks
>
> log:
> [ 43.853794] ------------[ cut here ]------------
> [ 43.853798] WARNING: CPU: 7 PID: 463 at ../kernel/cgroup/cpuset.c:2279 cpuset_attach+0xee/0x1f0
> [ 43.853806] Modules linked in:
> [ 43.853807] CPU: 7 PID: 463 Comm: bash Not tainted 5.16.0-rc4+ #10
> [ 43.853810] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
> [ 43.853811] RIP: 0010:cpuset_attach+0xee/0x1f0
> [ 43.853814] Code: ff ff 48 85 c0 48 89 c3 74 24 48 81 fd 40 42 54 82 75 96 80 bb 38 07 00 00 6f 48 8b 05 93 b3 55 01 48 89 05 bc 05 bb 01 75 97 <0f> 0b eb b3 48 8b 85 e8 00 00 00 48 85
> [ 43.853816] RSP: 0018:ffffc90000623c30 EFLAGS: 00010246
> [ 43.853818] RAX: 0000000000000000 RBX: ffff888101f39c80 RCX: 0000000000000001
> [ 43.853819] RDX: 0000000000007fff RSI: ffffffff82cd5708 RDI: ffff888101f39c80
> [ 43.853821] RBP: ffff8881001afe00 R08: 0000000000000000 R09: ffffc90000623d00
> [ 43.853822] R10: ffffc900000a3de8 R11: 0000000000000001 R12: ffffc90000623cf0
> [ 43.853823] R13: ffffffff82cd56d0 R14: ffffffff82544240 R15: 0000000000000001
> [ 43.853824] FS: 00007f012414d740(0000) GS:ffff8882b5bc0000(0000) knlGS:0000000000000000
> [ 43.853828] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 43.853829] CR2: 000055cfdb27de28 CR3: 00000001020cc000 CR4: 00000000000006e0
> [ 43.853830] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 43.853831] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [ 43.853832] Call Trace:
> [ 43.853846] <TASK>
> [ 43.853848] cgroup_migrate_execute+0x319/0x410
> [ 43.853853] cgroup_attach_task+0x159/0x200
> [ 43.853857] ? __cgroup1_procs_write.constprop.21+0x10d/0x170
> [ 43.853858] __cgroup1_procs_write.constprop.21+0x10d/0x170
> [ 43.853860] cgroup_file_write+0x65/0x160
> [ 43.853863] kernfs_fop_write_iter+0x12a/0x1a0
> [ 43.853870] new_sync_write+0x11d/0x1b0
> [ 43.853877] vfs_write+0x232/0x290
> [ 43.853880] ksys_write+0x9c/0xd0
> [ 43.853882] ? fpregs_assert_state_consistent+0x19/0x40
> [ 43.853886] do_syscall_64+0x3a/0x80
> [ 43.853896] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 43.853902] RIP: 0033:0x7f012381f224
> [ 43.853904] Code: 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 8d 05 c1 07 2e 00 8b 00 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 45
> [ 43.853906] RSP: 002b:00007ffd3f411f28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> [ 43.853908] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f012381f224
> [ 43.853909] RDX: 0000000000000004 RSI: 000055cfdb297a70 RDI: 0000000000000001
> [ 43.853910] RBP: 000055cfdb297a70 R08: 000000000000000a R09: 0000000000000003
> [ 43.853911] R10: 000000000000000a R11: 0000000000000246 R12: 00007f0123afb760
> [ 43.853913] R13: 0000000000000004 R14: 00007f0123af72a0 R15: 00007f0123af6760
> [ 43.853914] </TASK>
> [ 43.853915] ---[ end trace 8292bcee7ea90403 ]---
--
tejun
* Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()
From: Waiman Long @ 2022-01-14 20:33 UTC (permalink / raw)
To: Tejun Heo, Zhang Qiao
Cc: lizefan.x, hannes, cgroups, linux-kernel, Michal Koutný
On 1/14/22 11:20, Tejun Heo wrote:
> (cc'ing Waiman and Michal and quoting whole body)
>
> Seems sane to me but let's hear what Waiman and Michal think.
>
> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>> Hello everyone,
>>
>> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>> another while also performing a cpu hotplug operation, and got the following call trace.
>>
>> This may lead to an inconsistency between the affinity of the task and cpuset.cpus of the
>> destination cpuset, but the task can still be successfully migrated to the destination cpuset cgroup.
>>
>> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>> doesn't fail, as follows:
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index d0e163a02099..2535d23d2c51 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>> guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>>
>> cgroup_taskset_for_each(task, css, tset) {
>> + cpus_read_lock();
>> if (cs != &top_cpuset)
>> guarantee_online_cpus(task, cpus_attach);
>> else
>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>> * fail. TODO: have a better way to handle failure here
>> */
>> WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>> + cpus_read_unlock();
>>
>>
>> Is there a better solution?
>>
>> Thanks
The change looks OK to me. However, we may need to run the full set of
regression tests to make sure that lockdep won't complain about a
potential deadlock.
Cheers,
Longman
* Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()
From: Zhang Qiao @ 2022-01-17 2:25 UTC (permalink / raw)
To: Waiman Long
Cc: lizefan.x, hannes, cgroups, linux-kernel, Michal Koutný, Tejun Heo
Hello,
On 2022/1/15 4:33, Waiman Long wrote:
> On 1/14/22 11:20, Tejun Heo wrote:
>> (cc'ing Waiman and Michal and quoting whole body)
>>
>> Seems sane to me but let's hear what Waiman and Michal think.
>>
>> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>>> Hello everyone,
>>>
>>> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>>> another while also performing a cpu hotplug operation, and got the following call trace.
>>>
>>> This may lead to an inconsistency between the affinity of the task and cpuset.cpus of the
>>> destination cpuset, but the task can still be successfully migrated to the destination cpuset cgroup.
>>>
>>> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>>> doesn't fail, as follows:
>>>
>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>> index d0e163a02099..2535d23d2c51 100644
>>> --- a/kernel/cgroup/cpuset.c
>>> +++ b/kernel/cgroup/cpuset.c
>>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>> guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>>>
>>> cgroup_taskset_for_each(task, css, tset) {
>>> + cpus_read_lock();
>>> if (cs != &top_cpuset)
>>> guarantee_online_cpus(task, cpus_attach);
>>> else
>>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>> * fail. TODO: have a better way to handle failure here
>>> */
>>> WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>>> + cpus_read_unlock();
>>>
>>>
>>> Is there a better solution?
>>>
>>> Thanks
>
> The change looks OK to me. However, we may need to run the full set of regression tests to make sure that lockdep won't complain about a potential deadlock.
>
I ran the test with lockdep enabled and got the lockdep warning below,
so we should take cpu_hotplug_lock first and then take cpuset_rwsem.
Thanks,
Zhang Qiao
[ 38.420372] ======================================================
[ 38.421339] WARNING: possible circular locking dependency detected
[ 38.422312] 5.16.0-rc4+ #13 Not tainted
[ 38.422920] ------------------------------------------------------
[ 38.423883] bash/594 is trying to acquire lock:
[ 38.424595] ffffffff8286afc0 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_attach+0xc2/0x1e0
[ 38.425880]
[ 38.425880] but task is already holding lock:
[ 38.426787] ffffffff8296a5a0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x3e/0x1e0
[ 38.428015]
[ 38.428015] which lock already depends on the new lock.
[ 38.428015]
[ 38.429279]
[ 38.429279] the existing dependency chain (in reverse order) is:
[ 38.430445]
[ 38.430445] -> #1 (&cpuset_rwsem){++++}-{0:0}:
[ 38.431371] percpu_down_write+0x42/0x130
[ 38.432085] cpuset_css_online+0x2b/0x2e0
[ 38.432808] online_css+0x24/0x80
[ 38.433411] cgroup_apply_control_enable+0x2fa/0x330
[ 38.434273] cgroup_mkdir+0x396/0x4c0
[ 38.434930] kernfs_iop_mkdir+0x56/0x80
[ 38.435614] vfs_mkdir+0xde/0x190
[ 38.436220] do_mkdirat+0x7d/0xf0
[ 38.436824] __x64_sys_mkdir+0x21/0x30
[ 38.437495] do_syscall_64+0x3a/0x80
[ 38.438145] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 38.439015]
[ 38.439015] -> #0 (cpu_hotplug_lock){++++}-{0:0}:
[ 38.439980] __lock_acquire+0x17f6/0x2260
[ 38.440691] lock_acquire+0x277/0x320
[ 38.441347] cpus_read_lock+0x37/0xc0
[ 38.442011] cpuset_attach+0xc2/0x1e0
[ 38.442671] cgroup_migrate_execute+0x3a6/0x490
[ 38.443461] cgroup_attach_task+0x22c/0x3d0
[ 38.444197] __cgroup1_procs_write.constprop.21+0x10d/0x170
[ 38.445145] cgroup_file_write+0x6f/0x230
[ 38.445860] kernfs_fop_write_iter+0x130/0x1b0
[ 38.446636] new_sync_write+0x120/0x1b0
[ 38.447319] vfs_write+0x359/0x3b0
[ 38.447937] ksys_write+0xa2/0xe0
[ 38.448540] do_syscall_64+0x3a/0x80
[ 38.449183] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 38.450057]
[ 38.450057] other info that might help us debug this:
[ 38.450057]
[ 38.451297] Possible unsafe locking scenario:
[ 38.451297]
[ 38.452218] CPU0 CPU1
[ 38.452935] ---- ----
[ 38.453650] lock(&cpuset_rwsem);
[ 38.454188] lock(cpu_hotplug_lock);
[ 38.455148] lock(&cpuset_rwsem);
[ 38.456069] lock(cpu_hotplug_lock);
[ 38.456645]
[ 38.456645] *** DEADLOCK ***
[ 38.456645]
[ 38.457572] 5 locks held by bash/594:
[ 38.458156] #0: ffff888100d67470 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0xa2/0xe0
[ 38.459392] #1: ffff888100d06290 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0xfe/0x1b0
[ 38.460761] #2: ffffffff82967330 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0xcf/0x1d0
[ 38.462137] #3: ffffffff82967100 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_procs_write_start+0x78/0x240
[ 38.463749] #4: ffffffff8296a5a0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x3e/0x1e0
[ 38.465052]
[ 38.465052] stack backtrace:
[ 38.465747] CPU: 0 PID: 594 Comm: bash Not tainted 5.16.0-rc4+ #13
[ 38.466712] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
[ 38.468507] Call Trace:
[ 38.468900] <TASK>
[ 38.469241] dump_stack_lvl+0x56/0x7b
[ 38.469827] check_noncircular+0x126/0x140
[ 38.470476] ? __lock_acquire+0x17f6/0x2260
[ 38.471136] __lock_acquire+0x17f6/0x2260
[ 38.471772] lock_acquire+0x277/0x320
[ 38.472352] ? cpuset_attach+0xc2/0x1e0
[ 38.472961] cpus_read_lock+0x37/0xc0
[ 38.473550] ? cpuset_attach+0xc2/0x1e0
[ 38.474159] cpuset_attach+0xc2/0x1e0
[ 38.474742] cgroup_migrate_execute+0x3a6/0x490
[ 38.475457] cgroup_attach_task+0x22c/0x3d0
[ 38.476121] ? __cgroup1_procs_write.constprop.21+0x10d/0x170
[ 38.477021] __cgroup1_procs_write.constprop.21+0x10d/0x170
[ 38.477904] cgroup_file_write+0x6f/0x230
[ 38.478540] kernfs_fop_write_iter+0x130/0x1b0
[ 38.479241] new_sync_write+0x120/0x1b0
[ 38.479849] vfs_write+0x359/0x3b0
[ 38.480391] ksys_write+0xa2/0xe0
[ 38.480920] do_syscall_64+0x3a/0x80
[ 38.481488] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 38.482289] RIP: 0033:0x7f229f42b224
[ 38.482857] Code: 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 8d 05 c1 07 2e 00 8b 00 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 45
[ 38.485758] RSP: 002b:00007fffaa3eadd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 38.486937] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f229f42b224
[ 38.488051] RDX: 0000000000000004 RSI: 0000562679dc5410 RDI: 0000000000000001
[ 38.489164] RBP: 0000562679dc5410 R08: 000000000000000a R09: 0000000000000003
[ 38.490282] R10: 000000000000000a R11: 0000000000000246 R12: 00007f229f707760
[ 38.491395] R13: 0000000000000004 R14: 00007f229f7032a0 R15: 00007f229f702760
[ 38.492516] </TASK>
> Cheers,
> Longman
>
> .
* Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()
From: Waiman Long @ 2022-01-17 4:35 UTC (permalink / raw)
To: Zhang Qiao
Cc: lizefan.x, hannes, cgroups, linux-kernel, Michal Koutný, Tejun Heo
On 1/16/22 21:25, Zhang Qiao wrote:
> Hello,
>
> On 2022/1/15 4:33, Waiman Long wrote:
>> On 1/14/22 11:20, Tejun Heo wrote:
>>> (cc'ing Waiman and Michal and quoting whole body)
>>>
>>> Seems sane to me but let's hear what Waiman and Michal think.
>>>
>>> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>>>> Hello everyone,
>>>>
>>>> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>>>> another while also performing a cpu hotplug operation, and got the following call trace.
>>>>
>>>> This may lead to an inconsistency between the affinity of the task and cpuset.cpus of the
>>>> destination cpuset, but the task can still be successfully migrated to the destination cpuset cgroup.
>>>>
>>>> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>>>> doesn't fail, as follows:
>>>>
>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>> index d0e163a02099..2535d23d2c51 100644
>>>> --- a/kernel/cgroup/cpuset.c
>>>> +++ b/kernel/cgroup/cpuset.c
>>>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>> guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>>>>
>>>> cgroup_taskset_for_each(task, css, tset) {
>>>> + cpus_read_lock();
>>>> if (cs != &top_cpuset)
>>>> guarantee_online_cpus(task, cpus_attach);
>>>> else
>>>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>> * fail. TODO: have a better way to handle failure here
>>>> */
>>>> WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>>>> + cpus_read_unlock();
>>>>
>>>>
>>>> Is there a better solution?
>>>>
>>>> Thanks
>> The change looks OK to me. However, we may need to run the full set of regression tests to make sure that lockdep won't complain about a potential deadlock.
>>
> I ran the test with lockdep enabled and got the lockdep warning below,
> so we should take cpu_hotplug_lock first and then take cpuset_rwsem.
>
> thanks,
> Zhang Qiao
>
> [ 38.420372] ======================================================
> [ 38.421339] WARNING: possible circular locking dependency detected
> [ 38.422312] 5.16.0-rc4+ #13 Not tainted
> [ 38.422920] ------------------------------------------------------
> [ 38.423883] bash/594 is trying to acquire lock:
> [ 38.424595] ffffffff8286afc0 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_attach+0xc2/0x1e0
> [ 38.425880]
> [ 38.425880] but task is already holding lock:
> [ 38.426787] ffffffff8296a5a0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x3e/0x1e0
> [ 38.428015]
> [ 38.428015] which lock already depends on the new lock.
> [ 38.428015]
> [ 38.429279]
> [ 38.429279] the existing dependency chain (in reverse order) is:
> [ 38.430445]
> [ 38.430445] -> #1 (&cpuset_rwsem){++++}-{0:0}:
> [ 38.431371] percpu_down_write+0x42/0x130
> [ 38.432085] cpuset_css_online+0x2b/0x2e0
> [ 38.432808] online_css+0x24/0x80
> [ 38.433411] cgroup_apply_control_enable+0x2fa/0x330
> [ 38.434273] cgroup_mkdir+0x396/0x4c0
> [ 38.434930] kernfs_iop_mkdir+0x56/0x80
> [ 38.435614] vfs_mkdir+0xde/0x190
> [ 38.436220] do_mkdirat+0x7d/0xf0
> [ 38.436824] __x64_sys_mkdir+0x21/0x30
> [ 38.437495] do_syscall_64+0x3a/0x80
> [ 38.438145] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 38.439015]
> [ 38.439015] -> #0 (cpu_hotplug_lock){++++}-{0:0}:
> [ 38.439980] __lock_acquire+0x17f6/0x2260
> [ 38.440691] lock_acquire+0x277/0x320
> [ 38.441347] cpus_read_lock+0x37/0xc0
> [ 38.442011] cpuset_attach+0xc2/0x1e0
> [ 38.442671] cgroup_migrate_execute+0x3a6/0x490
> [ 38.443461] cgroup_attach_task+0x22c/0x3d0
> [ 38.444197] __cgroup1_procs_write.constprop.21+0x10d/0x170
> [ 38.445145] cgroup_file_write+0x6f/0x230
> [ 38.445860] kernfs_fop_write_iter+0x130/0x1b0
> [ 38.446636] new_sync_write+0x120/0x1b0
> [ 38.447319] vfs_write+0x359/0x3b0
> [ 38.447937] ksys_write+0xa2/0xe0
> [ 38.448540] do_syscall_64+0x3a/0x80
> [ 38.449183] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 38.450057]
> [ 38.450057] other info that might help us debug this:
> [ 38.450057]
> [ 38.451297] Possible unsafe locking scenario:
> [ 38.451297]
> [ 38.452218] CPU0 CPU1
> [ 38.452935] ---- ----
> [ 38.453650] lock(&cpuset_rwsem);
> [ 38.454188] lock(cpu_hotplug_lock);
> [ 38.455148] lock(&cpuset_rwsem);
> [ 38.456069] lock(cpu_hotplug_lock);
Yes, you need to play around with lock ordering to make sure that
lockdep won't complain.
Cheers,
Longman
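Given that ordering constraint, one possible shape for a fix is to take the hotplug read lock for the whole attach operation, before cpuset_rwsem, rather than inside the task loop. This is only a rough, untested sketch: the exact call sites vary across kernel versions, and it is not presented as the committed upstream change.

```c
/* Sketch only: acquire cpu_hotplug_lock before cpuset_rwsem so the
 * attach path matches the order already used by cpuset_css_online(). */
static void cpuset_attach(struct cgroup_taskset *tset)
{
	/* ... */
	cpus_read_lock();                  /* cpu_hotplug_lock first */
	percpu_down_write(&cpuset_rwsem);  /* then cpuset_rwsem */

	cgroup_taskset_for_each(task, css, tset) {
		/* ... */
		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
	}

	percpu_up_write(&cpuset_rwsem);
	cpus_read_unlock();
}
```

Because no CPU can go offline while cpu_hotplug_lock is read-held, cpus_attach cannot become empty under set_cpus_allowed_ptr(), and lockdep sees the same lock order on every path.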
* Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()
@ 2022-01-17 4:35 ` Waiman Long
0 siblings, 0 replies; 21+ messages in thread
From: Waiman Long @ 2022-01-17 4:35 UTC (permalink / raw)
To: Zhang Qiao
Cc: lizefan.x-EC8Uxl6Npydl57MIdRCFDg, hannes-druUgvl0LCNAfugRpC6u6w,
cgroups-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Michal Koutný,
Tejun Heo
On 1/16/22 21:25, Zhang Qiao wrote:
> hello
>
> 在 2022/1/15 4:33, Waiman Long 写道:
>> On 1/14/22 11:20, Tejun Heo wrote:
>>> (cc'ing Waiman and Michal and quoting whole body)
>>>
>>> Seems sane to me but let's hear what Waiman and Michal think.
>>>
>>> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>>>> Hello everyone
>>>>
>>>> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>>>> another, while I also performed the cpu hotplug operation, and got following calltrace.
>>>>
>>>> This may lead to a inconsistency between the affinity of the task and cpuset.cpus of the
>>>> dest cpuset, but this task can be successfully migrated to the dest cpuset cgroup.
>>>>
>>>> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>>>> doesn't fail, as follows:
>>>>
>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>> index d0e163a02099..2535d23d2c51 100644
>>>> --- a/kernel/cgroup/cpuset.c
>>>> +++ b/kernel/cgroup/cpuset.c
>>>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>> guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>>>>
>>>> cgroup_taskset_for_each(task, css, tset) {
>>>> + cpus_read_lock();
>>>> if (cs != &top_cpuset)
>>>> guarantee_online_cpus(task, cpus_attach);
>>>> else
>>>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>> * fail. TODO: have a better way to handle failure here
>>>> */
>>>> WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>>>> + cpus_read_unlock();
>>>>
>>>>
>>>> Is there a better solution?
>>>>
>>>> Thanks
>> The change looks OK to me. However, we may need to run the full set of regression test to make sure that lockdep won't complain about potential deadlock.
>>
> I run the test with lockdep enabled, and got lockdep warning like that below.
> so we should take the cpu_hotplug_lock first, then take the cpuset_rwsem lock.
>
> thanks,
> Zhang Qiao
>
> [ 38.420372] ======================================================
> [ 38.421339] WARNING: possible circular locking dependency detected
> [ 38.422312] 5.16.0-rc4+ #13 Not tainted
> [ 38.422920] ------------------------------------------------------
> [ 38.423883] bash/594 is trying to acquire lock:
> [ 38.424595] ffffffff8286afc0 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_attach+0xc2/0x1e0
> [ 38.425880]
> [ 38.425880] but task is already holding lock:
> [ 38.426787] ffffffff8296a5a0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x3e/0x1e0
> [ 38.428015]
> [ 38.428015] which lock already depends on the new lock.
> [ 38.428015]
> [ 38.429279]
> [ 38.429279] the existing dependency chain (in reverse order) is:
> [ 38.430445]
> [ 38.430445] -> #1 (&cpuset_rwsem){++++}-{0:0}:
> [ 38.431371] percpu_down_write+0x42/0x130
> [ 38.432085] cpuset_css_online+0x2b/0x2e0
> [ 38.432808] online_css+0x24/0x80
> [ 38.433411] cgroup_apply_control_enable+0x2fa/0x330
> [ 38.434273] cgroup_mkdir+0x396/0x4c0
> [ 38.434930] kernfs_iop_mkdir+0x56/0x80
> [ 38.435614] vfs_mkdir+0xde/0x190
> [ 38.436220] do_mkdirat+0x7d/0xf0
> [ 38.436824] __x64_sys_mkdir+0x21/0x30
> [ 38.437495] do_syscall_64+0x3a/0x80
> [ 38.438145] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 38.439015]
> [ 38.439015] -> #0 (cpu_hotplug_lock){++++}-{0:0}:
> [ 38.439980] __lock_acquire+0x17f6/0x2260
> [ 38.440691] lock_acquire+0x277/0x320
> [ 38.441347] cpus_read_lock+0x37/0xc0
> [ 38.442011] cpuset_attach+0xc2/0x1e0
> [ 38.442671] cgroup_migrate_execute+0x3a6/0x490
> [ 38.443461] cgroup_attach_task+0x22c/0x3d0
> [ 38.444197] __cgroup1_procs_write.constprop.21+0x10d/0x170
> [ 38.445145] cgroup_file_write+0x6f/0x230
> [ 38.445860] kernfs_fop_write_iter+0x130/0x1b0
> [ 38.446636] new_sync_write+0x120/0x1b0
> [ 38.447319] vfs_write+0x359/0x3b0
> [ 38.447937] ksys_write+0xa2/0xe0
> [ 38.448540] do_syscall_64+0x3a/0x80
> [ 38.449183] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 38.450057]
> [ 38.450057] other info that might help us debug this:
> [ 38.450057]
> [ 38.451297] Possible unsafe locking scenario:
> [ 38.451297]
> [ 38.452218] CPU0 CPU1
> [ 38.452935] ---- ----
> [ 38.453650] lock(&cpuset_rwsem);
> [ 38.454188] lock(cpu_hotplug_lock);
> [ 38.455148] lock(&cpuset_rwsem);
> [ 38.456069] lock(cpu_hotplug_lock);
Yes, you need to play around with lock ordering to make sure that
lockdep won't complain.
Cheers,
Longman
* Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()
@ 2022-01-17 6:27 ` Zhang Qiao
0 siblings, 0 replies; 21+ messages in thread
From: Zhang Qiao @ 2022-01-17 6:27 UTC (permalink / raw)
To: Waiman Long
Cc: lizefan.x, hannes, cgroups, linux-kernel, Michal Koutný, Tejun Heo
On 2022/1/17 12:35, Waiman Long wrote:
> On 1/16/22 21:25, Zhang Qiao wrote:
>> hello
>>
>> On 2022/1/15 4:33, Waiman Long wrote:
>>> On 1/14/22 11:20, Tejun Heo wrote:
>>>> (cc'ing Waiman and Michal and quoting whole body)
>>>>
>>>> Seems sane to me but let's hear what Waiman and Michal think.
>>>>
>>>> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>>>>> Hello everyone
>>>>>
>>>>> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>>>>> another while also performing a cpu hotplug operation, and got the following call trace.
>>>>>
>>>>> This may lead to an inconsistency between the affinity of the task and cpuset.cpus of the
>>>>> dest cpuset, but the task is still successfully migrated to the dest cpuset cgroup.
>>>>>
>>>>> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>>>>> doesn't fail, as follows:
>>>>>
>>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>>> index d0e163a02099..2535d23d2c51 100644
>>>>> --- a/kernel/cgroup/cpuset.c
>>>>> +++ b/kernel/cgroup/cpuset.c
>>>>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>>> guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>>>>>
>>>>> cgroup_taskset_for_each(task, css, tset) {
>>>>> + cpus_read_lock();
>>>>> if (cs != &top_cpuset)
>>>>> guarantee_online_cpus(task, cpus_attach);
>>>>> else
>>>>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>>>>> * fail. TODO: have a better way to handle failure here
>>>>> */
>>>>> WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>>>>> + cpus_read_unlock();
>>>>>
>>>>>
>>>>> Is there a better solution?
>>>>>
>>>>> Thanks
>>> The change looks OK to me. However, we may need to run the full set of regression tests to make sure that lockdep won't complain about potential deadlock.
>>>
>> I ran the test with lockdep enabled and got the lockdep warning below,
>> so we should take the cpu_hotplug_lock first and then the cpuset_rwsem lock.
>>
>> thanks,
>> Zhang Qiao
>>
>> [ 38.420372] ======================================================
>> [ 38.421339] WARNING: possible circular locking dependency detected
>> [ 38.422312] 5.16.0-rc4+ #13 Not tainted
>> [ 38.422920] ------------------------------------------------------
>> [ 38.423883] bash/594 is trying to acquire lock:
>> [ 38.424595] ffffffff8286afc0 (cpu_hotplug_lock){++++}-{0:0}, at: cpuset_attach+0xc2/0x1e0
>> [ 38.425880]
>> [ 38.425880] but task is already holding lock:
>> [ 38.426787] ffffffff8296a5a0 (&cpuset_rwsem){++++}-{0:0}, at: cpuset_attach+0x3e/0x1e0
>> [ 38.428015]
>> [ 38.428015] which lock already depends on the new lock.
>> [ 38.428015]
>> [ 38.429279]
>> [ 38.429279] the existing dependency chain (in reverse order) is:
>> [ 38.430445]
>> [ 38.430445] -> #1 (&cpuset_rwsem){++++}-{0:0}:
>> [ 38.431371] percpu_down_write+0x42/0x130
>> [ 38.432085] cpuset_css_online+0x2b/0x2e0
>> [ 38.432808] online_css+0x24/0x80
>> [ 38.433411] cgroup_apply_control_enable+0x2fa/0x330
>> [ 38.434273] cgroup_mkdir+0x396/0x4c0
>> [ 38.434930] kernfs_iop_mkdir+0x56/0x80
>> [ 38.435614] vfs_mkdir+0xde/0x190
>> [ 38.436220] do_mkdirat+0x7d/0xf0
>> [ 38.436824] __x64_sys_mkdir+0x21/0x30
>> [ 38.437495] do_syscall_64+0x3a/0x80
>> [ 38.438145] entry_SYSCALL_64_after_hwframe+0x44/0xae
>> [ 38.439015]
>> [ 38.439015] -> #0 (cpu_hotplug_lock){++++}-{0:0}:
>> [ 38.439980] __lock_acquire+0x17f6/0x2260
>> [ 38.440691] lock_acquire+0x277/0x320
>> [ 38.441347] cpus_read_lock+0x37/0xc0
>> [ 38.442011] cpuset_attach+0xc2/0x1e0
>> [ 38.442671] cgroup_migrate_execute+0x3a6/0x490
>> [ 38.443461] cgroup_attach_task+0x22c/0x3d0
>> [ 38.444197] __cgroup1_procs_write.constprop.21+0x10d/0x170
>> [ 38.445145] cgroup_file_write+0x6f/0x230
>> [ 38.445860] kernfs_fop_write_iter+0x130/0x1b0
>> [ 38.446636] new_sync_write+0x120/0x1b0
>> [ 38.447319] vfs_write+0x359/0x3b0
>> [ 38.447937] ksys_write+0xa2/0xe0
>> [ 38.448540] do_syscall_64+0x3a/0x80
>> [ 38.449183] entry_SYSCALL_64_after_hwframe+0x44/0xae
>> [ 38.450057]
>> [ 38.450057] other info that might help us debug this:
>> [ 38.450057]
>> [ 38.451297] Possible unsafe locking scenario:
>> [ 38.451297]
>> [ 38.452218] CPU0 CPU1
>> [ 38.452935] ---- ----
>> [ 38.453650] lock(&cpuset_rwsem);
>> [ 38.454188] lock(cpu_hotplug_lock);
>> [ 38.455148] lock(&cpuset_rwsem);
>> [ 38.456069] lock(cpu_hotplug_lock);
>
> Yes, you need to play around with lock ordering to make sure that lockdep won't complain.
>
Thank you for taking a look!
If it's OK, I will send a patch.
Thanks,
Zhang Qiao.
> Cheers,
> Longman
>
> .
* Re: [Question] set_cpus_allowed_ptr() call failed at cpuset_attach()
2022-01-14 16:20 ` Tejun Heo
@ 2022-01-17 6:25 ` Zhang Qiao
0 siblings, 0 replies; 21+ messages in thread
From: Zhang Qiao @ 2022-01-17 6:25 UTC (permalink / raw)
To: Tejun Heo
Cc: lizefan.x, hannes, cgroups, linux-kernel, Waiman Long,
Michal Koutný
On 2022/1/15 0:20, Tejun Heo wrote:
> (cc'ing Waiman and Michal and quoting whole body)
>
> Seems sane to me but let's hear what Waiman and Michal think.
>
Thank you for taking a look!
If it's OK, I will send a patch.
Thanks,
Zhang Qiao.
> On Fri, Jan 14, 2022 at 09:15:06AM +0800, Zhang Qiao wrote:
>>
>> Hello everyone
>>
>> I found the following warning log on qemu. I migrated a task from one cpuset cgroup to
>> another while also performing a cpu hotplug operation, and got the following call trace.
>>
>> This may lead to an inconsistency between the affinity of the task and cpuset.cpus of the
>> dest cpuset, but the task is still successfully migrated to the dest cpuset cgroup.
>>
>> Can we use cpus_read_lock()/cpus_read_unlock() to guarantee that set_cpus_allowed_ptr()
>> doesn't fail, as follows:
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index d0e163a02099..2535d23d2c51 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -2265,6 +2265,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>> guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
>>
>> cgroup_taskset_for_each(task, css, tset) {
>> + cpus_read_lock();
>> if (cs != &top_cpuset)
>> guarantee_online_cpus(task, cpus_attach);
>> else
>> @@ -2274,6 +2275,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
>> * fail. TODO: have a better way to handle failure here
>> */
>> WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
>> + cpus_read_unlock();
>>
>>
>> Is there a better solution?
>>
>> Thanks
>>
>> log:
>> [ 43.853794] ------------[ cut here ]------------
>> [ 43.853798] WARNING: CPU: 7 PID: 463 at ../kernel/cgroup/cpuset.c:2279 cpuset_attach+0xee/0x1f0
>> [ 43.853806] Modules linked in:
>> [ 43.853807] CPU: 7 PID: 463 Comm: bash Not tainted 5.16.0-rc4+ #10
>> [ 43.853810] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-prebuilt.qemu.org 04/01/2014
>> [ 43.853811] RIP: 0010:cpuset_attach+0xee/0x1f0
>> [ 43.853814] Code: ff ff 48 85 c0 48 89 c3 74 24 48 81 fd 40 42 54 82 75 96 80 bb 38 07 00 00 6f 48 8b 05 93 b3 55 01 48 89 05 bc 05 bb 01 75 97 <0f> 0b eb b3 48 8b 85 e8 00 00 00 48 85
>> [ 43.853816] RSP: 0018:ffffc90000623c30 EFLAGS: 00010246
>> [ 43.853818] RAX: 0000000000000000 RBX: ffff888101f39c80 RCX: 0000000000000001
>> [ 43.853819] RDX: 0000000000007fff RSI: ffffffff82cd5708 RDI: ffff888101f39c80
>> [ 43.853821] RBP: ffff8881001afe00 R08: 0000000000000000 R09: ffffc90000623d00
>> [ 43.853822] R10: ffffc900000a3de8 R11: 0000000000000001 R12: ffffc90000623cf0
>> [ 43.853823] R13: ffffffff82cd56d0 R14: ffffffff82544240 R15: 0000000000000001
>> [ 43.853824] FS: 00007f012414d740(0000) GS:ffff8882b5bc0000(0000) knlGS:0000000000000000
>> [ 43.853828] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 43.853829] CR2: 000055cfdb27de28 CR3: 00000001020cc000 CR4: 00000000000006e0
>> [ 43.853830] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> [ 43.853831] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>> [ 43.853832] Call Trace:
>> [ 43.853846] <TASK>
>> [ 43.853848] cgroup_migrate_execute+0x319/0x410
>> [ 43.853853] cgroup_attach_task+0x159/0x200
>> [ 43.853857] ? __cgroup1_procs_write.constprop.21+0x10d/0x170
>> [ 43.853858] __cgroup1_procs_write.constprop.21+0x10d/0x170
>> [ 43.853860] cgroup_file_write+0x65/0x160
>> [ 43.853863] kernfs_fop_write_iter+0x12a/0x1a0
>> [ 43.853870] new_sync_write+0x11d/0x1b0
>> [ 43.853877] vfs_write+0x232/0x290
>> [ 43.853880] ksys_write+0x9c/0xd0
>> [ 43.853882] ? fpregs_assert_state_consistent+0x19/0x40
>> [ 43.853886] do_syscall_64+0x3a/0x80
>> [ 43.853896] entry_SYSCALL_64_after_hwframe+0x44/0xae
>> [ 43.853902] RIP: 0033:0x7f012381f224
>> [ 43.853904] Code: 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 8d 05 c1 07 2e 00 8b 00 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 45
>> [ 43.853906] RSP: 002b:00007ffd3f411f28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
>> [ 43.853908] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f012381f224
>> [ 43.853909] RDX: 0000000000000004 RSI: 000055cfdb297a70 RDI: 0000000000000001
>> [ 43.853910] RBP: 000055cfdb297a70 R08: 000000000000000a R09: 0000000000000003
>> [ 43.853911] R10: 000000000000000a R11: 0000000000000246 R12: 00007f0123afb760
>> [ 43.853913] R13: 0000000000000004 R14: 00007f0123af72a0 R15: 00007f0123af6760
>> [ 43.853914] </TASK>
>> [ 43.853915] ---[ end trace 8292bcee7ea90403 ]---
>