* Re: [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug
@ 2015-01-14 12:05 Hillf Danton
0 siblings, 0 replies; 6+ messages in thread
From: Hillf Danton @ 2015-01-14 12:05 UTC (permalink / raw)
To: 'Lai Jiangshan', linux-kernel
Cc: 'Tejun Heo', 'Yasuaki Ishimatsu',
'Gu Zheng', 'tangchen',
'Hiroyuki KAMEZAWA', 钟伟(鎏光)
> Hi, All
>
> These patches are un-changelogged, un-compiled, un-booted, and un-tested;
> they are just shit, and I almost hope they go unsent or get blocked.
>
Good evening, Lai
Dunno why you are so fired up (it looks very high, really).
Perhaps you can simply turn off the computer and take a vacation?
I have two cups of whisky (Three Monkeys), so
feel free to call me when you are ready.
Hillf
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH 1/2] workqueue: update numa affinity info at node hotplug
@ 2015-01-14 2:47 Lai Jiangshan
2015-01-14 8:54 ` [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug Lai Jiangshan
0 siblings, 1 reply; 6+ messages in thread
From: Lai Jiangshan @ 2015-01-14 2:47 UTC (permalink / raw)
To: Tejun Heo
Cc: Kamezawa Hiroyuki, linux-kernel, "Ishimatsu,
Yasuaki/石松 靖章",
Tang Chen, guz.fnst
On 01/13/2015 11:22 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Jan 13, 2015 at 03:19:09PM +0800, Lai Jiangshan wrote:
>> The mapping of the *online* CPUs to nodes is already maintained by the NUMA code.
>>
>> What the workqueue needs is a special mapping:
>> the mapping of the *possible* CPUs to nodes.
>>
>> But this mapping (if the NUMA code maintained it) would be trouble:
>> "possible" implies that the mapping is stable/constant/immutable, which is hard
>> to ensure in the NUMA code.
>>
>> If mutability of this mapping is acceptable, we would just move 20~40 LOC
>> from workqueue to the NUMA code; all the other complexity
>> around it would still be in workqueue.c.
>
> Make numa code maintain the mapping to the best of its knowledge and
> invoke notification callbacks when it changes.
The best of its knowledge is the physically onlined nodes and CPUs.
cpu_present_mask can represent this knowledge, but it lacks
the per-node cpu_present_mask and the notification callbacks.
> Even if that involves slightly more code, that's the right thing to do at this point.
Right, but currently the workqueue would be the only user, and I don't know
whom to ask to do it, so I may keep it in workqueue.c.
> This puts the logic, which is complicated by the fact that the mapping may
> change, where the change is caused rather than in some random unrelated place.
> It'd be awesome if somebody more familiar with the NUMA side could chime in and
> explain why this mapping change can't be avoided.
I'm also looking for someone to answer that.
>
> Thanks.
>
^ permalink raw reply [flat|nested] 6+ messages in thread
* [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug
2015-01-14 2:47 [PATCH 1/2] workqueue: update numa affinity info at node hotplug Lai Jiangshan
@ 2015-01-14 8:54 ` Lai Jiangshan
2015-01-16 5:22 ` Yasuaki Ishimatsu
2015-01-23 6:13 ` Izumi, Taku
0 siblings, 2 replies; 6+ messages in thread
From: Lai Jiangshan @ 2015-01-14 8:54 UTC (permalink / raw)
To: linux-kernel
Cc: Lai Jiangshan, Tejun Heo, Yasuaki Ishimatsu, Gu, Zheng, tangchen,
Hiroyuki KAMEZAWA
Hi, All
These patches are un-changelogged, un-compiled, un-booted, and un-tested;
they are just shit, and I almost hope they go unsent or get blocked.
The patches include two -solutions-:
Shit_A:
workqueue: reset pool->node and unhash the pool when the node is
offline
update wq_numa when cpu_present_mask changed
kernel/workqueue.c | 107 +++++++++++++++++++++++++++++++++++++++++------------
1 file changed, 84 insertions(+), 23 deletions(-)
Shit_B:
workqueue: reset pool->node and unhash the pool when the node is
offline
workqueue: remove wq_numa_possible_cpumask
workqueue: directly update attrs of pools when cpu hot[un]plug
kernel/workqueue.c | 135 +++++++++++++++++++++++++++++++++++++++--------------
1 file changed, 101 insertions(+), 34 deletions(-)
Patch 1 of both solutions is the same: reset pool->node and unhash the pool.
It was suggested by TJ, and I found it a good leading step toward fixing the bug.
The other patches handle wq_numa_possible_cpumask; this is where the solutions
diverge.
Solution_A uses present_mask rather than possible_cpumask. It adds
wq_numa_notify_cpu_present_set/cleared() for notifications of
changes to cpu_present_mask. But these notifications do not exist
right now, so I faked one (wq_numa_check_present_cpumask_changes())
to imitate them. I hope the memory-hotplug people add a real one.
Solution_B uses online_mask rather than possible_cpumask.
This solution removes more of the coupling between the NUMA code and the
workqueue; it just depends on cpumask_of_node(node).
Patch 2 of Solution_B removes wq_numa_possible_cpumask and adds
overhead on CPU hot[un]plug; patch 3 reduces this overhead.
Thanks,
Lai
Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: "Gu, Zheng" <guz.fnst@cn.fujitsu.com>
Cc: tangchen <tangchen@cn.fujitsu.com>
Cc: Hiroyuki KAMEZAWA <kamezawa.hiroyu@jp.fujitsu.com>
--
2.1.0
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug
2015-01-14 8:54 ` [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug Lai Jiangshan
@ 2015-01-16 5:22 ` Yasuaki Ishimatsu
2015-01-16 8:04 ` Lai Jiangshan
2015-01-23 6:13 ` Izumi, Taku
1 sibling, 1 reply; 6+ messages in thread
From: Yasuaki Ishimatsu @ 2015-01-16 5:22 UTC (permalink / raw)
To: Lai Jiangshan
Cc: linux-kernel, Tejun Heo, Gu, Zheng, tangchen, Hiroyuki KAMEZAWA
Hi Lai,
Thank you for posting the patch set.
I'll try it next Monday, so please wait a while.
Thanks,
Yasuaki Ishimatsu
(2015/01/14 17:54), Lai Jiangshan wrote:
> Hi, All
>
> These patches are un-changelogged, un-compiled, un-booted, and un-tested;
> they are just shit, and I almost hope they go unsent or get blocked.
>
> The patches include two -solutions-:
>
> Shit_A:
> workqueue: reset pool->node and unhash the pool when the node is
> offline
> update wq_numa when cpu_present_mask changed
>
> kernel/workqueue.c | 107 +++++++++++++++++++++++++++++++++++++++++------------
> 1 file changed, 84 insertions(+), 23 deletions(-)
>
>
> Shit_B:
> workqueue: reset pool->node and unhash the pool when the node is
> offline
> workqueue: remove wq_numa_possible_cpumask
> workqueue: directly update attrs of pools when cpu hot[un]plug
>
> kernel/workqueue.c | 135 +++++++++++++++++++++++++++++++++++++++--------------
> 1 file changed, 101 insertions(+), 34 deletions(-)
>
>
> Patch 1 of both solutions is the same: reset pool->node and unhash the pool.
> It was suggested by TJ, and I found it a good leading step toward fixing the bug.
>
> The other patches handle wq_numa_possible_cpumask; this is where the solutions
> diverge.
>
> Solution_A uses present_mask rather than possible_cpumask. It adds
> wq_numa_notify_cpu_present_set/cleared() for notifications of
> changes to cpu_present_mask. But these notifications do not exist
> right now, so I faked one (wq_numa_check_present_cpumask_changes())
> to imitate them. I hope the memory-hotplug people add a real one.
>
> Solution_B uses online_mask rather than possible_cpumask.
> This solution removes more of the coupling between the NUMA code and the
> workqueue; it just depends on cpumask_of_node(node).
>
> Patch 2 of Solution_B removes wq_numa_possible_cpumask and adds
> overhead on CPU hot[un]plug; patch 3 reduces this overhead.
>
> Thanks,
> Lai
>
>
> Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
> Cc: "Gu, Zheng" <guz.fnst@cn.fujitsu.com>
> Cc: tangchen <tangchen@cn.fujitsu.com>
> Cc: Hiroyuki KAMEZAWA <kamezawa.hiroyu@jp.fujitsu.com>
>
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug
2015-01-16 5:22 ` Yasuaki Ishimatsu
@ 2015-01-16 8:04 ` Lai Jiangshan
0 siblings, 0 replies; 6+ messages in thread
From: Lai Jiangshan @ 2015-01-16 8:04 UTC (permalink / raw)
To: Yasuaki Ishimatsu
Cc: linux-kernel, Tejun Heo, Gu, Zheng, tangchen, Hiroyuki KAMEZAWA
On 01/16/2015 01:22 PM, Yasuaki Ishimatsu wrote:
> Hi Lai,
>
> Thank you for posting the patch set.
>
> I'll try it next Monday, so please wait a while.
>
I think it is just a waste to test before the maintainer makes a decision.
(Discussions/ideas are welcome.)
Before TJ's decision, maybe you can do this first:
"Make numa code maintain the mapping to the best of its knowledge and
invoke notification callbacks when it changes."
Thanks,
Lai
> Thanks,
> Yasuaki Ishimatsu
>
>
> (2015/01/14 17:54), Lai Jiangshan wrote:
>> Hi, All
>>
>> These patches are un-changelogged, un-compiled, un-booted, and un-tested;
>> they are just shit, and I almost hope they go unsent or get blocked.
>>
>> The patches include two -solutions-:
>>
>> Shit_A:
>> workqueue: reset pool->node and unhash the pool when the node is
>> offline
>> update wq_numa when cpu_present_mask changed
>>
>> kernel/workqueue.c | 107 +++++++++++++++++++++++++++++++++++++++++------------
>> 1 file changed, 84 insertions(+), 23 deletions(-)
>>
>>
>> Shit_B:
>> workqueue: reset pool->node and unhash the pool when the node is
>> offline
>> workqueue: remove wq_numa_possible_cpumask
>> workqueue: directly update attrs of pools when cpu hot[un]plug
>>
>> kernel/workqueue.c | 135 +++++++++++++++++++++++++++++++++++++++--------------
>> 1 file changed, 101 insertions(+), 34 deletions(-)
>>
>>
>> Patch 1 of both solutions is the same: reset pool->node and unhash the pool.
>> It was suggested by TJ, and I found it a good leading step toward fixing the bug.
>>
>> The other patches handle wq_numa_possible_cpumask; this is where the solutions
>> diverge.
>>
>> Solution_A uses present_mask rather than possible_cpumask. It adds
>> wq_numa_notify_cpu_present_set/cleared() for notifications of
>> changes to cpu_present_mask. But these notifications do not exist
>> right now, so I faked one (wq_numa_check_present_cpumask_changes())
>> to imitate them. I hope the memory-hotplug people add a real one.
>>
>> Solution_B uses online_mask rather than possible_cpumask.
>> This solution removes more of the coupling between the NUMA code and the
>> workqueue; it just depends on cpumask_of_node(node).
>>
>> Patch 2 of Solution_B removes wq_numa_possible_cpumask and adds
>> overhead on CPU hot[un]plug; patch 3 reduces this overhead.
>>
>> Thanks,
>> Lai
>>
>>
>> Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
>> Cc: Tejun Heo <tj@kernel.org>
>> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
>> Cc: "Gu, Zheng" <guz.fnst@cn.fujitsu.com>
>> Cc: tangchen <tangchen@cn.fujitsu.com>
>> Cc: Hiroyuki KAMEZAWA <kamezawa.hiroyu@jp.fujitsu.com>
>>
>
>
>
^ permalink raw reply [flat|nested] 6+ messages in thread
* RE: [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug
2015-01-14 8:54 ` [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug Lai Jiangshan
2015-01-16 5:22 ` Yasuaki Ishimatsu
@ 2015-01-23 6:13 ` Izumi, Taku
2015-01-23 8:18 ` Lai Jiangshan
1 sibling, 1 reply; 6+ messages in thread
From: Izumi, Taku @ 2015-01-23 6:13 UTC (permalink / raw)
To: Lai, Jiangshan, linux-kernel
Cc: Lai, Jiangshan, Tejun Heo, Ishimatsu, Yasuaki, Gu, Zheng, Tang,
Chen, Kamezawa, Hiroyuki
> These patches are un-changelogged, un-compiled, un-booted, and un-tested;
> they are just shit, and I almost hope they go unsent or get blocked.
>
> The patches include two -solutions-:
>
> Shit_A:
> workqueue: reset pool->node and unhash the pool when the node is
> offline
> update wq_numa when cpu_present_mask changed
>
> kernel/workqueue.c | 107 +++++++++++++++++++++++++++++++++++++++++------------
> 1 file changed, 84 insertions(+), 23 deletions(-)
>
>
> Shit_B:
> workqueue: reset pool->node and unhash the pool when the node is
> offline
> workqueue: remove wq_numa_possible_cpumask
> workqueue: directly update attrs of pools when cpu hot[un]plug
>
> kernel/workqueue.c | 135 +++++++++++++++++++++++++++++++++++++++--------------
> 1 file changed, 101 insertions(+), 34 deletions(-)
>
I tried your patch sets.
linux-3.18.3 + Shit_A:
Builds OK.
I tried to reproduce the problem that Ishimatsu had reported, and it no longer
occurs. It seems that your patch fixes the problem.
linux-3.18.3 + Shit_B:
Builds OK, but I encountered a kernel panic at boot time.
[ 0.189000] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
[ 0.189000] IP: [<ffffffff8131ef96>] __list_add+0x16/0xc0
[ 0.189000] PGD 0
[ 0.189000] Oops: 0000 [#1] SMP
[ 0.189000] Modules linked in:
[ 0.189000] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.18.3+ #3
[ 0.189000] Hardware name: FUJITSU PRIMEQUEST2800E/SB, BIOS PRIMEQUEST 2000 Series BIOS Version 01.81 12/03/2014
[ 0.189000] task: ffff880869678000 ti: ffff880869664000 task.ti: ffff880869664000
[ 0.189000] RIP: 0010:[<ffffffff8131ef96>] [<ffffffff8131ef96>] __list_add+0x16/0xc0
[ 0.189000] RSP: 0000:ffff880869667be8 EFLAGS: 00010296
[ 0.189000] RAX: ffff88087f83cda8 RBX: ffff88087f83cd80 RCX: 0000000000000000
[ 0.189000] RDX: 0000000000000000 RSI: ffff88086912bb98 RDI: ffff88087f83cd80
[ 0.189000] RBP: ffff880869667c08 R08: 0000000000000000 R09: ffff88087f807480
[ 0.189000] R10: ffffffff810911b6 R11: ffffffff810956ac R12: 0000000000000000
[ 0.189000] R13: ffff88086912bb98 R14: 0000000000000400 R15: 0000000000000400
[ 0.189000] FS: 0000000000000000(0000) GS:ffff88087fc00000(0000) knlGS:0000000000000000
[ 0.189000] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.189000] CR2: 0000000000000008 CR3: 0000000001998000 CR4: 00000000001407f0
[ 0.189000] Stack:
[ 0.189000] 000000000000000a ffff88086912b800 ffff88087f83cd00 ffff88087f80c000
[ 0.189000] ffff880869667c48 ffffffff810912c8 ffff880869667c28 ffff88087f803f00
[ 0.189000] 00000000fffffff4 ffff88086964b760 ffff88086964b6a0 ffff88086964b740
[ 0.189000] Call Trace:
[ 0.189000] [<ffffffff810912c8>] alloc_unbound_pwq+0x298/0x3b0
[ 0.189000] [<ffffffff81091ce8>] apply_workqueue_attrs+0x158/0x4c0
[ 0.189000] [<ffffffff81092424>] __alloc_workqueue_key+0x174/0x5b0
[ 0.189000] [<ffffffff813052a6>] ? alloc_cpumask_var_node+0x56/0x80
[ 0.189000] [<ffffffff81b21573>] init_workqueues+0x33d/0x40f
[ 0.189000] [<ffffffff81b21236>] ? ftrace_define_fields_workqueue_execute_start+0x6a/0x6a
[ 0.189000] [<ffffffff81002144>] do_one_initcall+0xd4/0x210
[ 0.189000] [<ffffffff81b12f4d>] ? native_smp_prepare_cpus+0x34d/0x352
[ 0.189000] [<ffffffff81b0026d>] kernel_init_freeable+0xf5/0x23c
[ 0.189000] [<ffffffff81653370>] ? rest_init+0x80/0x80
[ 0.189000] [<ffffffff8165337e>] kernel_init+0xe/0xf0
[ 0.189000] [<ffffffff8166bcfc>] ret_from_fork+0x7c/0xb0
[ 0.189000] [<ffffffff81653370>] ? rest_init+0x80/0x80
[ 0.189000] Code: ff b8 f4 ff ff ff e9 3b ff ff ff b8 f4 ff ff ff e9 31 ff ff ff 55 48 89 e5 41 55 49 89 f5 41 54 49 89 d4 53 48 89 fb 48 83 ec 08 <4c> 8b 42 08 49 39 f0 75 2e 4d 8b 45 00 4d 39 c4 75 6c 4c 39 e3
[ 0.189000] RIP [<ffffffff8131ef96>] __list_add+0x16/0xc0
[ 0.189000] RSP <ffff880869667be8>
[ 0.189000] CR2: 0000000000000008
[ 0.189000] ---[ end trace 58feee6875cf67cf ]---
[ 0.189000] Kernel panic - not syncing: Fatal exception
[ 0.189000] ---[ end Kernel panic - not syncing: Fatal exception
Sincerely,
Taku Izumi
> Patch 1 of both solutions is the same: reset pool->node and unhash the pool.
> It was suggested by TJ, and I found it a good leading step toward fixing the bug.
>
> The other patches handle wq_numa_possible_cpumask; this is where the solutions
> diverge.
>
> Solution_A uses present_mask rather than possible_cpumask. It adds
> wq_numa_notify_cpu_present_set/cleared() for notifications of
> changes to cpu_present_mask. But these notifications do not exist
> right now, so I faked one (wq_numa_check_present_cpumask_changes())
> to imitate them. I hope the memory-hotplug people add a real one.
>
> Solution_B uses online_mask rather than possible_cpumask.
> This solution removes more of the coupling between the NUMA code and the
> workqueue; it just depends on cpumask_of_node(node).
>
> Patch 2 of Solution_B removes wq_numa_possible_cpumask and adds
> overhead on CPU hot[un]plug; patch 3 reduces this overhead.
>
> Thanks,
> Lai
>
>
> Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
> Cc: "Gu, Zheng" <guz.fnst@cn.fujitsu.com>
> Cc: tangchen <tangchen@cn.fujitsu.com>
> Cc: Hiroyuki KAMEZAWA <kamezawa.hiroyu@jp.fujitsu.com>
> --
> 2.1.0
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug
2015-01-23 6:13 ` Izumi, Taku
@ 2015-01-23 8:18 ` Lai Jiangshan
0 siblings, 0 replies; 6+ messages in thread
From: Lai Jiangshan @ 2015-01-23 8:18 UTC (permalink / raw)
To: "Izumi, Taku/泉 拓", linux-kernel
Cc: Tejun Heo, "Ishimatsu,
Yasuaki/石松 靖章",
"Gu, Zheng/顾 政",
"Tang, Chen/汤 晨",
"Kamezawa, Hiroyuki/亀澤 寛之"
On 01/23/2015 02:13 PM, Izumi, Taku/泉 拓 wrote:
>
>> These patches are un-changelogged, un-compiled, un-booted, and un-tested;
>> they are just shit, and I almost hope they go unsent or get blocked.
>>
>> The patches include two -solutions-:
>>
>> Shit_A:
>> workqueue: reset pool->node and unhash the pool when the node is
>> offline
>> update wq_numa when cpu_present_mask changed
>>
>> kernel/workqueue.c | 107 +++++++++++++++++++++++++++++++++++++++++------------
>> 1 file changed, 84 insertions(+), 23 deletions(-)
>>
>>
>> Shit_B:
>> workqueue: reset pool->node and unhash the pool when the node is
>> offline
>> workqueue: remove wq_numa_possible_cpumask
>> workqueue: directly update attrs of pools when cpu hot[un]plug
>>
>> kernel/workqueue.c | 135 +++++++++++++++++++++++++++++++++++++++--------------
>> 1 file changed, 101 insertions(+), 34 deletions(-)
>>
>
> I tried your patch sets.
> linux-3.18.3 + Shit_A:
>
> Builds OK.
> I tried to reproduce the problem that Ishimatsu had reported, and it no longer
> occurs. It seems that your patch fixes the problem.
>
> linux-3.18.3 + Shit_B:
>
> Builds OK, but I encountered a kernel panic at boot time.
I forgot to initialize pool->unbound_pwqs.
Even so, I prefer Solution_B.
>
> [ 0.189000] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> [ 0.189000] IP: [<ffffffff8131ef96>] __list_add+0x16/0xc0
> [ 0.189000] PGD 0
> [ 0.189000] Oops: 0000 [#1] SMP
> [ 0.189000] Modules linked in:
> [ 0.189000] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.18.3+ #3
> [ 0.189000] Hardware name: FUJITSU PRIMEQUEST2800E/SB, BIOS PRIMEQUEST 2000 Series BIOS Version 01.81 12/03/2014
> [ 0.189000] task: ffff880869678000 ti: ffff880869664000 task.ti: ffff880869664000
> [ 0.189000] RIP: 0010:[<ffffffff8131ef96>] [<ffffffff8131ef96>] __list_add+0x16/0xc0
> [ 0.189000] RSP: 0000:ffff880869667be8 EFLAGS: 00010296
> [ 0.189000] RAX: ffff88087f83cda8 RBX: ffff88087f83cd80 RCX: 0000000000000000
> [ 0.189000] RDX: 0000000000000000 RSI: ffff88086912bb98 RDI: ffff88087f83cd80
> [ 0.189000] RBP: ffff880869667c08 R08: 0000000000000000 R09: ffff88087f807480
> [ 0.189000] R10: ffffffff810911b6 R11: ffffffff810956ac R12: 0000000000000000
> [ 0.189000] R13: ffff88086912bb98 R14: 0000000000000400 R15: 0000000000000400
> [ 0.189000] FS: 0000000000000000(0000) GS:ffff88087fc00000(0000) knlGS:0000000000000000
> [ 0.189000] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 0.189000] CR2: 0000000000000008 CR3: 0000000001998000 CR4: 00000000001407f0
> [ 0.189000] Stack:
> [ 0.189000] 000000000000000a ffff88086912b800 ffff88087f83cd00 ffff88087f80c000
> [ 0.189000] ffff880869667c48 ffffffff810912c8 ffff880869667c28 ffff88087f803f00
> [ 0.189000] 00000000fffffff4 ffff88086964b760 ffff88086964b6a0 ffff88086964b740
> [ 0.189000] Call Trace:
> [ 0.189000] [<ffffffff810912c8>] alloc_unbound_pwq+0x298/0x3b0
> [ 0.189000] [<ffffffff81091ce8>] apply_workqueue_attrs+0x158/0x4c0
> [ 0.189000] [<ffffffff81092424>] __alloc_workqueue_key+0x174/0x5b0
> [ 0.189000] [<ffffffff813052a6>] ? alloc_cpumask_var_node+0x56/0x80
> [ 0.189000] [<ffffffff81b21573>] init_workqueues+0x33d/0x40f
> [ 0.189000] [<ffffffff81b21236>] ? ftrace_define_fields_workqueue_execute_start+0x6a/0x6a
> [ 0.189000] [<ffffffff81002144>] do_one_initcall+0xd4/0x210
> [ 0.189000] [<ffffffff81b12f4d>] ? native_smp_prepare_cpus+0x34d/0x352
> [ 0.189000] [<ffffffff81b0026d>] kernel_init_freeable+0xf5/0x23c
> [ 0.189000] [<ffffffff81653370>] ? rest_init+0x80/0x80
> [ 0.189000] [<ffffffff8165337e>] kernel_init+0xe/0xf0
> [ 0.189000] [<ffffffff8166bcfc>] ret_from_fork+0x7c/0xb0
> [ 0.189000] [<ffffffff81653370>] ? rest_init+0x80/0x80
> [ 0.189000] Code: ff b8 f4 ff ff ff e9 3b ff ff ff b8 f4 ff ff ff e9 31 ff ff ff 55 48 89 e5 41 55 49 89 f5 41 54 49 89 d4 53 48 89 fb 48 83 ec 08 <4c> 8b 42 08 49 39 f0 75 2e 4d 8b 45 00 4d 39 c4 75 6c 4c 39 e3
> [ 0.189000] RIP [<ffffffff8131ef96>] __list_add+0x16/0xc0
> [ 0.189000] RSP <ffff880869667be8>
> [ 0.189000] CR2: 0000000000000008
> [ 0.189000] ---[ end trace 58feee6875cf67cf ]---
> [ 0.189000] Kernel panic - not syncing: Fatal exception
> [ 0.189000] ---[ end Kernel panic - not syncing: Fatal exception
>
>
> Sincerely,
> Taku Izumi
>
>
>> Patch 1 of both solutions is the same: reset pool->node and unhash the pool.
>> It was suggested by TJ, and I found it a good leading step toward fixing the bug.
>>
>> The other patches handle wq_numa_possible_cpumask; this is where the solutions
>> diverge.
>>
>> Solution_A uses present_mask rather than possible_cpumask. It adds
>> wq_numa_notify_cpu_present_set/cleared() for notifications of
>> changes to cpu_present_mask. But these notifications do not exist
>> right now, so I faked one (wq_numa_check_present_cpumask_changes())
>> to imitate them. I hope the memory-hotplug people add a real one.
>>
>> Solution_B uses online_mask rather than possible_cpumask.
>> This solution removes more of the coupling between the NUMA code and the
>> workqueue; it just depends on cpumask_of_node(node).
>>
>> Patch 2 of Solution_B removes wq_numa_possible_cpumask and adds
>> overhead on CPU hot[un]plug; patch 3 reduces this overhead.
>>
>> Thanks,
>> Lai
>>
>>
>> Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
>> Cc: Tejun Heo <tj@kernel.org>
>> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
>> Cc: "Gu, Zheng" <guz.fnst@cn.fujitsu.com>
>> Cc: tangchen <tangchen@cn.fujitsu.com>
>> Cc: Hiroyuki KAMEZAWA <kamezawa.hiroyu@jp.fujitsu.com>
>> --
>> 2.1.0
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>> Please read the FAQ at http://www.tux.org/lkml/
^ permalink raw reply [flat|nested] 6+ messages in thread
end of thread, other threads:[~2015-01-23 8:17 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-01-14 12:05 [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug Hillf Danton
-- strict thread matches above, loose matches on Subject: below --
2015-01-14 2:47 [PATCH 1/2] workqueue: update numa affinity info at node hotplug Lai Jiangshan
2015-01-14 8:54 ` [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug Lai Jiangshan
2015-01-16 5:22 ` Yasuaki Ishimatsu
2015-01-16 8:04 ` Lai Jiangshan
2015-01-23 6:13 ` Izumi, Taku
2015-01-23 8:18 ` Lai Jiangshan