* Re: [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug
@ 2015-01-14 12:05 Hillf Danton
  0 siblings, 0 replies; 6+ messages in thread
From: Hillf Danton @ 2015-01-14 12:05 UTC (permalink / raw)
  To: 'Lai Jiangshan', linux-kernel
  Cc: 'Tejun Heo', 'Yasuaki Ishimatsu',
	'Gu Zheng', 'tangchen',
	'Hiroyuki KAMEZAWA', 钟伟(鎏光)

> Hi, All
> 
> These patches are un-changelogged, un-compiled, un-booted, and un-tested;
> they are just shit, and I even hope they go unsent or get blocked.
> 
Good evening, Lai

Dunno why you are so fired up (it looks very high, really).

Perhaps you could simply turn off the computer and take a vacation?

I have two cups of whisky (Three Monkeys); feel free to call me if you're ready.

Hillf


* Re: [PATCH 1/2] workqueue: update numa affinity info at node hotplug
@ 2015-01-14  2:47 Lai Jiangshan
  2015-01-14  8:54 ` [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug Lai Jiangshan
  0 siblings, 1 reply; 6+ messages in thread
From: Lai Jiangshan @ 2015-01-14  2:47 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Kamezawa Hiroyuki, linux-kernel, "Ishimatsu,
	Yasuaki/石松 靖章",
	Tang Chen, guz.fnst

On 01/13/2015 11:22 PM, Tejun Heo wrote:
> Hello,
> 
> On Tue, Jan 13, 2015 at 03:19:09PM +0800, Lai Jiangshan wrote:
>> The mapping of the *online* CPUs to nodes is already maintained by the numa code.
>>
>> What the workqueue needs is a special mapping:
>> 	the mapping of the *possible* CPUs to nodes.
>>
>> But this mapping (if the numa code maintains it) is troublesome:
>> 	"possible" implies the mapping is stable/constant/immutable, and that is
>> 	hard to guarantee in the numa code.
>>
>> If mutability of this mapping is acceptable, we would just move 20~40 LOC
>> from workqueue to the numa code, while all the other complexity around it
>> stays in workqueue.c.
> 
> Make numa code maintain the mapping to the best of its knowledge and
> invoke notification callbacks when it changes.  

The best of its knowledge is the set of physically online nodes and CPUs.
The cpu_present_mask can represent this knowledge, but it lacks both the
per-node cpu_present_masks and the notification callbacks.
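
For illustration, the missing piece described here (per-node present-CPU masks
plus a change notification) could look roughly like the sketch below. The names
numa_present_cpumask, numa_node_notify_register() and node_cpumask_update() are
invented for this example; only the primitives used (for_each_present_cpu(),
cpu_to_node(), the cpumask helpers and the blocking-notifier API) are existing
kernel interfaces.

#include <linux/cpumask.h>
#include <linux/nodemask.h>
#include <linux/notifier.h>
#include <linux/topology.h>

/* Hypothetical per-node present-CPU masks, rebuilt on hotplug. */
static struct cpumask numa_present_cpumask[MAX_NUMNODES];
static BLOCKING_NOTIFIER_HEAD(numa_node_notify_chain);

/* Users (e.g. workqueue) would register here to learn about changes. */
int numa_node_notify_register(struct notifier_block *nb)
{
	return blocking_notifier_chain_register(&numa_node_notify_chain, nb);
}

/* Called from CPU/node hotplug paths to refresh the masks and notify users. */
static void node_cpumask_update(void)
{
	int cpu, node;

	for (node = 0; node < MAX_NUMNODES; node++)
		cpumask_clear(&numa_present_cpumask[node]);

	for_each_present_cpu(cpu)
		cpumask_set_cpu(cpu, &numa_present_cpumask[cpu_to_node(cpu)]);

	blocking_notifier_call_chain(&numa_node_notify_chain, 0, NULL);
}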

> Even if that involves slightly more code, that's the right thing to do at this point.

Right, but currently the workqueue would be the only user, and I don't know
whom to ask to do it, so I may keep it in workqueue.c.
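
For context, workqueue.c already builds such a possible-CPU-to-node table for
itself at boot (wq_numa_init() filling wq_numa_possible_cpumask in mainline at
the time). A simplified sketch of that, with error handling, the NUMA_NO_NODE
check and the usual workqueue.c includes left out:

static cpumask_var_t *wq_numa_possible_cpumask;	/* possible CPUs of each node */

static void __init wq_numa_init(void)
{
	cpumask_var_t *tbl;
	int node, cpu;

	/* One cpumask per node, allocated on (or near) that node. */
	tbl = kcalloc(nr_node_ids, sizeof(tbl[0]), GFP_KERNEL);
	BUG_ON(!tbl);
	for_each_node(node)
		BUG_ON(!zalloc_cpumask_var_node(&tbl[node], GFP_KERNEL,
				node_online(node) ? node : NUMA_NO_NODE));

	/* Record every possible CPU under the node it maps to today. */
	for_each_possible_cpu(cpu)
		cpumask_set_cpu(cpu, tbl[cpu_to_node(cpu)]);

	wq_numa_possible_cpumask = tbl;
}

If cpu_to_node() for a possible CPU can change after this runs (the node-hotplug
case this thread is about), the table goes stale, which is exactly the
mapping-stability problem discussed above.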

> This puts the logic, which is complicated by the fact that the mapping may
> change, where the change is caused rather than in some random unrelated place.


> It'd be
> awesome if somebody more familiar with the numa side can chime in and
> explain why this mapping change can't be avoided.

I'm also looking for someone to answer that.

> 
> Thanks.
> 



Thread overview: 6+ messages
2015-01-14 12:05 [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug Hillf Danton
2015-01-14  2:47 [PATCH 1/2] workqueue: update numa affinity info at node hotplug Lai Jiangshan
2015-01-14  8:54 ` [RFC PATCH 0/2 shit_A shit_B] workqueue: fix wq_numa bug Lai Jiangshan
2015-01-16  5:22   ` Yasuaki Ishimatsu
2015-01-16  8:04     ` Lai Jiangshan
2015-01-23  6:13   ` Izumi, Taku
2015-01-23  8:18     ` Lai Jiangshan
