* Re: [RFC PATCH] workqueue: cut wq_rr_cpu_last
[not found] <20201203102841.2100-1-hdanton@sina.com>
@ 2020-12-03 15:36 ` Tejun Heo
From: Tejun Heo @ 2020-12-03 15:36 UTC
To: Hillf Danton; +Cc: NeilBrown, LKML
Hello,
On Thu, Dec 03, 2020 at 06:28:41PM +0800, Hillf Danton wrote:
> +	new_cpu = cpumask_any_and_distribute(wq_unbound_cpumask, cpu_online_mask);
> +	if (new_cpu < nr_cpu_ids)
> +		return new_cpu;
> +	else
> +		return cpu;
> }
>
> static void __queue_work(int cpu, struct workqueue_struct *wq,
> @@ -1554,7 +1546,7 @@ static int workqueue_select_cpu_near(int
> 		return cpu;
>
> 	/* Use "random" otherwise know as "first" online CPU of node */
> -	cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
> +	cpu = cpumask_any_and_distribute(cpumask_of_node(node), cpu_online_mask);
This looks generally okay, but I think there's a real risk of different
cpumasks interfering with CPU selection, because both call sites now share
the single global cursor. For example, imagine a CPU issuing work items to
two unbound workqueues consecutively, one NUMA-bound and the other not. With
the above change, the NUMA-bound selection leaves the cursor inside that
node, which will basically confine the !numa one to the same NUMA node.
I think the right thing to do here is expanding cpumask_any_and_distribute()
so that each caller can provide its own cursor, similar to what we do with
ratelimits.
Thanks.
--
tejun