From: Valentin Schneider <valentin.schneider@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	linux-kernel@vger.kernel.org, Qian Cai <cai@redhat.com>,
	Vincent Donnefort <vincent.donnefort@arm.com>,
	Dexuan Cui <decui@microsoft.com>,
	Lai Jiangshan <laijs@linux.alibaba.com>,
	Paul McKenney <paulmck@kernel.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Jens Axboe <axboe@kernel.dk>
Subject: Re: [PATCH -tip V3 0/8] workqueue: break affinity initiatively
Date: Mon, 11 Jan 2021 22:47:39 +0000
Message-ID: <jhj4kjn146s.mognet@arm.com>
In-Reply-To: <X/yzrJw4UbQsK3KB@hirez.programming.kicks-ass.net>

On 11/01/21 21:23, Peter Zijlstra wrote:
> On Mon, Jan 11, 2021 at 07:21:06PM +0000, Valentin Schneider wrote:
>> I'm less fond of the workqueue pcpu flag toggling, but it gets us what
>> we want: allow those threads to run on !active CPUs during online, but
>> move them away before !online during offline.
>>
>> Before I get ahead of myself, do we *actually* require that first part
>> for workqueue kthreads? I'm thinking (raise alarm) we could try another
>> approach of making them pcpu kthreads that don't abide by the !active &&
>> online rule.
>
> There is code that really requires percpu workqueues to be percpu. Such
> code will flush the percpu workqueue on hotplug and never hit the unbind
> scenario.
>
> Other code uses those same percpu workqueues but only uses them as a
> performance enhancer: it likes things to stay local, but if not, meh..
> And these users are what got us the weird-ass semantics of workqueue.
>
> Sadly workqueue itself can't tell them apart.
>

Oh well...

FWIW now that I've unconfused myself, that does look okay.
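
For the archives, here's my mental model of the toggling, as a sketch
only; the helper name and its bool flavour are guessed from the
kernel/kthread.c hunk in your diffstat, and locking is elided:

	/*
	 * CPU going up: rebind_workers() marks the pool's workers as
	 * strictly per-CPU again, so the scheduler may leave them on
	 * the !active-but-online CPU:
	 */
	for_each_pool_worker(worker, pool) {
		kthread_set_per_cpu(worker->task, true); /* assumed setter */
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);
	}

	/*
	 * CPU going down: unbind_workers() drops the flag first, so
	 * balance_push() is then free to migrate the workers off the
	 * dying CPU before it leaves the online mask:
	 */
	for_each_pool_worker(worker, pool)
		kthread_set_per_cpu(worker->task, false);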

>> > ---
>> >  include/linux/kthread.h |  3 +++
>> >  kernel/kthread.c        | 25 ++++++++++++++++++++++++-
>> >  kernel/sched/core.c     |  2 +-
>> >  kernel/sched/sched.h    |  4 ++--
>> >  kernel/smpboot.c        |  1 +
>> >  kernel/workqueue.c      | 12 +++++++++---
>> >  6 files changed, 40 insertions(+), 7 deletions(-)
>> >
>> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> > index 15d2562118d1..e71f9e44789e 100644
>> > --- a/kernel/sched/core.c
>> > +++ b/kernel/sched/core.c
>> > @@ -7277,7 +7277,7 @@ static void balance_push(struct rq *rq)
>> >        * Both the cpu-hotplug and stop task are in this case and are
>> >        * required to complete the hotplug process.
>> >        */
>> > -	if (is_per_cpu_kthread(push_task) || is_migration_disabled(push_task)) {
>> > +	if (rq->idle == push_task || is_per_cpu_kthread(push_task) || is_migration_disabled(push_task)) {
>>
>> I take it the p->set_child_tid thing you were complaining about on IRC
>> is what prevents us from having the idle task seen as a pcpu kthread?
>
> Yes, so to_kthread() only tests PF_KTHREAD and then assumes
> p->set_child_tid points to a struct kthread, _however_ init_task has
> PF_KTHREAD set, but a NULL ->set_child_tid.
>
> This then means that to_kthread() malfunctions on the boot CPU's idle
> thread, which will certainly not have KTHREAD_IS_PER_CPU set. By
> construction (fork_idle()) none of the other idle threads will have
> it set either.
>
> For fun and giggles, init (pid-1) will have PF_KTHREAD set for a while
> as well, until we exec /sbin/init.
>
> Anyway, idle will fail kthread_is_per_cpu(), and hence without the
> above, we'll try and push the idle task away, which results in much
> fail.
>

Quite!
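
Spelling the footgun out for future readers (my paraphrase of the
kernel/kthread.c helper, not necessarily verbatim):

	static inline struct kthread *to_kthread(struct task_struct *k)
	{
		WARN_ON(!(k->flags & PF_KTHREAD));
		return (__force void *)k->set_child_tid;
	}

	/*
	 * init_task has PF_KTHREAD set but ->set_child_tid == NULL, so
	 * on the boot CPU to_kthread(rq->idle) yields NULL rather than
	 * a struct kthread, and kthread_is_per_cpu() can't say yes for
	 * any idle task; hence the explicit rq->idle == push_task test
	 * in the balance_push() hunk above.
	 */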

>> Also, shouldn't this be done before the previous set_cpus_allowed_ptr()
>> call (in the same function)?
>
> Don't see why; we need nr_cpus_allowed == 1, so best do it after, right?
>

Duh, yes.
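
i.e. affinity first so nr_cpus_allowed drops to 1, flag second;
something like this in worker_attach_to_pool() (sketch only, the
POOL_DISASSOCIATED gate is my guess at the actual condition):

	/*
	 * Pin the worker first: pool->attrs->cpumask is a single CPU
	 * for percpu pools, so this gets us nr_cpus_allowed == 1 ...
	 */
	set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);

	/*
	 * ... and only then advertise it as a per-CPU kthread, since
	 * is_per_cpu_kthread() also insists on nr_cpus_allowed == 1.
	 */
	if (!(pool->flags & POOL_DISASSOCIATED))
		kthread_set_per_cpu(worker->task, true); /* assumed setter */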

>> That is, if we patch
>> __set_cpus_allowed_ptr() to also use kthread_is_per_cpu().
>
> That seems wrong.
>

It is, apologies.

>> >       list_add_tail(&worker->node, &pool->workers);
>> >       worker->pool = pool;
