From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Jirka Hladky <jhladky@redhat.com>, Phil Auld <pauld@redhat.com>,
Ingo Molnar <mingo@kernel.org>,
Vincent Guittot <vincent.guittot@linaro.org>,
Juri Lelli <juri.lelli@redhat.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>,
Valentin Schneider <valentin.schneider@arm.com>,
Hillf Danton <hdanton@sina.com>,
LKML <linux-kernel@vger.kernel.org>,
Douglas Shakshober <dshaks@redhat.com>,
Waiman Long <longman@redhat.com>, Joe Mario <jmario@redhat.com>,
Bill Gray <bgray@redhat.com>
Subject: Re: [PATCH 00/13] Reconcile NUMA balancing decisions with the load balancer v6
Date: Fri, 15 May 2020 14:03:46 +0100
Message-ID: <20200515130346.GM3758@techsingularity.net>
In-Reply-To: <20200515111732.GS2957@hirez.programming.kicks-ass.net>

On Fri, May 15, 2020 at 01:17:32PM +0200, Peter Zijlstra wrote:
> On Fri, May 15, 2020 at 09:47:40AM +0100, Mel Gorman wrote:
>
> > However, the wakeups are so rapid that the wakeup
> > happens while the server is descheduling. That forces the waker to spin
> > on smp_cond_load_acquire for longer. In this case, it can be cheaper to
> > add the task to the rq->wake_list even if that potentially requires an IPI.
>
> Right, I think Rik ran into that as well at some point. He wanted to
> make ->on_cpu do a hand-off, but simply queueing the wakeup on the prev
> cpu (which is currently in the middle of schedule()) should be an easier
> proposition.
>
> Maybe something like this untested thing... could explode most mighty,
> didn't think too hard.
>
> ---
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index fa6c19d38e82..c07b92a0ee5d 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2312,7 +2312,7 @@ static void wake_csd_func(void *info)
>  	sched_ttwu_pending();
>  }
>  
> -static void ttwu_queue_remote(struct task_struct *p, int cpu, int wake_flags)
> +static void __ttwu_queue_remote(struct task_struct *p, int cpu, int wake_flags)
>  {
>  	struct rq *rq = cpu_rq(cpu);
>  
> @@ -2354,6 +2354,17 @@ bool cpus_share_cache(int this_cpu, int that_cpu)
>  {
>  	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
>  }
> +
> +static bool ttwu_queue_remote(struct task_struct *p, int cpu, int wake_flags)
> +{
> +	if (sched_feat(TTWU_QUEUE) && !cpus_share_cache(smp_processor_id(), cpu)) {
> +		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
> +		__ttwu_queue_remote(p, cpu, wake_flags);
> +		return true;
> +	}
> +
> +	return false;
> +}
>  #endif /* CONFIG_SMP */
>  
>  static void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
> @@ -2362,11 +2373,8 @@ static void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
>  	struct rq_flags rf;
>  
>  #if defined(CONFIG_SMP)
> -	if (sched_feat(TTWU_QUEUE) && !cpus_share_cache(smp_processor_id(), cpu)) {
> -		sched_clock_cpu(cpu); /* Sync clocks across CPUs */
> -		ttwu_queue_remote(p, cpu, wake_flags);
> +	if (ttwu_queue_remote(p, cpu, wake_flags))
>  		return;
> -	}
>  #endif
>  
>  	rq_lock(rq, &rf);
> @@ -2550,7 +2558,15 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>  	if (p->on_rq && ttwu_remote(p, wake_flags))
>  		goto unlock;
>  
> +	if (p->in_iowait) {
> +		delayacct_blkio_end(p);
> +		atomic_dec(&task_rq(p)->nr_iowait);
> +	}
> +
>  #ifdef CONFIG_SMP
> +	p->sched_contributes_to_load = !!task_contributes_to_load(p);
> +	p->state = TASK_WAKING;
> +
>  	/*
>  	 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
>  	 * possible to, falsely, observe p->on_cpu == 0.
> @@ -2581,15 +2597,10 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>  	 * This ensures that tasks getting woken will be fully ordered against
>  	 * their previous state and preserve Program Order.
>  	 */
> -	smp_cond_load_acquire(&p->on_cpu, !VAL);
> -
> -	p->sched_contributes_to_load = !!task_contributes_to_load(p);
> -	p->state = TASK_WAKING;
> +	if (READ_ONCE(p->on_cpu) && __ttwu_queue_remote(p, cpu, wake_flags))
> +		goto unlock;
>  
> -	if (p->in_iowait) {
> -		delayacct_blkio_end(p);
> -		atomic_dec(&task_rq(p)->nr_iowait);
> -	}
> +	smp_cond_load_acquire(&p->on_cpu, !VAL);
>  
>  	cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags);
>  	if (task_cpu(p) != cpu) {
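
On the spinning point in the first quoted paragraph: restating the
trade-off with the helpers from this patch (a sketch only, using
nothing that is not already in the diff):

	/*
	 * Today: the waker busy-waits until the previous CPU has
	 * finished descheduling p before the wakeup can make progress.
	 */
	smp_cond_load_acquire(&p->on_cpu, !VAL);

	/*
	 * Alternative: queue p on the previous CPU's rq->wake_list and
	 * let the IPI complete the wakeup so the waker never spins.
	 */
	__ttwu_queue_remote(p, cpu, wake_flags);
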
I don't see a problem with moving the update of p->state to the other
side of the barrier, but I'm relying on the comment that the barrier
is only concerned with on_rq and on_cpu.
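
For clarity, this is the reordering as I read it out of the diff
(context trimmed, no new code):

	/* Before: fields updated only after the spin on ->on_cpu */
	smp_cond_load_acquire(&p->on_cpu, !VAL);

	p->sched_contributes_to_load = !!task_contributes_to_load(p);
	p->state = TASK_WAKING;

	/* After: fields updated first, before any spin or remote queueing */
	p->sched_contributes_to_load = !!task_contributes_to_load(p);
	p->state = TASK_WAKING;

	if (READ_ONCE(p->on_cpu) && __ttwu_queue_remote(p, cpu, wake_flags))
		goto unlock;

	smp_cond_load_acquire(&p->on_cpu, !VAL);
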
However, I'm less sure about what exactly you intended to do.
__ttwu_queue_remote() returns void, so maybe you meant to use
ttwu_queue_remote(). In that case, we potentially avoid spinning on
on_cpu for wakeups between CPUs that do not share a cache, but it's not
clear why the optimisation should be specific to remote wakeups. If you
meant to call __ttwu_queue_remote() unconditionally, it's not clear why
that is now safe when smp_cond_load_acquire() used to insist on on_cpu
being 0 before either queueing the task for wakeup or waking it up
directly.
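
If ttwu_queue_remote() was the intent then, for illustration, I'm
guessing at something like this (untested sketch, simply substituting
the bool-returning helper into the new hunk):

	if (READ_ONCE(p->on_cpu) && ttwu_queue_remote(p, cpu, wake_flags))
		goto unlock;

	smp_cond_load_acquire(&p->on_cpu, !VAL);

That would confine the new queueing path to cross-cache wakeups gated
by the TTWU_QUEUE feature.
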
Also, because __ttwu_queue_remote() now happens before
select_task_rq(), is there not a risk that in some cases we end up
stacking tasks unnecessarily?
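
To make that concern concrete, at this point in try_to_wake_up() the
cpu variable still holds task_cpu(p), so as I read it the flow is
(annotated excerpt of the diff, comments mine):

	/* Queued on the task's previous CPU, select_task_rq() never runs */
	if (READ_ONCE(p->on_cpu) && __ttwu_queue_remote(p, cpu, wake_flags))
		goto unlock;

	smp_cond_load_acquire(&p->on_cpu, !VAL);

	/* Only the non-queued path gets a placement decision */
	cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags);

If the previous CPU is already overloaded, every wakeup that catches p
with on_cpu set goes straight back to the same runqueue with no input
from select_task_rq().
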
> @@ -2597,14 +2608,6 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>  		psi_ttwu_dequeue(p);
>  		set_task_cpu(p, cpu);
>  	}
> -
> -#else /* CONFIG_SMP */
> -
> -	if (p->in_iowait) {
> -		delayacct_blkio_end(p);
> -		atomic_dec(&task_rq(p)->nr_iowait);
> -	}
> -
>  #endif /* CONFIG_SMP */
>  
>  	ttwu_queue(p, cpu, wake_flags);
--
Mel Gorman
SUSE Labs