From: Vincent Guittot <vincent.guittot@linaro.org>
To: "Li, Aubrey" <aubrey.li@linux.intel.com>
Cc: Mel Gorman <mgorman@suse.de>, Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>,
Valentin Schneider <valentin.schneider@arm.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
linux-kernel <linux-kernel@vger.kernel.org>,
Qais Yousef <qais.yousef@arm.com>,
Jiang Biao <benbjiang@gmail.com>
Subject: Re: [RFC PATCH v2] sched/fair: select idle cpu from idle cpumask in sched domain
Date: Fri, 25 Sep 2020 18:45:14 +0200
Message-ID: <20200925164514.GA6432@vingu-book>
In-Reply-To: <aff0c293-cf4d-9770-cd54-fc0d06662f09@linux.intel.com>
On Friday, 25 Sep 2020 at 17:21:46 (+0800), Li, Aubrey wrote:
> Hi Vincent,
>
> On 2020/9/24 21:09, Vincent Guittot wrote:
> >>>>
> >>>> Would you mind sharing your uperf (netperf-like load) results on your
> >>>> side? That's the workload where I have seen the most benefit from this
> >>>> patch under heavy load.
> >>>
> >>> With uperf, I've got the same kind of result as with sched pipe:
> >>> tip/sched/core: Throughput 24.83Mb/s (+/- 0.09%)
> >>> with this patch: Throughput 19.02Mb/s (+/- 0.71%), which is a 23%
> >>> regression, as with sched pipe.
> >>>
> >> In case this is caused by the logic error in this patch (sorry again), did
> >> you see any improvement with patch v2? Though it does not help for the
> >> nohz=off case, I just want to know whether it helps at all on the ARM
> >> platform.
> >
> > With v2, which rate limits the update of the cpumask (but doesn't
> > support sched_idle tasks), I don't see any performance impact:
>
> I agree we should go with the rate-limited cpumask update.
>
> And I think no performance impact is expected for sched-pipe, as this workload
> has only 2 threads and the platform has 8 cores, so mostly the previous cpu is
> returned, and even when select_idle_sibling is called, select_idle_core is hit
> and select_idle_cpu is rarely called.
My platform is not SMT, so select_idle_core is a nop. Nevertheless,
select_idle_cpu is almost never called because, in our case, prev is idle and
gets selected before we reach it.
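
For context, here is a simplified sketch of that ordering in
select_idle_sibling(); it is not the verbatim mainline code (several checks
are omitted and the names are abridged), but it shows why the scan paths are
rarely reached for a 2-thread workload:

/*
 * Simplified sketch of select_idle_sibling()'s fast path: target and prev
 * are tried before any LLC scan, so with only 2 runnable threads on 8 cores
 * the scan helpers below are almost never reached.
 */
static int select_idle_sibling_sketch(struct task_struct *p, int prev, int target)
{
	struct sched_domain *sd;
	int i;

	if (available_idle_cpu(target))
		return target;

	/* prev is usually idle for a 2-thread ping-pong workload */
	if (prev != target && available_idle_cpu(prev))
		return prev;

	sd = rcu_dereference(per_cpu(sd_llc, target));
	if (!sd)
		return target;

	i = select_idle_core(p, sd, target);	/* nop on non-SMT platforms */
	if ((unsigned int)i < nr_cpumask_bits)
		return i;

	i = select_idle_cpu(p, sd, target);	/* scans the LLC for an idle cpu */
	if ((unsigned int)i < nr_cpumask_bits)
		return i;

	return target;
}
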
>
> But I'm more curious why there is a 23% performance penalty. For this patch,
> if you revert this change but keep the cpumask updated, is the 23% penalty
> still there?
>
> -	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +	cpumask_and(cpus, sds_idle_cpus(sd->shared), p->cpus_ptr);
I was about to say that reverting this line should not change anything because
we never reach this point, but in fact it does. After looking at a trace, I can
see that the 2 threads of perf bench sched pipe are on the same CPU and that
sds_idle_cpus(sd->shared) is always empty. In fact, rq->curr is not yet idle
and still points to the cfs task when update_idle_cpumask() is called. This
means that once the bit is cleared, it is never set again.
You can remove the test in update_idle_cpumask(), which is called either when
entering idle or when only sched_idle tasks are runnable:
@@ -6044,8 +6044,7 @@ void update_idle_cpumask(struct rq *rq)
 	sd = rcu_dereference(per_cpu(sd_llc, cpu));
 	if (!sd || !sd->shared)
 		goto unlock;
-	if (!available_idle_cpu(cpu) || !sched_idle_cpu(cpu))
-		goto unlock;
+
 	cpumask_set_cpu(cpu, sds_idle_cpus(sd->shared));
 unlock:
 	rcu_read_unlock();
With this fix, the performance decrease is only 2%.
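
For clarity, this is roughly what update_idle_cpumask() looks like with the
test removed. It is reconstructed from the hunk above plus the RFC patch
context, so anything outside the hunk (e.g. the cpu_of(rq) lookup) is an
approximation rather than a verbatim copy:

/*
 * Sketch of update_idle_cpumask() after dropping the availability test.
 * Called when the cpu enters idle (or runs only sched_idle tasks), it
 * marks this cpu in the LLC-wide idle cpumask so select_idle_cpu() can
 * restrict its scan to cpus that are actually idle.
 */
void update_idle_cpumask(struct rq *rq)
{
	struct sched_domain *sd;
	int cpu = cpu_of(rq);

	rcu_read_lock();
	sd = rcu_dereference(per_cpu(sd_llc, cpu));
	if (!sd || !sd->shared)
		goto unlock;

	cpumask_set_cpu(cpu, sds_idle_cpus(sd->shared));
unlock:
	rcu_read_unlock();
}
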
>
> I just wonder if it's caused by the atomic ops, as you have two cache domains
> with sd_llc(?). Do you have an x86 machine to make a comparison? It's hard for
> me to find an ARM machine, but I'll try.
>
> Also, for the uperf workload (task thread num = cpu num), how does it look
> with patch v2? Any performance impact?
With v2: Throughput 24.97Mb/s (+/- 0.07%), so there is no perf regression.
>
>
> Thanks,
> -Aubrey