From: Chen Yu <yu.c.chen@intel.com>
To: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>,
Vincent Guittot <vincent.guittot@linaro.org>,
Ingo Molnar <mingo@redhat.com>,
Juri Lelli <juri.lelli@redhat.com>,
Mel Gorman <mgorman@techsingularity.net>,
Tim Chen <tim.c.chen@intel.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
"Steven Rostedt" <rostedt@goodmis.org>,
K Prateek Nayak <kprateek.nayak@amd.com>,
Abel Wu <wuyun.abel@bytedance.com>,
Yicong Yang <yangyicong@hisilicon.com>,
"Gautham R . Shenoy" <gautham.shenoy@amd.com>,
Len Brown <len.brown@intel.com>, Chen Yu <yu.chen.surf@gmail.com>,
Arjan Van De Ven <arjan.van.de.ven@intel.com>,
Aaron Lu <aaron.lu@intel.com>, Barry Song <baohua@kernel.org>,
<linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH] sched/fair: Introduce SIS_PAIR to wakeup task on local idle core first
Date: Thu, 18 May 2023 11:41:55 +0800 [thread overview]
Message-ID: <ZGWeg6UaZ3WJ6ykI@chenyu5-mobl1> (raw)
In-Reply-To: <a2a4cd5b398390dcf01b800c964b80c6eba89d18.camel@gmx.de>
On 2023-05-17 at 21:52:21 +0200, Mike Galbraith wrote:
> On Thu, 2023-05-18 at 00:57 +0800, Chen Yu wrote:
> > >
> > I'm thinking of two directions based on current patch:
> >
> > 1. Check the task duration, if it is a high speed ping-pong pair, let the
> > wakee search for an idle SMT sibling on current core.
> >
> > This strategy gives the best overall performance improvement, but
> > the short-task-duration tweak, being based on the online CPU number,
> > would be an obstacle.
>
> Duration is pretty useless, as it says nothing about concurrency.
> Taking the 500us metric as an example, one pipe ping-pong can meet
> that, and toss up to nearly 50% of throughput out the window if you
> stack based only on duration.
>
> > Or
> >
> > 2. Honor the idle core.
> >    That is to say, if there is an idle core in the system, choose that
> >    idle core first. Otherwise, fall back to searching for an idle SMT
> >    sibling rather than choosing an idle CPU on a random half-busy core.
> >
> > This strategy could partially mitigate the C2C overhead without
> > breaking the idle-core-first strategy. So I gave it a try; with the
> > above change, I did see some improvement when the system is around
> > half busy (after all, idle_has_core has to be false):
>
> If mitigation is the goal, and until the next iteration of socket
> growth that's not a waste of effort, continuing to honor idle core is
> the only option that has a ghost of a chance.
>
> That said, I don't like the waker/wakee have met heuristic much either,
> because tasks waking one another before can just as well mean they met
> at a sleeping lock, it does not necessarily imply latency bound IPC.
>
Yes, in the sleeping-lock case it does not matter whether the wakee runs
on the idle SMT sibling or on an idle CPU of another half-busy core. But
for a pair sharing data, it could bring a benefit.
> I haven't met a heuristic I like, and that includes the ones I invent.
> The smarter you try to make them, the more precious fast path cycles
> they eat, and there's a never ending supply of holes in the damn things
> that want plugging. A prime example was the SIS_CURRENT heuristic self
> destructing in my box, rendering that patch a not quite free noop :)
>
Yes, SIS_CURRENT is not a universal win.
thanks,
Chenyu