From: Josef Bacik <jbacik@fb.com>
To: Peter Zijlstra <peterz@infradead.org>, Rik van Riel <riel@redhat.com>
Cc: <mingo@redhat.com>, <linux-kernel@vger.kernel.org>,
<umgwanakikbuti@gmail.com>, <morten.rasmussen@arm.com>,
kernel-team <Kernel-team@fb.com>
Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE
Date: Wed, 3 Jun 2015 10:49:53 -0400 [thread overview]
Message-ID: <556F1411.6050206@fb.com> (raw)
In-Reply-To: <1433341448.1495.4.camel@twins>
On 06/03/2015 10:24 AM, Peter Zijlstra wrote:
> On Wed, 2015-06-03 at 10:12 -0400, Rik van Riel wrote:
>
>> There is a policy vs mechanism thing here. Ingo and Peter
>> are worried about the overhead in the mechanism of finding
>> an idle CPU. Your measurements show that the policy of
>> finding an idle CPU is the correct one.
>
> For his workload; I'm sure I can find a workload where it hurts.
>
> In fact, I'm fairly sure Mike knows one from the top of his head, seeing
> how he's the one playing about trying to shrink that idle search :-)
>
So the perf bench sched microbenchmarks are a pretty good analog for our
workload. I run

  perf bench sched messaging -g 100 -l 10000
  perf bench sched pipe

5 times each and average the results; the messaging benchmark is the
closest analog and the one I pay attention to. I get about 56 seconds of
runtime on plain 4.0 and 47 seconds patched. That's how I sanity-check my
little experiments before running the full real workload.
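For reference, the harness is roughly the following sketch. The
averaging is plain awk; the parsing of perf bench's "Total time" line is
an assumption about its output format, so the commented-out perf loop is
illustrative, with hard-coded sample runtimes standing in for real runs:

```shell
# avg: mean of one number per line on stdin, printed to 3 decimals
avg() { awk '{ s += $1; n++ } END { printf "%.3f\n", s / n }'; }

# In practice I feed it five benchmark runtimes, e.g.:
#   for i in 1 2 3 4 5; do
#     perf bench sched messaging -g 100 -l 10000 2>&1 \
#       | awk '/Total time/ {print $3}'
#   done | avg
# Here, three sample runtimes stand in for real perf runs:
printf '56.1\n55.8\n56.4\n' | avg
```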
I don't want to tune the scheduler just for our workload, but the
microbenchmarks we have are showing the same performance improvements. I
would be very interested in workloads where this patch doesn't help, so
we could integrate them into perf bench sched and be more confident about
making policy changes in the scheduler. So Mike, if you have something
specific in mind, please elaborate; I'm happy to do the legwork to get it
into perf bench and to test things until we're happy.
In the meantime I really want to get this fixed for us. I do not want to
carry some out-of-tree patch around until we rebase again next year and
then do this whole dance again. What would be the way forward for getting
this fixed now? Do I need to hide it behind a sysctl or config option?
Thanks,
Josef