From: Josef Bacik <jbacik@fb.com>
To: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>, <riel@redhat.com>,
	<mingo@redhat.com>, <linux-kernel@vger.kernel.org>,
	<morten.rasmussen@arm.com>, kernel-team <Kernel-team@fb.com>
Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE
Date: Thu, 2 Jul 2015 13:44:17 -0400	[thread overview]
Message-ID: <55957871.7080906@fb.com> (raw)
In-Reply-To: <1434600765.3393.9.camel@gmail.com>

On 06/18/2015 12:12 AM, Mike Galbraith wrote:
> On Wed, 2015-06-17 at 20:46 -0700, Josef Bacik wrote:
>> On 06/17/2015 05:55 PM, Mike Galbraith wrote:
>>> On Wed, 2015-06-17 at 11:06 -0700, Josef Bacik wrote:
>>>> On 06/11/2015 10:35 PM, Mike Galbraith wrote:
>>>>> On Thu, 2015-05-28 at 13:05 +0200, Peter Zijlstra wrote:
>>>
>>>>> If sd == NULL, we fall through and try to pull wakee despite nacked-by
>>>>> tsk_cpus_allowed() or wake_affine().
>>>>>
>>>>
>>>> So maybe add a check in the if (sd_flag & SD_BALANCE_WAKE) for something
>>>> like this
>>>>
>>>> if (tmp >= 0) {
>>>> 	new_cpu = tmp;
>>>> 	goto unlock;
>>>> } else if (!want_affine) {
>>>> 	new_cpu = prev_cpu;
>>>> }
>>>>
>>>> so we can make sure we're not being pushed onto a cpu that we aren't
>>>> allowed on?  Thanks,
>>>
>>> The buglet is a messenger methinks.  You saying the patch helped without
>>> SD_BALANCE_WAKE being set is why I looked.  The buglet would seem to say
>>> that preferring cache is not harming your load after all.  It now sounds
>>> as though wake_wide() may be what you're squabbling with.
>>>
>>> Things aren't adding up all that well.
>>
>> Yeah I'm horribly confused.  The other thing is I had to switch clusters
>> (I know, I know, I'm changing the parameters of the test).  So these new
>> boxes are haswell boxes, but basically the same otherwise, 2 socket 12
>> core with HT, just newer/faster CPUs.  I'll re-run everything again and
>> give the numbers so we're all on the same page again, but as it stands
>> now I think we have this
>>
>> 3.10 with wake_idle forward ported - good
>> 4.0 stock - 20% perf drop
>> 4.0 w/ Peter's patch - good
>> 4.0 w/ Peter's patch + SD_BALANCE_WAKE - 5% perf drop
>>
>> I can do all these iterations again to verify, is there any other
>> permutation you'd like to see?  Thanks,
>
> Yeah, after re-baseline, please apply/poke these buttons individually in
> 4.0-virgin.
>
> (cat /sys/kernel/debug/sched_features, prepend NO_, echo it back)
>

Sorry it took me a while to get these numbers to you; migrating the
whole fleet to a new setup broke our performance test suite, so I've
only just been able to run tests again.  I'll do my best to describe
what is going on and hopefully that will make the results make sense.

This is on our webservers, which run HHVM.  A request comes in for a
page and lands on one of the two hhvm.node.# threads, one thread per
NUMA node.  From there it is farmed off to one of the worker threads.
If there are no idle workers, the request gets put on what is called
the "select_queue".  In a perfect world the select_queue never grows
beyond 0; if it does, we've hit latency somewhere, and that's not good.
The other measurement we care about is how long a thread spends on a
request before it sends a response (this is the actual work being done).
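To make that dispatch path concrete, here is a minimal, purely
illustrative Python model of the flow described above (the names Node,
idle_workers, and select_queue are my sketch, not HHVM's actual
internals): a request goes straight to an idle worker if one exists,
otherwise onto the select queue, whose depth is the latency signal.

```python
from collections import deque

class Node:
    """Illustrative model of one hhvm.node.# thread and its workers."""
    def __init__(self, n_workers):
        self.idle_workers = list(range(n_workers))  # worker ids ready for work
        self.select_queue = deque()                 # requests with no idle worker

    def dispatch(self, request):
        # Ideal case: hand the request straight to an idle worker.
        if self.idle_workers:
            return ("worker", self.idle_workers.pop())
        # Overload case: queue it; any depth > 0 means added latency.
        self.select_queue.append(request)
        return ("queued", len(self.select_queue))

node = Node(n_workers=2)
print(node.dispatch("req1"))  # handed to an idle worker
print(node.dispatch("req2"))  # handed to the other worker
print(node.dispatch("req3"))  # no idle workers left -> select_queue depth 1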

Our tester slowly increases load to a group of servers until the select
queue is consistently >= 1.  That means we've loaded the boxes so high
that they can't process requests as soon as they come in.  Then it
backs down and ramps up a second time.  It takes all of these
measurements and puts them into these pretty graphs.  There are 2 graphs
we care about: the duration of the requests vs the requests per second,
and the probability that our select queue is >= 1 vs requests per second.
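The table entries further down are read off that second curve.  Purely
as an illustration (this helper and the sample curve are hypothetical,
not part of our real harness), linear interpolation recovers the RPS at
which P(select queue >= 1) crosses a given level:

```python
def rps_at_probability(samples, target):
    """samples: list of (rps, probability) points sorted by rps.
    Returns the interpolated RPS where probability first reaches target."""
    for (r0, p0), (r1, p1) in zip(samples, samples[1:]):
        if p0 <= target <= p1:
            # Linear interpolation between the two bracketing samples.
            frac = (target - p0) / (p1 - p0)
            return r0 + frac * (r1 - r0)
    return None  # target never crossed in the measured range

# Hypothetical S-shaped curve, roughly matching the "4.0 plain" row below.
curve = [(350, 0.0), (371, 0.25), (388, 0.50), (402, 0.75), (410, 1.0)]
print(rps_at_probability(curve, 0.50))  # -> 388.0
```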

Now for 3.10 vs 4.0 our request duration time is the same if not 
slightly better on 4.0, so once the workers are doing their job 
everything is a-ok.

The problem is that the probability of the select queue being >= 1 is
way different on 4.0 vs 3.10.  Normally this graph looks like an S:
it's essentially 0 up to some RPS (requests per second) threshold, then
shoots up to 100% past the threshold.  I'll make a table of these
graphs that hopefully makes sense.  The numbers differ from run to run
because of traffic and such, but the test and control are always run at
the same time.  The header row is the probability that the select queue
is >= 1; the cells are the RPS at which that probability is reached.

		25%	50%	75%
4.0 plain: 	371	388	402
control:	386	394	402
difference:	15	6	0

So with 4.0 it's basically a straight line: at lower RPS we are already
seeing a higher probability of the select queue being >= 1.  We are
also measuring the average cpu delay (in ms) from the scheduler's
netlink delay accounting, which is how I noticed this was scheduler
related; our cpu delay is way higher on 4.0 than it is on 3.10 or on
4.0 with the wake idle patch.
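(For anyone following along: that "cpu delay" is the run-queue delay
from the kernel's per-task delay accounting, exported over the
taskstats netlink interface and read by tools like getdelays.  The same
cumulative figure also shows up as the second field of
/proc/<pid>/schedstat when schedstats are enabled.  A rough,
illustrative parser for that three-field line, values in nanoseconds;
the sample string is made up, not a real measurement:)

```python
def parse_schedstat(line):
    """Parse a /proc/<pid>/schedstat line: cumulative runtime (ns),
    cumulative run-queue delay (ns), and number of timeslices."""
    runtime_ns, delay_ns, slices = (int(f) for f in line.split())
    return {
        "runtime_ms": runtime_ns / 1e6,
        "rq_delay_ms": delay_ns / 1e6,  # time runnable but waiting for a cpu
        "timeslices": slices,
    }

sample = "123456789 4200000 57"     # hypothetical example values
stats = parse_schedstat(sample)
print(stats["rq_delay_ms"])         # -> 4.2
```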

So the next test is NO_PREFER_IDLE.  This is slightly better than 4.0 plain:
		25%	50%	75%
NO_PREFER_IDLE:	399	401	414
control:	385	408	416
difference:	14	7	2

The numbers don't really show it well, but the graphs are closer
together and the curve is slightly more S-shaped, though still not great.

Next is NO_WAKE_WIDE, which is horrible:

		25%	50%	75%
NO_WAKE_WIDE:	315	344	369
control:	373	380	388
difference:	58	36	19

This isn't even in the same ballpark; it's a far worse regression than
plain 4.0.

The next bit is NO_WAKE_WIDE|NO_PREFER_IDLE, which is just as bad:

		25%	50%	75%
EVERYTHING:	327	360	383
control:	381	390	399
difference:	54	30	19

Hopefully that helps.  Thanks,

Josef

