From: Michael Wang <wangyun@linux.vnet.ibm.com>
To: Mike Galbraith <bitbucket@online.de>
Cc: linux-kernel@vger.kernel.org, mingo@redhat.com,
	peterz@infradead.org, mingo@kernel.org, a.p.zijlstra@chello.nl
Subject: Re: [RFC PATCH 0/2] sched: simplify the select_task_rq_fair()
Date: Wed, 23 Jan 2013 15:10:16 +0800
Message-ID: <50FF8CD8.4060105@linux.vnet.ibm.com>
In-Reply-To: <1358922520.5752.91.camel@marge.simpson.net>

On 01/23/2013 02:28 PM, Mike Galbraith wrote:
> On Wed, 2013-01-23 at 13:09 +0800, Michael Wang wrote: 
>> On 01/23/2013 12:31 PM, Mike Galbraith wrote:
> 
>>> Another thing that wants fixing: root can set flags for _existing_
>>> domains any way he likes,
>>
>> Can he? Change the domain flags at runtime? I do remember I sent out a
>> patch to achieve that, but it was refused since it's dangerous...
> 
> Yes, flags can be set any way you like, which works just fine when flags
> are evaluated at runtime.
> 
> WRT dangerous: if root says "Let there be stupidity", stupidity should
> appear immediately :)
> 
>>> but when he invokes godly powers to rebuild
>>> domains, he gets what's hard coded, which is neither clever (godly
>>> wrath;), nor wonderful for godly runtime path decisions.
>>
>> The purpose is to use a map to describe the sd topology of a cpu; it
>> should be rebuilt correctly according to the new topology when a new
>> domain is attached to a cpu.
> 
> Try turning FORK/EXEC/WAKE on/off.
> 
> echo [01] > [cpuset]/sched_load_balance will rebuild, but the resulting
> domains won't reflect your flag change.

Yeah, I did some testing on that previously, but I failed to trigger the
rebuild procedure; it needs more research.
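
(Just for reference, and only as an illustration -- this is not part of the
patch set: below is a minimal user-space sketch of what "root sets flags for
an existing domain at runtime" looks like. It assumes CONFIG_SCHED_DEBUG is
enabled so that /proc/sys/kernel/sched_domain/ exists, and it uses the
SD_BALANCE_WAKE value 16 shown in your flag dump below. A change made this
way is picked up on the very next wakeup, but a cpuset-triggered rebuild
recreates the domains from the hard-coded defaults and discards it.)

/* toggle_wake_balance.c - illustration only, not from the patch set */
#include <stdio.h>

#define FLAGS_PATH "/proc/sys/kernel/sched_domain/cpu0/domain0/flags"
#define SD_BALANCE_WAKE 16	/* value as listed in the flag dump below */

int main(void)
{
	unsigned int flags;
	FILE *f;

	/* read the current flags of cpu0/domain0 */
	f = fopen(FLAGS_PATH, "r");
	if (!f || fscanf(f, "%u", &flags) != 1) {
		perror(FLAGS_PATH);
		return 1;
	}
	fclose(f);

	/* flip SD_BALANCE_WAKE and write the flags back (needs root) */
	flags ^= SD_BALANCE_WAKE;

	f = fopen(FLAGS_PATH, "w");
	if (!f || fprintf(f, "%u\n", flags) < 0) {
		perror(FLAGS_PATH);
		return 1;
	}
	fclose(f);

	printf("cpu0/domain0 flags are now %u\n", flags);
	return 0;
}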

> 
>> For this case, it's really strange that level 2 is missing from the
>> topology. I found that in build_sched_domains() the level is incremented
>> one by one, and I don't know why it jumps here... sounds like some BUG
>> to me.
>>
>> Regardless, the sbm should still work properly by design, even with such
>> a strange topology, as long as it's initialized correctly.
>>
>> And the patch below should help with it; it's based on the original
>> patch set.
>>
>> Could you please give it a try? It's supposed to make the balance path
>> behave correctly. Please also apply the DEBUG patch below, so we can see
>> how things change. I think this time we may be able to solve the issue
>> the right way ;-)
> 
> Done, previous changes backed out, new change applied on top of v2 set.
> Full debug output attached.
> 
> Domain flags on this box (bogus CPU domain is still patched away).
> 
> monteverdi:/abuild/mike/aim7/:[127]# tune-sched-domains
> usage: tune-sched-domains <val>
> {cpu0/domain0:SIBLING} SD flag: 687
> +   1: SD_LOAD_BALANCE:          Do load balancing on this domain
> +   2: SD_BALANCE_NEWIDLE:       Balance when about to become idle
> +   4: SD_BALANCE_EXEC:          Balance on exec
> +   8: SD_BALANCE_FORK:          Balance on fork, clone
> -  16: SD_BALANCE_WAKE:          Wake to idle CPU on task wakeup
> +  32: SD_WAKE_AFFINE:           Wake task to waking CPU
> -  64: SD_PREFER_LOCAL:          Prefer to keep tasks local to this domain
> + 128: SD_SHARE_CPUPOWER:        Domain members share cpu power
> - 256: SD_POWERSAVINGS_BALANCE:  Balance for power savings
> + 512: SD_SHARE_PKG_RESOURCES:   Domain members share cpu pkg resources
> -1024: SD_SERIALIZE:             Only a single load balancing instance
> -2048: SD_ASYM_PACKING:          Place busy groups earlier in the domain
> -4096: SD_PREFER_SIBLING:        Prefer to place tasks in a sibling domain
> -8192: SD_PREFER_UTILIZATION:    Prefer utilization over SMP nice
> {cpu0/domain1:MC} SD flag: 559
> +   1: SD_LOAD_BALANCE:          Do load balancing on this domain
> +   2: SD_BALANCE_NEWIDLE:       Balance when about to become idle
> +   4: SD_BALANCE_EXEC:          Balance on exec
> +   8: SD_BALANCE_FORK:          Balance on fork, clone
> -  16: SD_BALANCE_WAKE:          Wake to idle CPU on task wakeup
> +  32: SD_WAKE_AFFINE:           Wake task to waking CPU
> -  64: SD_PREFER_LOCAL:          Prefer to keep tasks local to this domain
> - 128: SD_SHARE_CPUPOWER:        Domain members share cpu power
> - 256: SD_POWERSAVINGS_BALANCE:  Balance for power savings
> + 512: SD_SHARE_PKG_RESOURCES:   Domain members share cpu pkg resources
> -1024: SD_SERIALIZE:             Only a single load balancing instance
> -2048: SD_ASYM_PACKING:          Place busy groups earlier in the domain
> -4096: SD_PREFER_SIBLING:        Prefer to place tasks in a sibling domain
> -8192: SD_PREFER_UTILIZATION:    Prefer utilization over SMP nice
> {cpu0/domain2:NUMA} SD flag: 9263
> +   1: SD_LOAD_BALANCE:          Do load balancing on this domain
> +   2: SD_BALANCE_NEWIDLE:       Balance when about to become idle
> +   4: SD_BALANCE_EXEC:          Balance on exec
> +   8: SD_BALANCE_FORK:          Balance on fork, clone
> -  16: SD_BALANCE_WAKE:          Wake to idle CPU on task wakeup
> +  32: SD_WAKE_AFFINE:           Wake task to waking CPU
> -  64: SD_PREFER_LOCAL:          Prefer to keep tasks local to this domain
> - 128: SD_SHARE_CPUPOWER:        Domain members share cpu power
> - 256: SD_POWERSAVINGS_BALANCE:  Balance for power savings
> - 512: SD_SHARE_PKG_RESOURCES:   Domain members share cpu pkg resources
> +1024: SD_SERIALIZE:             Only a single load balancing instance
> -2048: SD_ASYM_PACKING:          Place busy groups earlier in the domain
> -4096: SD_PREFER_SIBLING:        Prefer to place tasks in a sibling domain
> +8192: SD_PREFER_UTILIZATION:    Prefer utilization over SMP nice

I will study this BUG candidate later.
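
(What I plan to check is basically a dump of each cpu's domain chain --
level, name and flags -- to see where the level jump comes from. Something
roughly like the kernel-side sketch below; this is only an illustration,
not the DEBUG patch itself, and sd->name is only available with
CONFIG_SCHED_DEBUG:)

/* rough sketch of a per-cpu domain dump, illustration only */
static void dump_cpu_domains(int cpu)
{
	struct sched_domain *sd;

	rcu_read_lock();
	for_each_domain(cpu, sd)
		printk(KERN_INFO "cpu%d: level %d name %s flags 0x%x\n",
		       cpu, sd->level, sd->name, sd->flags);
	rcu_read_unlock();
}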

> 
> Abbreviated test run:
> Tasks    jobs/min  jti  jobs/min/task      real       cpu
>   640   158044.01   81       246.9438     24.54    577.66   Wed Jan 23 07:14:33 2013
>  1280    50434.33   39        39.4018    153.80   5737.57   Wed Jan 23 07:17:07 2013
>  2560    47214.07   34        18.4430    328.58  12715.56   Wed Jan 23 07:22:36 2013

So it still doesn't work... Not entering the balance path on wakeup would
fix it, and that looks like the only choice if no error in the balance path
can be found... the benchmark wins again, and I'm feeling bad...

I will summarize the info we have collected and prepare a v3 later.
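
(The direction I have in mind for v3, only as a sketch and not the actual
change: when no domain carries SD_BALANCE_WAKE, a plain wakeup should never
drop into the find_idlest_group()/find_idlest_cpu() slow path and should
stay with the idle-sibling fast path instead. In the context of
select_task_rq_fair(), using its local variables, that would look roughly
like:)

	/*
	 * Sketch only: skip the balance path for wakeups when no domain
	 * has SD_BALANCE_WAKE; fall back to the idle-sibling fast path.
	 */
	if ((sd_flag & SD_BALANCE_WAKE) && !sd) {
		new_cpu = select_idle_sibling(p, prev_cpu);
		goto unlock;
	}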

Regards,
Michael Wang

> 

