From: vincent.guittot@linaro.org (Vincent Guittot)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 5/5] sched: ARM: create a dedicated scheduler topology table
Date: Thu, 24 Apr 2014 09:30:07 +0200	[thread overview]
Message-ID: <CAKfTPtAOZqnP=sWeJuqFAqrGT8vLQZKj+tpqiwOTG37paXoGjg@mail.gmail.com> (raw)
In-Reply-To: <5357DBA5.1010106@arm.com>

On 23 April 2014 17:26, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
> On 23/04/14 15:46, Vincent Guittot wrote:
>> On 23 April 2014 13:46, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>>> Hi,
>>>

[snip]

>> You have an inconsistency in your topology description:
>
> That's true functionally, but I think that this is not the reason why
> the code in get_group()/build_sched_groups() isn't working correctly
> any more with this set-up.
>
> Like I said above, cpu_cpupower_flags() is just bogus, and I only
> added it to illustrate the issue (I shouldn't have used
> SD_SHARE_POWERDOMAIN in the first place).

More than the flag used in the example, it's about the cpumasks, which
are inconsistent across CPUs for the same level; the
build_sched_domain sequence relies on this consistency to build the
sched_groups.
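
To make the invariant concrete, here is a hypothetical helper (the
name and the check are invented for illustration; nothing like it
exists in the tree) expressing what build_sched_domains() implicitly
assumes of every topology level:

/*
 * Hypothetical sketch: for a topology level tl, every CPU inside the
 * span reported for 'cpu' must report that same span, otherwise
 * get_group() picks inconsistent group leaders.
 */
static bool tl_mask_consistent(struct sched_domain_topology_level *tl,
			       int cpu)
{
	const struct cpumask *span = tl->mask(cpu);
	int i;

	for_each_cpu(i, span)
		if (!cpumask_equal(span, tl->mask(i)))
			return false;

	return true;
}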

> Essentially, what I want to do is bind an SD_SHARE_*FOO* flag to the
> GDIE-related sd's of CPU2/3/4 and not to the DIE-related sd's of CPU0/1.
>
> I thought so far that I could achieve that by getting rid of the GDIE
> sd level for CPU0/1, simply by choosing the cpu_foo_mask() function in
> such a way that it returns the same cpu mask as its child sd level
> (MC), and of the DIE sd level for CPU2/3/4, because there it returns
> the same cpu mask as its child sd level (GDIE). This lets sd
> degenerate do its job of folding sd levels, which it does. The only
> problem I have is that the groups are not created correctly any more.
>
> I don't see right now how the flag SD_SHARE_FOO affects the code in
> get_group()/build_sched_groups().
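
(For what it's worth, the flag indeed doesn't enter there. Quoting
get_group() roughly from memory for kernels of that era, so treat the
details as approximate: only the child level's span is used to pick
the group leader, which is why inconsistent masks, not flags, break
the group construction.)

static int get_group(int cpu, struct sd_data *sdd, struct sched_group **sg)
{
	struct sched_domain *sd = *per_cpu_ptr(sdd->sd, cpu);
	struct sched_domain *child = sd->child;

	/* The group leader is the first CPU of the child's span. */
	if (child)
		cpu = cpumask_first(sched_domain_span(child));

	if (sg) {
		*sg = *per_cpu_ptr(sdd->sg, cpu);
		(*sg)->sgp = *per_cpu_ptr(sdd->sgp, cpu);
	}

	return cpu;
}
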
>
> Think of SD_SHARE_FOO as something I would like to have for all sd's
> of the CPUs of cluster 1 (CPU2/3/4), and not of cluster 0 (CPU0/1), in
> the sd level where each CPU sees two groups (group0 containing CPU0/1
> and group1 containing CPU2/3/4, or vice versa) (GDIE/DIE).
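
In other words, something like this hypothetical mask function (the
function and cluster-mask names are invented for the example):

/*
 * On cluster 0 (CPU0/1) the GDIE span collapses onto the MC child
 * span, so sd degenerate folds the level away; on cluster 1
 * (CPU2/3/4) it spans all five CPUs and would carry SD_SHARE_FOO.
 */
static const struct cpumask *cpu_gdie_mask(int cpu)
{
	if (cpumask_test_cpu(cpu, &cluster0_mask))	/* CPU0/1 */
		return cpu_coregroup_mask(cpu);		/* 0-1: same as MC */

	return cpu_cpu_mask(cpu);			/* 0-4 */
}

Note that this is exactly the inconsistency quoted further below:
CPU1 then reports a span of 0-1 at that level while CPU2 reports 0-4.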

I'm not sure that it's feasible, because from a topology point of
view it's not possible to have different flags if the span includes
all CPUs. Could you give us more details about what you want to
achieve with this flag?
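
For reference, flags in this series are attached per topology level;
the ARM table from patch 5/5, quoted roughly from memory (so treat as
approximate), looks like this:

static inline int cpu_corepower_flags(void)
{
	return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN;
}

static struct sched_domain_topology_level arm_topology[] = {
#ifdef CONFIG_SCHED_MC
	/* GMC: cores sharing a power domain within a cluster */
	{ cpu_corepower_mask, cpu_corepower_flags, SD_INIT_NAME(GMC) },
	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
#endif
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};

Every CPU whose sd is built from a given level gets the same flags
over that level's span, so a full-span level cannot carry a flag for
CPU2/3/4 only.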

Vincent

>
> -- Dietmar
>
>>
>> At GDIE level:
>> CPU1 cpu_cpupower_mask=0-1
>> but
>> CPU2 cpu_cpupower_mask=0-4
>>
>> so CPU2 says that it shares a power domain with CPU0/1, but CPU1 says
>> the opposite.
>>
>> Regards
>> Vincent
>>
>>>
>>> Firstly, I had to get rid of the cpumask_equal(cpu_map,
>>> sched_domain_span(sd)) condition in build_sched_domains() to allow
>>> having two sd levels which span CPUs 0-4 (for CPU2/3/4).
>>>
>>> But it still doesn't work correctly:
>>>
>>> dmesg snippet 2:
>>>
>>> CPU0 attaching sched-domain:
>>>  domain 0: span 0-1 level MC
>>>   groups: 0 1
>>>   domain 1: span 0-4 level DIE     <-- error (there's only one group)
>>>    groups: 0-4 (cpu_power = 2048)
>>> ...
>>> CPU2 attaching sched-domain:
>>>  domain 0: span 2-4 level MC
>>>   groups: 2 3 4
>>>   domain 1: span 0-4 level GDIE
>>> ERROR: domain->groups does not contain CPU2
>>>    groups:
>>> ERROR: domain->cpu_power not set
>>>
>>> ERROR: groups don't span domain->span
>>> ...
>>>

[snip]


Thread overview: 25+ messages
2014-04-11  9:44 [PATCH v4 0/5] rework sched_domain topology description Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 1/5] sched: rework of sched_domain topology definition Vincent Guittot
2014-04-18 10:56   ` Peter Zijlstra
2014-04-18 11:34     ` [PATCH] fix: " Vincent Guittot
2014-04-18 11:39       ` Peter Zijlstra
2014-04-18 11:34     ` [PATCH v4 1/5] " Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 2/5] sched: s390: create a dedicated topology table Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 3/5] sched: powerpc: " Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 4/5] sched: add a new SD_SHARE_POWERDOMAIN for sched_domain Vincent Guittot
2014-04-18 10:58   ` Peter Zijlstra
2014-04-18 11:54     ` [PATCH] fix: sched: rework of sched_domain topology definition Vincent Guittot
2014-04-18 11:54     ` [PATCH v4 4/5] sched: add a new SD_SHARE_POWERDOMAIN for sched_domain Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 5/5] sched: ARM: create a dedicated scheduler topology table Vincent Guittot
2014-04-23 11:46   ` Dietmar Eggemann
2014-04-23 14:46     ` Vincent Guittot
2014-04-23 15:26       ` Dietmar Eggemann
2014-04-24  7:30         ` Vincent Guittot [this message]
2014-04-24 12:48           ` Dietmar Eggemann
2014-04-25  7:45             ` Vincent Guittot
2014-04-25 15:55               ` Dietmar Eggemann
2014-04-25 16:04               ` Peter Zijlstra
2014-04-25 16:05                 ` Peter Zijlstra
2014-04-12 12:56 ` [PATCH v4 0/5] rework sched_domain topology description Dietmar Eggemann
2014-04-14  7:29   ` Vincent Guittot
2014-04-15  7:53   ` Peter Zijlstra
