From: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 5/5] sched: ARM: create a dedicated scheduler topology table
Date: Thu, 24 Apr 2014 13:48:53 +0100
Message-ID: <53590835.9090803@arm.com>
In-Reply-To: <CAKfTPtAOZqnP=sWeJuqFAqrGT8vLQZKj+tpqiwOTG37paXoGjg@mail.gmail.com>

On 24/04/14 08:30, Vincent Guittot wrote:
> On 23 April 2014 17:26, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>> On 23/04/14 15:46, Vincent Guittot wrote:
>>> On 23 April 2014 13:46, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>>>> Hi,

[...]

> 
> More than the flag that is used for the example, it's about the
> cpumasks, which are inconsistent across CPUs for the same level, and
> the build_sched_domain sequence relies on this consistency to build
> the sched_groups.

Now I'm lost here. So far I thought that, by specifying different cpu
masks per CPU in an sd level, we get the sd level folding functionality
from sd degenerate?

We discussed this here, with an example on TC2 for the GMC level:
https://lkml.org/lkml/2014/3/21/126

Back then I had
  CPU0: cpu_corepower_mask=0-1
  CPU2: cpu_corepower_mask=2
so for the GMC level the cpumasks are inconsistent across CPUs, and it worked.
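
For reference, the kind of asymmetric per-cpu mask function I mean looks
roughly like this (tc2_corepower_mask() is a made-up name for
illustration, and I'm assuming the A15 cluster is socket 0; it's not the
exact code I carry):

/*
 * Sketch only: a GMC-level mask whose span differs between the two
 * TC2 clusters.
 */
static const struct cpumask *tc2_corepower_mask(int cpu)
{
	if (cpu_topology[cpu].socket_id == 0)
		return cpu_coregroup_mask(cpu);	/* CPU0/1: 0-1 */

	return cpumask_of(cpu);			/* CPU2/3/4: the CPU itself */
}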

The header of '[PATCH v4 1/5] sched: rework of sched_domain topology
definition' only mentions the requirement "Then, each level must be a
subset on the next one", and I haven't broken this one with my
GMC/MC/GDIE/DIE set-up.
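
To be concrete, the table I'm experimenting with on top of this patch
looks roughly like the sketch below. GDIE plus the cpu_gdie_mask() and
cpu_gdie_flags() helpers are my additions and the names are made up;
cpu_gdie_flags() only stands in for whatever SD_SHARE_*FOO* ends up
being:

/*
 * Made-up helper: same span as MC on CPU0/1 (socket 0), so GDIE
 * degenerates there; all CPUs on CPU2/3/4, so DIE degenerates there
 * instead and GDIE stays as the top level carrying the flag.
 */
static const struct cpumask *cpu_gdie_mask(int cpu)
{
	if (cpu_topology[cpu].socket_id == 0)
		return cpu_coregroup_mask(cpu);		/* 0-1 */

	return cpu_cpu_mask(cpu);			/* 0-4 */
}

/* Made-up flags helper; the SD_SHARE_*FOO* flag would go here. */
static int cpu_gdie_flags(void)
{
	return 0;
}

static struct sched_domain_topology_level arm_tc2_topology[] = {
	{ cpu_corepower_mask, cpu_corepower_flags, SD_INIT_NAME(GMC) },
	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
	{ cpu_gdie_mask, cpu_gdie_flags, SD_INIT_NAME(GDIE) },
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};

So each level's span stays a subset of the next one on every CPU; the
only per-CPU difference is which adjacent pair of levels ends up with
identical spans.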

Am I missing something else here?

> 
>> Essentially what I want to do is bind an SD_SHARE_*FOO* flag to the GDIE
>> related sd's of CPU2/3/4 and not to the DIE related sd's of CPU0/1.
>>
>> So far I thought that I can achieve this by getting rid of the GDIE sd
>> level for CPU0/1 (by choosing the cpu_foo_mask() function such that it
>> returns the same cpu mask as its child sd level (MC)) and of the DIE
>> sd level for CPU2/3/4 (because there it returns the same cpu mask as
>> its child sd level (GDIE)). This will let sd degenerate do its job of
>> folding sd levels, which it does. The only problem I have is that the
>> groups are not created correctly any more.
>>
>> I don't see right now how the flag SD_SHARE_FOO affects the code in
>> get_group()/build_sched_groups().
>>
>> Think of SD_SHARE_FOO as something I would like to have for all sd's
>> of the CPUs of cluster 1 (CPU2/3/4), and not of cluster 0 (CPU0/1), in
>> the sd level where each CPU sees two groups (group0 containing CPU0/1
>> and group1 containing CPU2/3/4, or vice versa) (GDIE/DIE).
> 
> I'm not sure that it's feasible, because it's not possible from a
> topology pov to have different flags if the span includes all cpus.
> Could you give us more details about what you want to achieve with
> this flag?

IMHO, the flag is not important for this discussion.  OTOH, information
like "you can't use the sd degenerate functionality to fold adjacent sd
levels (GFOO/FOO) on an sd level which spans all CPUs" would be.  I want
to make sure we understand the limitations of folding adjacent sd levels
based on per-CPU differences in the return values of the cpu_mask
functions.
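
Just to spell out what I'm relying on: folds_into_child() below is only
a sketch of how I read the folding condition in sd_parent_degenerate()
(kernel/sched/core.c), simplified, not the exact code.

/*
 * A parent sd level gets folded when it spans the same CPUs as its
 * child and carries no flag the child lacks.
 */
static int folds_into_child(struct sched_domain *sd,
			    struct sched_domain *parent)
{
	if (!cpumask_equal(sched_domain_span(sd),
			   sched_domain_span(parent)))
		return 0;	/* different spans: keep both levels */

	if (~sd->flags & parent->flags)
		return 0;	/* parent adds flags: keep it */

	return 1;		/* parent is redundant, fold it */
}

If there is an additional constraint for levels which span all CPUs
(i.e. the per-cpu mask trick works at GMC/MC but not at GDIE/DIE), then
that's exactly the limitation I would like to see spelled out.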

-- Dietmar

[...]

Thread overview: 25+ messages
2014-04-11  9:44 [PATCH v4 0/5] rework sched_domain topology description Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 1/5] sched: rework of sched_domain topology definition Vincent Guittot
2014-04-18 10:56   ` Peter Zijlstra
2014-04-18 11:34     ` [PATCH] fix: " Vincent Guittot
2014-04-18 11:39       ` Peter Zijlstra
2014-04-18 11:34     ` [PATCH v4 1/5] " Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 2/5] sched: s390: create a dedicated topology table Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 3/5] sched: powerpc: " Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 4/5] sched: add a new SD_SHARE_POWERDOMAIN for sched_domain Vincent Guittot
2014-04-18 10:58   ` Peter Zijlstra
2014-04-18 11:54     ` [PATCH] fix: sched: rework of sched_domain topology definition Vincent Guittot
2014-04-18 11:54     ` [PATCH v4 4/5] sched: add a new SD_SHARE_POWERDOMAIN for sched_domain Vincent Guittot
2014-04-11  9:44 ` [PATCH v4 5/5] sched: ARM: create a dedicated scheduler topology table Vincent Guittot
2014-04-23 11:46   ` Dietmar Eggemann
2014-04-23 14:46     ` Vincent Guittot
2014-04-23 15:26       ` Dietmar Eggemann
2014-04-24  7:30         ` Vincent Guittot
2014-04-24 12:48           ` Dietmar Eggemann [this message]
2014-04-25  7:45             ` Vincent Guittot
2014-04-25 15:55               ` Dietmar Eggemann
2014-04-25 16:04               ` Peter Zijlstra
2014-04-25 16:05                 ` Peter Zijlstra
2014-04-12 12:56 ` [PATCH v4 0/5] rework sched_domain topology description Dietmar Eggemann
2014-04-14  7:29   ` Vincent Guittot
2014-04-15  7:53   ` Peter Zijlstra
