From: Lauro Venancio <lvenanci@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: lwang@redhat.com, riel@redhat.com, Mike Galbraith <efault@gmx.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/4] sched/topology: the group balance cpu must be a cpu where the group is installed
Date: Tue, 25 Apr 2017 12:56:23 -0300	[thread overview]
Message-ID: <91317113-f1a7-a1c6-812e-cbda5284d404@redhat.com> (raw)
In-Reply-To: <20170425153937.7icdvd7uofqcr2nr@hirez.programming.kicks-ass.net>

On 04/25/2017 12:39 PM, Peter Zijlstra wrote:
> On Tue, Apr 25, 2017 at 05:27:03PM +0200, Peter Zijlstra wrote:
>> On Tue, Apr 25, 2017 at 05:22:36PM +0200, Peter Zijlstra wrote:
>>> On Tue, Apr 25, 2017 at 05:12:00PM +0200, Peter Zijlstra wrote:
>>>> But I'll first try and figure out why I'm not having empty masks.
>>> Ah, so this is before all the degenerate stuff, so there's a bunch of
>>> redundant domains below that make it work -- and there always will be,
>>> unless FORCE_SD_OVERLAP.
>>>
>>> Now I wonder what triggered it.. let me put it back.
>> Ah! the asymmetric setup, where @sibling is entirely uninitialized for
>> the top domain.
>>
> And it still works correctly too:
>
>
> [    0.078756] XXX 1 NUMA 
> [    0.079005] XXX 2 NUMA 
> [    0.080003] XXY 0-2:0
> [    0.081007] XXX 1 NUMA 
> [    0.082005] XXX 2 NUMA 
> [    0.083003] XXY 1-3:3
> [    0.084032] XXX 1 NUMA 
> [    0.085003] XXX 2 NUMA 
> [    0.086003] XXY 1-3:3
> [    0.087015] XXX 1 NUMA 
> [    0.088003] XXX 2 NUMA 
> [    0.089002] XXY 0-2:0
>
>
> [    0.090007] CPU0 attaching sched-domain:
> [    0.091002]  domain 0: span 0-2 level NUMA
> [    0.092002]   groups: 0 (mask: 0), 1, 2
> [    0.093002]   domain 1: span 0-3 level NUMA
> [    0.094002]    groups: 0-2 (mask: 0) (cpu_capacity: 3072), 1-3 (cpu_capacity: 3072)
> [    0.095005] CPU1 attaching sched-domain:
> [    0.096003]  domain 0: span 0-3 level NUMA
> [    0.097002]   groups: 1 (mask: 1), 2, 3, 0
> [    0.098004] CPU2 attaching sched-domain:
> [    0.099002]  domain 0: span 0-3 level NUMA
> [    0.100002]   groups: 2 (mask: 2), 3, 0, 1
> [    0.101004] CPU3 attaching sched-domain:
> [    0.102002]  domain 0: span 1-3 level NUMA
> [    0.103002]   groups: 3 (mask: 3), 1, 2
> [    0.104002]   domain 1: span 0-3 level NUMA
> [    0.105002]    groups: 1-3 (mask: 3) (cpu_capacity: 3072), 0-2 (cpu_capacity: 3072)
>
>
> static void
> build_group_mask(struct sched_domain *sd, struct sched_group *sg, struct cpumask *mask)
> {
>         const struct cpumask *sg_span = sched_group_cpus(sg);
>         struct sd_data *sdd = sd->private;
>         struct sched_domain *sibling;
>         int i, funny = 0;
>
>         cpumask_clear(mask);
>
>         for_each_cpu(i, sg_span) {
>                 sibling = *per_cpu_ptr(sdd->sd, i);
>
>                 if (!sibling->child) {
>                         funny = 1;
>                         printk("XXX %d %s %*pbl\n", i, sd->name, cpumask_pr_args(sched_domain_span(sibling)));
>                         continue;
>                 }
>
>                 /* If we would not end up here, we can't continue from here */
>                 if (!cpumask_equal(sg_span, sched_domain_span(sibling->child)))
>                         continue;
>
>                 cpumask_set_cpu(i, mask);
>         }
>
>         if (funny) {
>                 printk("XXY %*pbl:%*pbl\n",
>                                 cpumask_pr_args(sg_span),
>                                 cpumask_pr_args(mask));
>         }
> }
>
>
> So that will still get the right balance cpu and thus sgc.
>
> Another thing I've been thinking about; I think we can do away with the
> kzalloc() in build_group_from_child_sched_domain() and use the sdd->sg
> storage.
I considered this too. I decided not to change it because I was not
sure whether the kzalloc() was there for performance reasons.
Currently, all groups are allocated on the NUMA node where they are
used. If we used the sdd->sg storage, we could end up with groups
allocated on one NUMA node but used on another.
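
For illustration, each overlap group copy is currently allocated
node-locally, roughly like this (a sketch from memory of
build_group_from_child_sched_domain(), not the exact code):

        /* sketch: place the group copy on the node of @cpu */
        sg = kzalloc_node(sizeof(struct sched_group) + cpumask_size(),
                          GFP_KERNEL, cpu_to_node(cpu));

With sdd->sg there is only one instance of each group, so cpus on
other nodes referencing it would touch remote memory during load
balancing.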
>
> I just didn't want to move too much code around again, and ideally put
> more assertions in place to catch bad stuff; I just haven't had a good
> time thinking of good assertions :/
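
One candidate assertion, as an untested sketch: the mask built in
build_group_mask() is what later selects the group balance cpu, so it
must never end up empty. Something like

        /* untested sketch: an empty mask means no valid balance cpu */
        WARN_ON_ONCE(cpumask_empty(mask));

at the end of that function would directly catch the case this patch
is about.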
