From: Rik van Riel <riel@redhat.com>
To: Lauro Ramos Venancio <lvenanci@redhat.com>, linux-kernel@vger.kernel.org
Cc: lwang@redhat.com, Mike Galbraith <efault@gmx.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>
Subject: Re: [RFC 3/3] sched/topology: Different sched groups must not have the same balance cpu
Date: Thu, 13 Apr 2017 11:27:49 -0400
Message-ID: <1492097269.2896.8.camel@redhat.com>
In-Reply-To: <1492091769-19879-4-git-send-email-lvenanci@redhat.com>

On Thu, 2017-04-13 at 10:56 -0300, Lauro Ramos Venancio wrote:
> Currently, the group balance cpu is the group's first CPU. But with
> overlapping groups, two different groups can have the same first CPU.
> 
> This patch uses the group mask to mark all the CPUs that have a
> particular group as its main sched group. The group balance cpu is the
> first group CPU that is also in the mask.
> 
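
(For reference while reviewing: as I read the changelog, the balance cpu
selection ends up being roughly the existing group_balance_cpu() helper,
which from memory reads something like the sketch below; this is not
quoted from the patch and the exact accessors may differ in the tree it
is against.)

	/* sketch from memory, not taken from this patch */
	int group_balance_cpu(struct sched_group *sg)
	{
		/* first CPU of the group that is also set in the group's mask */
		return cpumask_first_and(sched_group_cpus(sg), sched_group_mask(sg));
	}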

This is not your fault, but this code is really hard
to understand.

Your comments tell me what the code does, but not
really why. 

> +++ b/kernel/sched/topology.c
> @@ -477,27 +477,31 @@ enum s_alloc {
>  };
>  
>  /*
> - * Build an iteration mask that can exclude certain CPUs from the upwards
> - * domain traversal.
> + * An overlap sched group may not be present in all CPUs that compose the
> + * group. So build the mask, marking all the group CPUs where it is present.
>   *
>   * Asymmetric node setups can result in situations where the domain tree is of
>   * unequal depth, make sure to skip domains that already cover the entire
>   * range.
> - *
> - * In that case build_sched_domains() will have terminated the iteration early
> - * and our sibling sd spans will be empty. Domains should always include the
> - * CPU they're built on, so check that.
>   */

Why are we doing this?

Could the comment explain why things need to be
this way?
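
If I understand the changelog right, the "why" is the first-CPU collision
between overlapping groups. A toy illustration, completely outside the
kernel and with made-up bitmask values, of how picking "first CPU that is
also in the group mask" breaks that tie:

	#include <stdio.h>

	/* lowest set bit = "first CPU" of a cpumask-like word */
	static int first_cpu(unsigned long mask)
	{
		return __builtin_ctzl(mask);
	}

	int main(void)
	{
		unsigned long group_a = 0x7;	/* CPUs {0,1,2} */
		unsigned long group_b = 0xb;	/* CPUs {0,1,3}, overlaps group_a */

		/* hypothetical per-group masks: the CPUs that have that group
		 * as their main sched group, as the changelog describes */
		unsigned long mask_a = 0x1;	/* {0} */
		unsigned long mask_b = 0x2;	/* {1} */

		printf("first CPU:   A=%d B=%d (collide)\n",
		       first_cpu(group_a), first_cpu(group_b));
		printf("balance CPU: A=%d B=%d (distinct)\n",
		       first_cpu(group_a & mask_a), first_cpu(group_b & mask_b));
		return 0;
	}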


> -	for_each_cpu(i, span) {
> +	for_each_cpu(i, sg_span) {
>  		sibling = *per_cpu_ptr(sdd->sd, i);
> -		if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
> +
> +		/*
> +		 * Asymmetric node setups: skip domains that are already
> +		 * done.
> +		 */
> +		if (!sibling->groups)
> +			continue;
> +

What does this mean?

I would really like it if this code was a little
better documented.  This is not something I can
put on you for most of the code, but I can at least
ask it for the code you are adding :)

The same goes for the rest of this patch.


Thread overview: 29+ messages
2017-04-13 13:56 [RFC 0/3] sched/topology: fix sched groups on NUMA machines with mesh topology Lauro Ramos Venancio
2017-04-13 13:56 ` [RFC 1/3] sched/topology: Refactor function build_overlap_sched_groups() Lauro Ramos Venancio
2017-04-13 14:50   ` Rik van Riel
2017-05-15  9:02   ` [tip:sched/core] " tip-bot for Lauro Ramos Venancio
2017-04-13 13:56 ` [RFC 2/3] sched/topology: fix sched groups on NUMA machines with mesh topology Lauro Ramos Venancio
2017-04-13 15:16   ` Rik van Riel
2017-04-13 15:48   ` Peter Zijlstra
2017-04-13 20:21     ` Lauro Venancio
2017-04-13 21:06       ` Lauro Venancio
2017-04-13 23:38         ` Rik van Riel
2017-04-14 10:48           ` Peter Zijlstra
2017-04-14 11:38   ` Peter Zijlstra
2017-04-14 12:20     ` Peter Zijlstra
2017-05-15  9:03       ` [tip:sched/core] sched/fair, cpumask: Export for_each_cpu_wrap() tip-bot for Peter Zijlstra
2017-05-17 10:53         ` hackbench vs select_idle_sibling; was: " Peter Zijlstra
2017-05-17 12:46           ` Matt Fleming
2017-05-17 14:49           ` Chris Mason
2017-05-19 15:00           ` Matt Fleming
2017-06-05 13:00             ` Matt Fleming
2017-06-06  9:21               ` Peter Zijlstra
2017-06-09 17:52                 ` Chris Mason
2017-06-08  9:22           ` [tip:sched/core] sched/core: Implement new approach to scale select_idle_cpu() tip-bot for Peter Zijlstra
2017-04-14 16:58     ` [RFC 2/3] sched/topology: fix sched groups on NUMA machines with mesh topology Peter Zijlstra
2017-04-17 14:40       ` Lauro Venancio
2017-04-13 13:56 ` [RFC 3/3] sched/topology: Different sched groups must not have the same balance cpu Lauro Ramos Venancio
2017-04-13 15:27   ` Rik van Riel [this message]
2017-04-14 16:49   ` Peter Zijlstra
2017-04-17 15:34     ` Lauro Venancio
2017-04-18 12:32       ` Peter Zijlstra
