From: Peter Zijlstra <peterz@infradead.org>
To: Lauro Ramos Venancio <lvenanci@redhat.com>
Cc: linux-kernel@vger.kernel.org, lwang@redhat.com, riel@redhat.com,
Mike Galbraith <efault@gmx.de>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@kernel.org>
Subject: Re: [RFC 3/3] sched/topology: Different sched groups must not have the same balance cpu
Date: Fri, 14 Apr 2017 18:49:09 +0200
Message-ID: <20170414164909.tfybszncwkm4yxap@hirez.programming.kicks-ass.net>
In-Reply-To: <1492091769-19879-4-git-send-email-lvenanci@redhat.com>
On Thu, Apr 13, 2017 at 10:56:09AM -0300, Lauro Ramos Venancio wrote:
> Currently, the group balance cpu is the group's first CPU. But with
> overlapping groups, two different groups can have the same first CPU.
>
> This patch uses the group mask to mark all the CPUs that have a
> particular group as its main sched group. The group balance cpu is the
> first group CPU that is also in the mask.
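To make the quoted rule concrete, here is a minimal, self-contained
sketch. The CPU sets are made up for illustration and plain bitmasks
stand in for the kernel's cpumask API; it only demonstrates that two
overlapping groups can share their first CPU, and that the mask-based
rule is what tells them apart:

    #include <stdio.h>
    #include <strings.h>    /* ffs() */

    /* Hypothetical contents; CPU n is bit n. */
    #define GROUP_A 0x0Bu   /* group A spans CPUs {0,1,3} */
    #define GROUP_B 0x0Du   /* group B spans CPUs {0,2,3} */
    #define MASK_A  0x03u   /* CPUs {0,1} have A as their main group */
    #define MASK_B  0x0Cu   /* CPUs {2,3} have B as their main group */

    /* Old rule: the group's first CPU. */
    static int first_cpu(unsigned int group)
    {
        return ffs(group) - 1;
    }

    /* New rule: first CPU in both the group and its mask. */
    static int balance_cpu(unsigned int group, unsigned int mask)
    {
        return ffs(group & mask) - 1;
    }

    int main(void)
    {
        /* Both groups begin at CPU 0, so the old rule hands
         * them the same balance cpu... */
        printf("first-cpu rule: A=%d B=%d\n",
               first_cpu(GROUP_A), first_cpu(GROUP_B));  /* 0, 0 */

        /* ...while the mask-based rule keeps them distinct. */
        printf("mask rule:      A=%d B=%d\n",
               balance_cpu(GROUP_A, MASK_A),
               balance_cpu(GROUP_B, MASK_B));            /* 0, 2 */
        return 0;
    }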
Please give a NUMA configuration and CPU number where this goes wrong.
Because only the first group of a domain matters, and with the other
thing fixed, I'm not immediately seeing where we go wobbly.
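For context, "only the first group of a domain matters" refers to the
balance-cpu check in the load balancer being applied only to
sd->groups, a CPU's own (local) group. The helper as it looked around
that time, paraphrased from memory, so treat it as a sketch rather
than the exact source:

    int group_balance_cpu(struct sched_group *sg)
    {
        /* First CPU that is both in the group and in its mask. */
        return cpumask_first_and(sched_group_cpus(sg),
                                 sched_group_mask(sg));
    }

should_we_balance() consults this only for env->sd->groups, so at
least in the balancing path a wrong balance cpu in a remote group
would not, by itself, change the decision.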
Thread overview: 29+ messages
2017-04-13 13:56 [RFC 0/3] sched/topology: fix sched groups on NUMA machines with mesh topology Lauro Ramos Venancio
2017-04-13 13:56 ` [RFC 1/3] sched/topology: Refactor function build_overlap_sched_groups() Lauro Ramos Venancio
2017-04-13 14:50 ` Rik van Riel
2017-05-15 9:02 ` [tip:sched/core] " tip-bot for Lauro Ramos Venancio
2017-04-13 13:56 ` [RFC 2/3] sched/topology: fix sched groups on NUMA machines with mesh topology Lauro Ramos Venancio
2017-04-13 15:16 ` Rik van Riel
2017-04-13 15:48 ` Peter Zijlstra
2017-04-13 20:21 ` Lauro Venancio
2017-04-13 21:06 ` Lauro Venancio
2017-04-13 23:38 ` Rik van Riel
2017-04-14 10:48 ` Peter Zijlstra
2017-04-14 11:38 ` Peter Zijlstra
2017-04-14 12:20 ` Peter Zijlstra
2017-05-15 9:03 ` [tip:sched/core] sched/fair, cpumask: Export for_each_cpu_wrap() tip-bot for Peter Zijlstra
2017-05-17 10:53 ` hackbench vs select_idle_sibling; was: " Peter Zijlstra
2017-05-17 12:46 ` Matt Fleming
2017-05-17 14:49 ` Chris Mason
2017-05-19 15:00 ` Matt Fleming
2017-06-05 13:00 ` Matt Fleming
2017-06-06 9:21 ` Peter Zijlstra
2017-06-09 17:52 ` Chris Mason
2017-06-08 9:22 ` [tip:sched/core] sched/core: Implement new approach to scale select_idle_cpu() tip-bot for Peter Zijlstra
2017-04-14 16:58 ` [RFC 2/3] sched/topology: fix sched groups on NUMA machines with mesh topology Peter Zijlstra
2017-04-17 14:40 ` Lauro Venancio
2017-04-13 13:56 ` [RFC 3/3] sched/topology: Different sched groups must not have the same balance cpu Lauro Ramos Venancio
2017-04-13 15:27 ` Rik van Riel
2017-04-14 16:49 ` Peter Zijlstra [this message]
2017-04-17 15:34 ` Lauro Venancio
2017-04-18 12:32 ` Peter Zijlstra