From: Mel Gorman <mgorman@techsingularity.net>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>, Phil Auld <pauld@redhat.com>,
	Valentin Schneider <valentin.schneider@arm.com>,
	Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	Quentin Perret <quentin.perret@arm.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Morten Rasmussen <Morten.Rasmussen@arm.com>,
	Hillf Danton <hdanton@sina.com>, Parth Shah <parth@linux.ibm.com>,
	Rik van Riel <riel@surriel.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] sched, fair: Allow a small degree of load imbalance between SD_NUMA domains v2
Date: Wed, 8 Jan 2020 18:03:17 +0000	[thread overview]
Message-ID: <20200108180317.GM3466@techsingularity.net> (raw)
In-Reply-To: <20200108164657.GA16425@linaro.org>

On Wed, Jan 08, 2020 at 05:46:57PM +0100, Vincent Guittot wrote:
> > Allowing just 1 extra task would work for netperf in some cases, except when
> > softirq is involved. It would partially work for IO on ext4 as it's only
> > communicating with one journal thread, but it is a bit more borderline for
> > XFS due to workqueue usage. XFS is not a massive concern in this context as
> > the workqueue is close to the IO issuer and short-lived, so I don't think
> > it would crop up much for load balancing, unlike ext4 where jbd2 can be
> > very active.
> > 
> > If v4 of the patch fails to meet approval then I'll try a patch that
> 
> My main concern with v4 was the mismatch between the computed value and
> the goal of not overloading the LLCs.
> 

Fair enough.

> > allows a hard-coded imbalance of 2 tasks (one communicating task and
> 
> If there is no good way to compute the allowed imbalance, a hard-coded
> value of 2 is probably a simple value to start with.

Indeed.

> 
> > one kthread) regardless of NUMA domain span up to 50% of utilisation
> 
> Are you sure that's necessary? This degree of imbalance already applies
> only if the group has spare capacity.
> 
> something like
> 
> +               /* Consider allowing a small imbalance between NUMA groups */
> +               if (env->sd->flags & SD_NUMA) {
> +
> +                       /*
> +                        * Until we find a good way to compute an acceptable
> +                        * degree of imbalance linked to the system topology,
> +                        * one that will not impact memory bandwidth and
> +                        * latency, let's start with a fixed small value.
> +                        */
> +                       imbalance_adj = 2;
> +
> +                       /*
> +                        * Ignore small imbalances when the busiest group has
> +                        * low utilisation.
> +                        */
> +                       env->imbalance -= min(env->imbalance, imbalance_adj);
> +               }
> 

This is more or less what I had in mind, with the exception that the "low
utilisation" part of the comment would go away. The 50% utilisation cut-off
may be unnecessary and was based simply on the idea that, at that point,
memory bandwidth, HT considerations or both would be the dominating factors.
I can leave the check out and add it in as a separate patch if it proves to
be necessary.
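
If the check does prove useful, I was thinking of something along these
lines. It is an untested sketch only, gating the adjustment on the busiest
group sitting below roughly half utilisation; the exact sg_lb_stats fields
and the threshold are guesses at this point:

		/* Consider allowing a small imbalance between NUMA groups */
		if (env->sd->flags & SD_NUMA) {
			imbalance_adj = 2;

			/*
			 * Only ignore the small imbalance while the busiest
			 * group is below ~50% utilisation. Past that point,
			 * memory bandwidth and HT contention are likely to
			 * dominate anyway. Untested sketch, exact fields and
			 * cut-off may differ.
			 */
			if (busiest->group_util * 2 < busiest->group_capacity)
				env->imbalance -= min(env->imbalance,
						      imbalance_adj);
		}

Whether 50% is the right cut-off would need the usual benchmarking to
justify, which is why I'd rather keep it as a separate patch.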

-- 
Mel Gorman
SUSE Labs


Thread overview: 28+ messages
2019-12-20  8:42 [PATCH] sched, fair: Allow a small degree of load imbalance between SD_NUMA domains v2 Mel Gorman
2019-12-20 12:40 ` Valentin Schneider
2019-12-20 14:22   ` Mel Gorman
2019-12-20 15:32     ` Valentin Schneider
2019-12-21 11:25   ` Mel Gorman
2019-12-22 12:00 ` Srikar Dronamraju
2019-12-23 13:31 ` Vincent Guittot
2019-12-23 13:41   ` Vincent Guittot
2020-01-03 14:31   ` Mel Gorman
2020-01-06 13:55     ` Vincent Guittot
2020-01-06 14:52       ` Mel Gorman
2020-01-07  8:38         ` Vincent Guittot
2020-01-07  9:56           ` Mel Gorman
2020-01-07 11:17             ` Vincent Guittot
2020-01-07 11:56               ` Mel Gorman
2020-01-07 16:00                 ` Vincent Guittot
2020-01-07 20:24                   ` Mel Gorman
2020-01-08  8:25                     ` Vincent Guittot
2020-01-08  8:49                       ` Mel Gorman
2020-01-08 13:18                     ` Peter Zijlstra
2020-01-08 14:03                       ` Mel Gorman
2020-01-08 16:46                         ` Vincent Guittot
2020-01-08 18:03                           ` Mel Gorman [this message]
2020-01-07 11:22             ` Peter Zijlstra
2020-01-07 11:42               ` Mel Gorman
2020-01-07 12:29                 ` Peter Zijlstra
2020-01-07 12:28               ` Peter Zijlstra
2020-01-07 19:26             ` Phil Auld
