From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Valentin Schneider <valentin.schneider@arm.com>,
	K Prateek Nayak <kprateek.nayak@amd.com>,
	Aubrey Li <aubrey.li@linux.intel.com>,
	Ying Huang <ying.huang@intel.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Mel Gorman <mgorman@techsingularity.net>
Subject: [PATCH 4/4] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels
Date: Fri, 20 May 2022 11:35:19 +0100
Message-ID: <20220520103519.1863-5-mgorman@techsingularity.net>
In-Reply-To: <20220520103519.1863-1-mgorman@techsingularity.net>

For a single LLC per node, a NUMA imbalance is allowed until up to 25%
of the CPUs sharing a node could be active. One intent of the cut-off
is to avoid an imbalance across memory channels, but there is no
topological information available on active memory channels.
Furthermore, there can be differences between nodes depending on the
number of populated DIMMs.

A cut-off of 25% was arbitrary but generally worked. It does have a
severe corner case though: a parallel workload using 25% of all
available CPUs can over-saturate the memory channels. This can happen
when the initial forking of tasks gets pulled towards one node after
early wakeups (e.g. a barrier synchronisation) and the placement is not
quickly corrected by the load balancer. The load balancer may fail to
act quickly because the parallel tasks are considered poor migration
candidates due to locality or cache hotness.

On a range of modern Intel CPUs, 12.5% appears to be a better cut-off
assuming all memory channels are populated, and it is used as the new
cut-off point. A minimum of 1 is specified so that a communicating pair
can remain local even on CPUs with a low number of cores. Modern AMD
CPUs have multiple LLCs per node and are not affected.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/topology.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 810750e62118..2740e245cb37 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2295,23 +2295,30 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 
 				/*
 				 * For a single LLC per node, allow an
-				 * imbalance up to 25% of the node. This is an
-				 * arbitrary cutoff based on SMT-2 to balance
-				 * between memory bandwidth and avoiding
-				 * premature sharing of HT resources and SMT-4
-				 * or SMT-8 *may* benefit from a different
-				 * cutoff.
+				 * imbalance up to 12.5% of the node. This is
+				 * an arbitrary cutoff based on two factors -- SMT and
+				 * memory channels. For SMT-2, the intent is to
+				 * avoid premature sharing of HT resources but
+				 * SMT-4 or SMT-8 *may* benefit from a different
+				 * cutoff. For memory channels, this is a very
+				 * rough estimate of how many channels may be
+				 * active and is based on recent CPUs with
+				 * many cores.
 				 *
 				 * For multiple LLCs, allow an imbalance
 				 * until multiple tasks would share an LLC
 				 * on one node while LLCs on another node
-				 * remain idle.
+				 * remain idle. This assumes that there are
+				 * enough logical CPUs per LLC to avoid SMT
+				 * factors and that there is a correlation
+				 * between LLCs and memory channels.
 				 */
 				nr_llcs = sd->span_weight / child->span_weight;
 				if (nr_llcs == 1)
-					imb = sd->span_weight >> 2;
+					imb = sd->span_weight >> 3;
 				else
 					imb = nr_llcs;
+				imb = max(1U, imb);
 				sd->imb_numa_nr = imb;
 
 				/* Set span based on the first NUMA domain. */
-- 
2.34.1



Thread overview: 22+ messages
2022-05-20 10:35 [PATCH v2 0/4] Mitigate inconsistent NUMA imbalance behaviour Mel Gorman
2022-05-20 10:35 ` [PATCH 1/4] sched/numa: Initialise numa_migrate_retry Mel Gorman
2022-06-13  8:43   ` [tip: sched/core] " tip-bot2 for Mel Gorman
2022-05-20 10:35 ` [PATCH 2/4] sched/numa: Do not swap tasks between nodes when spare capacity is available Mel Gorman
2022-06-13  8:43   ` [tip: sched/core] " tip-bot2 for Mel Gorman
2022-05-20 10:35 ` [PATCH 3/4] sched/numa: Apply imbalance limitations consistently Mel Gorman
2022-05-27  6:21   ` [sched/numa] 5278ba412f: unixbench.score -2.9% regression kernel test robot
2022-05-27  6:21     ` kernel test robot
2022-06-13  8:43   ` [tip: sched/core] sched/numa: Apply imbalance limitations consistently tip-bot2 for Mel Gorman
2022-05-20 10:35 ` Mel Gorman [this message]
2022-06-13  8:43   ` [tip: sched/core] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels tip-bot2 for Mel Gorman
2022-05-24 16:01 ` [PATCH v2 0/4] Mitigate inconsistent NUMA imbalance behaviour Vincent Guittot
2022-05-25 12:49   ` Mel Gorman
2022-05-31 10:26     ` Vincent Guittot
2022-05-31 14:09       ` Mel Gorman
2022-06-08  9:53 ` K Prateek Nayak
  -- strict thread matches above, loose matches on Subject: below --
2022-05-11 14:30 [PATCH " Mel Gorman
2022-05-11 14:30 ` [PATCH 4/4] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels Mel Gorman
2022-05-18  9:41   ` Peter Zijlstra
2022-05-18 11:15     ` Mel Gorman
2022-05-18 14:05       ` Peter Zijlstra
2022-05-18 17:06         ` Mel Gorman
2022-05-19  9:29           ` Mel Gorman
