* [PATCH 0/4] Mitigate inconsistent NUMA imbalance behaviour
@ 2022-05-11 14:30 Mel Gorman
  2022-05-11 14:30 ` [PATCH 1/4] sched/numa: Initialise numa_migrate_retry Mel Gorman
                   ` (4 more replies)
  0 siblings, 5 replies; 26+ messages in thread
From: Mel Gorman @ 2022-05-11 14:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, Aubrey Li,
	LKML, Mel Gorman

A problem was reported privately related to inconsistent performance of
NAS when parallelised with MPICH. The root of the problem is that the
initial placement is unpredictable and there can be a larger imbalance
than expected between NUMA nodes. As there is spare capacity and the faults
are local, the imbalance persists for a long time and performance suffers.

This is not 100% an "allowed imbalance" problem as setting the allowed
imbalance to 0 does not fix the issue, but the allowed imbalance
contributes to the performance problem. The unpredictable behaviour was most recently
introduced by commit c6f886546cb8 ("sched/fair: Trigger the update of
blocked load on newly idle cpu").

With MPICH, mpirun forks hydra_pmi_proxy helpers that go to sleep before
execing the target workload. As the new tasks are sleeping, the potential
imbalance is not observed as idle_cpus does not reflect the tasks that
will be running in the near future. How bad the problem is depends on the
timing of when fork happens and whether the new tasks are running yet.
Consequently, a large initial imbalance may not be detected until the
workload is fully running. Once running, NUMA Balancing picks the preferred
node based on locality and runtime load balancing often ignores the tasks
as can_migrate_task() fails for either locality or task_hot reasons and
instead picks unrelated tasks.

This is the min, max, range and mean of the run time for mg.D,
parallelised by MPICH using ~25% of the CPUs of a 2-socket machine
(80 CPUs, 16 active due to limitations of mg.D).

v5.3                         Min  95.84 Max  96.55 Range   0.71 Mean  96.16
v5.7                         Min  95.44 Max  96.51 Range   1.07 Mean  96.14
v5.8                         Min  96.02 Max 197.08 Range 101.06 Mean 154.70
v5.12                        Min 104.45 Max 111.03 Range   6.58 Mean 105.94
v5.13                        Min 104.38 Max 170.37 Range  65.99 Mean 117.35
v5.13-revert-c6f886546cb8    Min 104.40 Max 110.70 Range   6.30 Mean 105.68 
v5.18rc4-baseline            Min 104.46 Max 169.04 Range  64.58 Mean 130.49
v5.18rc4-revert-c6f886546cb8 Min 113.98 Max 117.29 Range   3.31 Mean 114.71
v5.18rc4-this_series         Min  95.24 Max 175.33 Range  80.09 Mean 108.91
v5.18rc4-this_series+revert  Min  95.24 Max  99.87 Range   4.63 Mean  96.54

This shows that we've had unpredictable performance for a long time for
this load. Instability was introduced somewhere between v5.7 and v5.8,
fixed in v5.12 and broken again since v5.13.  The revert against 5.13
and 5.18-rc4 shows that c6f886546cb8 is the primary source of instability
although the best case is still worse than 5.7.

This series addresses the allowed imbalance problems to get the peak
performance back to 5.7, although only some of the time due to the
instability problem. The series plus the revert is stable, with
slightly better peak performance and similar average performance. I'm
not convinced commit c6f886546cb8 is wrong but I haven't isolated exactly
why it's unstable, so for now I'm just noting that it has an issue.

Patch 1 initialises numa_migrate_retry. While this resolves itself
	eventually, it is unpredictable early in the lifetime of
	a task.

Patch 2 will not swap NUMA tasks in the same NUMA group or without
	a NUMA group if there is spare capacity. Swapping is just
	punishing one task to help another.

Patch 3 fixes an issue where a larger imbalance can be created at
	fork time than would be allowed at run time. This behaviour
	can help some workloads that are short lived and prefer
	to remain local but it punishes long-lived tasks that are
	memory intensive.

Patch 4 adjusts the threshold where a NUMA imbalance is allowed to
	better approximate the number of memory channels, at least
	for x86-64.

 kernel/sched/fair.c     | 59 ++++++++++++++++++++++++++---------------
 kernel/sched/topology.c | 23 ++++++++++------
 2 files changed, 53 insertions(+), 29 deletions(-)

-- 
2.34.1


* [PATCH v2 0/4] Mitigate inconsistent NUMA imbalance behaviour
@ 2022-05-20 10:35 Mel Gorman
  2022-05-20 10:35 ` [PATCH 4/4] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels Mel Gorman
  0 siblings, 1 reply; 26+ messages in thread
From: Mel Gorman @ 2022-05-20 10:35 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider,
	K Prateek Nayak, Aubrey Li, Ying Huang, LKML, Mel Gorman

Changes since V1
o Consolidate [allow|adjust]_numa_imbalance			(peterz)
o #ifdefs around NUMA-specific pieces to build arc-allyesconfig	(lkp)

A problem was reported privately related to inconsistent performance of
NAS when parallelised with MPICH. The root of the problem is that the
initial placement is unpredictable and there can be a larger imbalance
than expected between NUMA nodes. As there is spare capacity and the faults
are local, the imbalance persists for a long time and performance suffers.

This is not 100% an "allowed imbalance" problem as setting the allowed
imbalance to 0 does not fix the issue, but the allowed imbalance
contributes to the performance problem. The unpredictable behaviour was most recently
introduced by commit c6f886546cb8 ("sched/fair: Trigger the update of
blocked load on newly idle cpu").

With MPICH, mpirun forks hydra_pmi_proxy helpers that go to sleep before
execing the target workload. As the new tasks are sleeping, the potential
imbalance is not observed as idle_cpus does not reflect the tasks that
will be running in the near future. How bad the problem is depends on the
timing of when fork happens and whether the new tasks are running yet.
Consequently, a large initial imbalance may not be detected until the
workload is fully running. Once running, NUMA Balancing picks the preferred
node based on locality and runtime load balancing often ignores the tasks
as can_migrate_task() fails for either locality or task_hot reasons and
instead picks unrelated tasks.

This is the min, max, range and mean of the run time for mg.D,
parallelised by MPICH using ~25% of the CPUs of a 2-socket machine
(80 CPUs, 16 active due to limitations of mg.D).

v5.3                                     Min  95.84 Max  96.55 Range   0.71 Mean  96.16
v5.7                                     Min  95.44 Max  96.51 Range   1.07 Mean  96.14
v5.8                                     Min  96.02 Max 197.08 Range 101.06 Mean 154.70
v5.12                                    Min 104.45 Max 111.03 Range   6.58 Mean 105.94
v5.13                                    Min 104.38 Max 170.37 Range  65.99 Mean 117.35
v5.13-revert-c6f886546cb8                Min 104.40 Max 110.70 Range   6.30 Mean 105.68
v5.18rc4-baseline                        Min 110.78 Max 169.84 Range  59.06 Mean 131.22
v5.18rc4-revert-c6f886546cb8             Min 113.98 Max 117.29 Range   3.31 Mean 114.71
v5.18rc4-this_series                     Min  95.56 Max 163.97 Range  68.41 Mean 105.39
v5.18rc4-this_series-revert-c6f886546cb8 Min  95.56 Max 104.86 Range   9.30 Mean  97.00

This shows that we've had unpredictable performance for a long time for
this load. Instability was introduced somewhere between v5.7 and v5.8,
fixed in v5.12 and broken again since v5.13.  The revert against 5.13
and 5.18-rc4 shows that c6f886546cb8 is the primary source of instability
although the best case is still worse than 5.7.

This series addresses the allowed imbalance problems to get the peak
performance back to 5.7, although only some of the time due to the
instability problem. The series plus the revert is stable, with
slightly better peak performance and similar average performance. I'm
not convinced commit c6f886546cb8 is wrong but I haven't isolated exactly
why it's unstable. I'm just noting it has an issue for now.

Patch 1 initialises numa_migrate_retry. While this resolves itself
	eventually, it is unpredictable early in the lifetime of
	a task.

Patch 2 will not swap NUMA tasks in the same NUMA group or without
	a NUMA group if there is spare capacity. Swapping is just
	punishing one task to help another.

Patch 3 fixes an issue where a larger imbalance can be created at
	fork time than would be allowed at run time. This behaviour
	can help some workloads that are short lived and prefer
	to remain local but it punishes long-lived tasks that are
	memory intensive.

Patch 4 adjusts the threshold where a NUMA imbalance is allowed to
	better approximate the number of memory channels, at least
	for x86-64.

 kernel/sched/fair.c     | 91 +++++++++++++++++++++++++----------------
 kernel/sched/topology.c | 23 +++++++----
 2 files changed, 70 insertions(+), 44 deletions(-)

-- 
2.34.1


Thread overview: 26+ messages
2022-05-11 14:30 [PATCH 0/4] Mitigate inconsistent NUMA imbalance behaviour Mel Gorman
2022-05-11 14:30 ` [PATCH 1/4] sched/numa: Initialise numa_migrate_retry Mel Gorman
2022-05-11 14:30 ` [PATCH 2/4] sched/numa: Do not swap tasks between nodes when spare capacity is available Mel Gorman
2022-05-11 14:30 ` [PATCH 3/4] sched/numa: Apply imbalance limitations consistently Mel Gorman
2022-05-18  9:24   ` [sched/numa] bb2dee337b: unixbench.score -11.2% regression kernel test robot
2022-05-18  9:24     ` kernel test robot
2022-05-18 15:22     ` Mel Gorman
2022-05-18 15:22       ` Mel Gorman
2022-05-19  7:54       ` ying.huang
2022-05-19  7:54         ` ying.huang
2022-05-20  6:44         ` [LKP] " Ying Huang
2022-05-20  6:44           ` Ying Huang
2022-05-18  9:31   ` [PATCH 3/4] sched/numa: Apply imbalance limitations consistently Peter Zijlstra
2022-05-18 10:46     ` Mel Gorman
2022-05-18 13:59       ` Peter Zijlstra
2022-05-18 15:39         ` Mel Gorman
2022-05-11 14:30 ` [PATCH 4/4] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels Mel Gorman
2022-05-18  9:41   ` Peter Zijlstra
2022-05-18 11:15     ` Mel Gorman
2022-05-18 14:05       ` Peter Zijlstra
2022-05-18 17:06         ` Mel Gorman
2022-05-19  9:29           ` Mel Gorman
2022-05-20  4:58 ` [PATCH 0/4] Mitigate inconsistent NUMA imbalance behaviour K Prateek Nayak
2022-05-20 10:18   ` Mel Gorman
2022-05-20 15:17     ` K Prateek Nayak
2022-05-20 10:35 [PATCH v2 " Mel Gorman
2022-05-20 10:35 ` [PATCH 4/4] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels Mel Gorman
