linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations
@ 2023-12-14 18:07 Srikar Dronamraju
  2023-12-14 18:07 ` [PATCH v5 1/5] powerpc/smp: Enable Asym packing for cores on shared processor Srikar Dronamraju
                   ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Srikar Dronamraju @ 2023-12-14 18:07 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Mark Rutland, Valentin Schneider, Vincent Guittot,
	Srikar Dronamraju, Paul E. McKenney, Peter Zijlstra,
	linux-kernel, Aneesh, Rohan McLure, Nicholas Piggin,
	linuxppc-dev, Josh Poimboeuf

PowerVM systems configured in shared processor mode have some unique
challenges. Some device-tree properties are missing on a shared
processor, so some sched domains may not make sense for shared
processor systems.

Most shared processor systems are over-provisioned. The underlying
PowerVM hypervisor schedules at Big Core granularity. The most recent
Power processors support two almost independent thread groups per Big
Core. Under light load, overall system performance improves if we pack
work onto fewer Big Cores.

Since each thread group is nearly independent, running threads on both
thread groups of an SMT8 core should have minimal adverse impact in
non-over-provisioned scenarios. The changes in this patchset do not
affect the over-provisioned scenario: if there are more threads than
SMT domains, asym_packing does not kick in.
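
Packing is driven by a per-CPU priority. Below is a minimal sketch of
the scheme added in patch 4 of this series (the non-shared-processor
fallback is omitted; threads_per_core is the kernel's existing
per-core thread count):

	int arch_asym_cpu_priority(int cpu)
	{
		/* One priority per core: both thread groups share it. */
		return -cpu / threads_per_core;
	}

With threads_per_core = 8, CPUs 0-7 get priority 0 and CPUs 8-15 get
-1, so under light load work consolidates onto the lowest numbered
Big Cores first.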

System Configuration
type=Shared mode=Uncapped smt=8 lcpu=96 mem=1066409344 kB cpus=96 ent=64.00
That is, a *64 entitled cores / 96 virtual processors* scenario.

lscpu
Architecture:                       ppc64le
Byte Order:                         Little Endian
CPU(s):                             768
On-line CPU(s) list:                0-767
Model name:                         POWER10 (architected), altivec supported
Model:                              2.0 (pvr 0080 0200)
Thread(s) per core:                 8
Core(s) per socket:                 16
Socket(s):                          6
Hypervisor vendor:                  pHyp
Virtualization type:                para
L1d cache:                          6 MiB (192 instances)
L1i cache:                          9 MiB (192 instances)
NUMA node(s):                       6
NUMA node0 CPU(s):                  0-7,32-39,80-87,128-135,176-183,224-231,272-279,320-327,368-375,416-423,464-471,512-519,560-567,608-615,656-663,704-711,752-759
NUMA node1 CPU(s):                  8-15,40-47,88-95,136-143,184-191,232-239,280-287,328-335,376-383,424-431,472-479,520-527,568-575,616-623,664-671,712-719,760-767
NUMA node4 CPU(s):                  64-71,112-119,160-167,208-215,256-263,304-311,352-359,400-407,448-455,496-503,544-551,592-599,640-647,688-695,736-743
NUMA node5 CPU(s):                  16-23,48-55,96-103,144-151,192-199,240-247,288-295,336-343,384-391,432-439,480-487,528-535,576-583,624-631,672-679,720-727
NUMA node6 CPU(s):                  72-79,120-127,168-175,216-223,264-271,312-319,360-367,408-415,456-463,504-511,552-559,600-607,648-655,696-703,744-751
NUMA node7 CPU(s):                  24-31,56-63,104-111,152-159,200-207,248-255,296-303,344-351,392-399,440-447,488-495,536-543,584-591,632-639,680-687,728-735

ebizzy -t 32 -S 200 (5 iterations) Records per second. (Higher is better)
Kernel     N  Min      Max      Median   Avg        Stddev     %Change
6.6.0-rc3  5  3840178  4059268  3978042  3973936.6  84264.456
+patch     5  3768393  3927901  3874994  3854046    71532.926  -3.01692

From lparstat (when the workload stabilized)
Kernel     %user  %sys  %wait  %idle  physc  %entc  lbusy  app    vcsw       phint
6.6.0-rc3  4.16   0.00  0.00   95.84  26.06  40.72  4.16   69.88  276906989  578
+patch     4.16   0.00  0.00   95.83  17.70  27.66  4.17   78.26  70436663   119

ebizzy -t 128 -S 200 (5 iterations) Records per second. (Higher is better)
Kernel     N Min      Max      Median   Avg        Stddev     %Change
6.6.0-rc3  5 5520692  5981856  5717709  5727053.2  176093.2
+patch     5 5305888  6259610  5854590  5843311    375917.03  2.02998

From lparstat (when the workload stabilized)
Kernel     %user  %sys  %wait  %idle  physc  %entc  lbusy  app    vcsw       phint
6.6.0-rc3  16.66  0.00  0.00   83.33  45.49  71.08  16.67  50.50  288778533  581
+patch     16.65  0.00  0.00   83.35  30.15  47.11  16.65  65.76  85196150   133

ebizzy -t 512 -S 200 (5 iterations) Records per second. (Higher is better)
Kernel     N  Min       Max       Median    Avg       Stddev     %Change
6.6.0-rc3  5  19563921  20049955  19701510  19728733  198295.18
+patch     5  19455992  20176445  19718427  19832017  304094.05  0.523521

From lparstat (when the workload stabilized)
Kernel     %user  %sys  %wait  %idle  physc  %entc   lbusy  app   vcsw       phint
6.6.0-rc3  66.44  0.01  0.00   33.55  94.14  147.09  66.45  1.33  313345175  621
+patch     66.44  0.01  0.00   33.55  94.15  147.11  66.45  1.33  109193889  309

System Configuration
type=Shared mode=Uncapped smt=8 lcpu=40 mem=1067539392 kB cpus=96 ent=40.00
That is, a *40 entitled cores / 40 virtual processors* scenario.

lscpu
Architecture:                       ppc64le
Byte Order:                         Little Endian
CPU(s):                             320
On-line CPU(s) list:                0-319
Model name:                         POWER10 (architected), altivec supported
Model:                              2.0 (pvr 0080 0200)
Thread(s) per core:                 8
Core(s) per socket:                 10
Socket(s):                          4
Hypervisor vendor:                  pHyp
Virtualization type:                para
L1d cache:                          2.5 MiB (80 instances)
L1i cache:                          3.8 MiB (80 instances)
NUMA node(s):                       4
NUMA node0 CPU(s):                  0-7,32-39,64-71,96-103,128-135,160-167,192-199,224-231,256-263,288-295
NUMA node1 CPU(s):                  8-15,40-47,72-79,104-111,136-143,168-175,200-207,232-239,264-271,296-303
NUMA node4 CPU(s):                  16-23,48-55,80-87,112-119,144-151,176-183,208-215,240-247,272-279,304-311
NUMA node5 CPU(s):                  24-31,56-63,88-95,120-127,152-159,184-191,216-223,248-255,280-287,312-319

ebizzy -t 32 -S 200 (5 iterations) Records per second. (Higher is better)
Kernel     N   Min      Max      Median   Avg        Stddev     %Change
6.6.0-rc3  5   3535518  3864532  3745967  3704233.2  130216.76
+patch     5   3608385  3708026  3649379  3651596.6  37862.163  -1.42099

From lparstat (when the workload stabilized)
Kernel     %user  %sys  %wait  %idle  physc  %entc  lbusy  app    vcsw     phint
6.6.0-rc3  10.00  0.01  0.00   89.99  22.98  57.45  10.01  41.01  1135139  262
+patch     10.00  0.00  0.00   90.00  16.95  42.37  10.00  47.05  925561   19

ebizzy -t 64 -S 200 (5 iterations) Records per second. (Higher is better)
Kernel     N   Min      Max      Median   Avg        Stddev     %Change
6.6.0-rc3  5   4434984  4957281  4548786  4591298.2  211770.2
+patch     5   4461115  4835167  4544716  4607795.8  151474.85  0.359323

From lparstat (when the workload stabilized)
Kernel     %user  %sys  %wait  %idle  physc  %entc  lbusy  app    vcsw     phint
6.6.0-rc3  20.01  0.00  0.00   79.99  38.22  95.55  20.01  25.77  1287553  265
+patch     19.99  0.00  0.00   80.01  25.55  63.88  19.99  38.44  1077341  20

ebizzy -t 256 -S 200 (5 iterations) Records per second. (Higher is better)
Kernel     N   Min      Max      Median   Avg        Stddev     %Change
6.6.0-rc3  5   8850648  8982659  8951911  8936869.2  52278.031
+patch     5   8751038  9060510  8981409  8942268.4  117070.6   0.0604149

From lparstat (when the workload stabilized)
Kernel     %user  %sys  %wait  %idle  physc  %entc   lbusy  app    vcsw     phint
6.6.0-rc3  80.02  0.01  0.01   19.96  40.00  100.00  80.03  24.00  1597665  276
+patch     80.02  0.01  0.01   19.96  40.00  100.00  80.03  23.99  1383921  63

Observation:
We see comparable or better ebizzy throughput with much lower core
utilization (almost half) in the low-utilization scenarios, while
retaining throughput in the mid and high utilization scenarios.

Note: These numbers are for the Uncapped + no-noise case. In the
Capped and/or noisy case, the numbers are expected to improve further
due to contention on the cores.

Note: The kernels measured above also include (sched/fair: Enable group_asym_packing in find_idlest_group):
https://lore.kernel.org/all/20231018155036.2314342-1-srikar@linux.vnet.ibm.com/

Changelog
v4 -> v5:
1. Updated commit msg of patch 1 based on comments from Aneesh

v3 (https://lore.kernel.org/all/20231026101843.56784-1-srikar@linux.vnet.ibm.com) -> v4:
1. SPLPAR-specific Asym packing only for MC and DIE domains.
2. Changes due to rebase (DIE became PKG)

v2 (https://lore.kernel.org/all/20231018163751.2423181-1-srikar@linux.vnet.ibm.com) -> v3:
1. Handle comments from Peter Zijlstra / Michael Ellerman
2. Use __ro_after_init attribute instead of read_mostly
3. Use cpu_has_feature static_key instead of a new one.
4. Build topology dynamically patch added to this patchset.

v1 (https://lore.kernel.org/all/20230830105244.62477-1-srikar@linux.vnet.ibm.com) -> v2:
1. Last two patches were added in this version
2. This version uses static keys

Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Rohan McLure <rmclure@linux.ibm.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
CC: Aneesh <aneesh.kumar@kernel.org>

Srikar Dronamraju (5):
  powerpc/smp: Enable Asym packing for cores on shared processor
  powerpc/smp: Disable MC domain for shared processor
  powerpc/smp: Add __ro_after_init attribute
  powerpc/smp: Avoid asym packing within thread_group of a core
  powerpc/smp: Dynamically build Powerpc topology

 arch/powerpc/kernel/smp.c | 124 +++++++++++++++++++++-----------------
 1 file changed, 70 insertions(+), 54 deletions(-)


base-commit: 3c0fd4382b584d4bdc9564526841df32e9b6d817
-- 
2.35.3



* [PATCH v5 1/5] powerpc/smp: Enable Asym packing for cores on shared processor
  2023-12-14 18:07 [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations Srikar Dronamraju
@ 2023-12-14 18:07 ` Srikar Dronamraju
  2023-12-15  5:01   ` Aneesh Kumar K.V
  2023-12-14 18:07 ` [PATCH v5 2/5] powerpc/smp: Disable MC domain for " Srikar Dronamraju
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 8+ messages in thread
From: Srikar Dronamraju @ 2023-12-14 18:07 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Christophe Leroy
  Cc: Mark Rutland, Valentin Schneider, Vincent Guittot,
	Srikar Dronamraju, Paul E. McKenney, Peter Zijlstra,
	ndesaulniers, linux-kernel, Aneesh, Rohan McLure, linuxppc-dev,
	Josh Poimboeuf

With shared processor LPARs, the underlying hypervisor can have more
virtual cores to schedule than actual physical cores.

Starting with POWER9, a big core (aka SMT8 core) has 2 nearly
independent thread groups. On shared processor LPARs, packing threads
onto fewer cores improves overall system performance and utilization.
PowerVM schedules at the big core level, hence packing to fewer big
cores helps.

Since each thread group is nearly independent, running threads on both
thread groups of an SMT8 core should have minimal adverse impact in
non-over-provisioned scenarios. The changes in this patchset do not
affect the over-provisioned scenario: if there are more threads than
SMT domains, asym_packing does not kick in.

For example: say there are two 8-core shared LPARs that are actually
sharing an 8-core physical pool, each running 8 threads. Consolidating
the 8 threads onto 4 cores on each LPAR helps them perform better,
because each LPAR then gets 100% of the time on its cores to run
applications, with no switching required by the hypervisor.

To achieve this, enable the SD_ASYM_PACKING flag at the CACHE, MC and
PKG levels when the system is running in shared processor mode and has
big cores.

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
Changelog:
v4 -> v5:
- commit msg update
v3 -> v4:
- Dont use splpar_asym_pack with SMT
- Conflict resolution due to rebase
	(DIE changed to PKG)
v2 -> v3:
- Handle comments from Michael Ellerman.
- Rework using existing cpu_has_features static key
v1->v2: Using Jump label instead of a variable.

 arch/powerpc/kernel/smp.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index ab691c89d787..3fc8ad9646a4 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1003,6 +1003,13 @@ static int powerpc_smt_flags(void)
 }
 #endif
 
+/*
+ * On shared processor LPARs scheduled on a big core (which has two or more
+ * independent thread groups per core), prefer lower numbered CPUs, so
+ * that workload consolidates to lesser number of cores.
+ */
+static __ro_after_init DEFINE_STATIC_KEY_FALSE(splpar_asym_pack);
+
 /*
  * P9 has a slightly odd architecture where pairs of cores share an L2 cache.
  * This topology makes it *much* cheaper to migrate tasks between adjacent cores
@@ -1011,9 +1018,20 @@ static int powerpc_smt_flags(void)
  */
 static int powerpc_shared_cache_flags(void)
 {
+	if (static_branch_unlikely(&splpar_asym_pack))
+		return SD_SHARE_PKG_RESOURCES | SD_ASYM_PACKING;
+
 	return SD_SHARE_PKG_RESOURCES;
 }
 
+static int powerpc_shared_proc_flags(void)
+{
+	if (static_branch_unlikely(&splpar_asym_pack))
+		return SD_ASYM_PACKING;
+
+	return 0;
+}
+
 /*
  * We can't just pass cpu_l2_cache_mask() directly because
  * returns a non-const pointer and the compiler barfs on that.
@@ -1050,8 +1068,8 @@ static struct sched_domain_topology_level powerpc_topology[] = {
 	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
 #endif
 	{ shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
-	{ cpu_mc_mask, SD_INIT_NAME(MC) },
-	{ cpu_cpu_mask, SD_INIT_NAME(PKG) },
+	{ cpu_mc_mask, powerpc_shared_proc_flags, SD_INIT_NAME(MC) },
+	{ cpu_cpu_mask, powerpc_shared_proc_flags, SD_INIT_NAME(PKG) },
 	{ NULL, },
 };
 
@@ -1686,6 +1704,9 @@ static void __init fixup_topology(void)
 {
 	int i;
 
+	if (is_shared_processor() && has_big_cores)
+		static_branch_enable(&splpar_asym_pack);
+
 #ifdef CONFIG_SCHED_SMT
 	if (has_big_cores) {
 		pr_info("Big cores detected but using small core scheduling\n");
-- 
2.35.3



* [PATCH v5 2/5] powerpc/smp: Disable MC domain for shared processor
  2023-12-14 18:07 [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations Srikar Dronamraju
  2023-12-14 18:07 ` [PATCH v5 1/5] powerpc/smp: Enable Asym packing for cores on shared processor Srikar Dronamraju
@ 2023-12-14 18:07 ` Srikar Dronamraju
  2023-12-14 18:07 ` [PATCH v5 3/5] powerpc/smp: Add __ro_after_init attribute Srikar Dronamraju
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Srikar Dronamraju @ 2023-12-14 18:07 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Christophe Leroy
  Cc: Mark Rutland, Valentin Schneider, Vincent Guittot,
	Srikar Dronamraju, Paul E. McKenney, Peter Zijlstra,
	ndesaulniers, linux-kernel, Aneesh, Rohan McLure, linuxppc-dev,
	Josh Poimboeuf

Like L2-cache info, the coregroup information used to determine MC
sched domains is only present on dedicated LPARs, i.e. PowerVM doesn't
export coregroup information for shared processor LPARs. Hence disable
creating MC domains on shared LPAR systems.

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/smp.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 3fc8ad9646a4..2cebc53e97f9 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1055,6 +1055,10 @@ static struct cpumask *cpu_coregroup_mask(int cpu)
 
 static bool has_coregroup_support(void)
 {
+	/* Coregroup identification not available on shared systems */
+	if (is_shared_processor())
+		return 0;
+
 	return coregroup_enabled;
 }
 
-- 
2.35.3



* [PATCH v5 3/5] powerpc/smp: Add __ro_after_init attribute
  2023-12-14 18:07 [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations Srikar Dronamraju
  2023-12-14 18:07 ` [PATCH v5 1/5] powerpc/smp: Enable Asym packing for cores on shared processor Srikar Dronamraju
  2023-12-14 18:07 ` [PATCH v5 2/5] powerpc/smp: Disable MC domain for " Srikar Dronamraju
@ 2023-12-14 18:07 ` Srikar Dronamraju
  2023-12-14 18:07 ` [PATCH v5 4/5] powerpc/smp: Avoid asym packing within thread_group of a core Srikar Dronamraju
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Srikar Dronamraju @ 2023-12-14 18:07 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Christophe Leroy
  Cc: Mark Rutland, Valentin Schneider, Vincent Guittot,
	Srikar Dronamraju, Paul E. McKenney, Peter Zijlstra,
	ndesaulniers, linux-kernel, Aneesh, Rohan McLure, linuxppc-dev,
	Josh Poimboeuf

Some variables are only updated at boot time, so add the
__ro_after_init attribute to them. This places them in a section that
is mapped read-only once init completes, so a stray post-boot write
faults instead of silently corrupting them.

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
Changelog:
v2 -> v3:
Use __ro_after_init instead of __read_mostly
Suggested by: Peter Zijlstra and Michael Ellerman

 arch/powerpc/kernel/smp.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 2cebc53e97f9..aea149627209 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -77,10 +77,10 @@ static DEFINE_PER_CPU(int, cpu_state) = { 0 };
 #endif
 
 struct task_struct *secondary_current;
-bool has_big_cores;
-bool coregroup_enabled;
-bool thread_group_shares_l2;
-bool thread_group_shares_l3;
+bool has_big_cores __ro_after_init;
+bool coregroup_enabled __ro_after_init;
+bool thread_group_shares_l2 __ro_after_init;
+bool thread_group_shares_l3 __ro_after_init;
 
 DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map);
 DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_map);
@@ -987,7 +987,7 @@ static int __init init_thread_group_cache_map(int cpu, int cache_property)
 	return 0;
 }
 
-static bool shared_caches;
+static bool shared_caches __ro_after_init;
 
 #ifdef CONFIG_SCHED_SMT
 /* cpumask of CPUs with asymmetric SMT dependency */
-- 
2.35.3



* [PATCH v5 4/5] powerpc/smp: Avoid asym packing within thread_group of a core
  2023-12-14 18:07 [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations Srikar Dronamraju
                   ` (2 preceding siblings ...)
  2023-12-14 18:07 ` [PATCH v5 3/5] powerpc/smp: Add __ro_after_init attribute Srikar Dronamraju
@ 2023-12-14 18:07 ` Srikar Dronamraju
  2023-12-14 18:07 ` [PATCH v5 5/5] powerpc/smp: Dynamically build Powerpc topology Srikar Dronamraju
  2023-12-21 10:38 ` [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations Michael Ellerman
  5 siblings, 0 replies; 8+ messages in thread
From: Srikar Dronamraju @ 2023-12-14 18:07 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Christophe Leroy
  Cc: Mark Rutland, Valentin Schneider, Vincent Guittot,
	Srikar Dronamraju, Paul E. McKenney, Peter Zijlstra,
	ndesaulniers, linux-kernel, Aneesh, Rohan McLure, linuxppc-dev,
	Josh Poimboeuf

The PowerVM hypervisor schedules at core granularity, but each core
can have more than one thread group. For better utilization on a
shared processor, it is preferable for the scheduler to pack to the
lowest numbered core. However, there is no benefit in moving a task
between the two thread groups of the same core, as the worked example
below shows.
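
As a worked example (assuming threads_per_core = 8, i.e. an SMT8 big
core), the priority function added below maps:

	CPU  0..7   ->  priority  0   (core 0, both thread groups)
	CPU  8..15  ->  priority -1   (core 1, both thread groups)

All eight threads of a core share one priority, so asym packing pulls
load toward core 0 but never migrates a task merely to move between
the thread groups within a core.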

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/smp.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index aea149627209..9d8bb9a084bd 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1763,6 +1763,19 @@ void __init smp_cpus_done(unsigned int max_cpus)
 	set_sched_topology(powerpc_topology);
 }
 
+/*
+ * For asym packing, by default lower numbered CPU has higher priority.
+ * On shared processors, pack to lower numbered core. However avoid moving
+ * between thread_groups within the same core.
+ */
+int arch_asym_cpu_priority(int cpu)
+{
+	if (static_branch_unlikely(&splpar_asym_pack))
+		return -cpu / threads_per_core;
+
+	return -cpu;
+}
+
 #ifdef CONFIG_HOTPLUG_CPU
 int __cpu_disable(void)
 {
-- 
2.35.3



* [PATCH v5 5/5] powerpc/smp: Dynamically build Powerpc topology
  2023-12-14 18:07 [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations Srikar Dronamraju
                   ` (3 preceding siblings ...)
  2023-12-14 18:07 ` [PATCH v5 4/5] powerpc/smp: Avoid asym packing within thread_group of a core Srikar Dronamraju
@ 2023-12-14 18:07 ` Srikar Dronamraju
  2023-12-21 10:38 ` [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations Michael Ellerman
  5 siblings, 0 replies; 8+ messages in thread
From: Srikar Dronamraju @ 2023-12-14 18:07 UTC (permalink / raw)
  To: Michael Ellerman, Nicholas Piggin, Christophe Leroy
  Cc: Mark Rutland, Valentin Schneider, Vincent Guittot,
	Srikar Dronamraju, Paul E. McKenney, Peter Zijlstra,
	ndesaulniers, linux-kernel, Aneesh, Rohan McLure, linuxppc-dev,
	Josh Poimboeuf

Currently there are four powerpc-specific sched topology levels, all
statically defined. However, not all of these levels are used by all
powerpc systems.

To avoid unnecessary degenerations by the scheduler, masks and flags
are currently compared and consolidated at boot. If the sched topology
is instead built dynamically, the code is simpler and there is a
greater chance of avoiding degenerations.

Note:
Even x86 builds its sched topology dynamically, and the proposed
changes are very similar to the way x86 builds its topology. The
illustration below shows which levels get built.
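
As an illustration (derived from the checks in this patch together
with patch 2 of this series; the exact set depends on the system), the
levels built at boot would be:

	dedicated LPAR, big cores, shared L2, coregroups:
		SMT (smallcore) -> CACHE -> MC -> PKG

	shared processor LPAR, big cores, shared L2:
		SMT (smallcore) -> CACHE -> PKG
		(MC is skipped since coregroup info is absent)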

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
Changelog:
v3 -> v4:
- Conflict resolution due to rebase
	(DIE changed to PKG)

 arch/powerpc/kernel/smp.c | 78 ++++++++++++++-------------------------
 1 file changed, 28 insertions(+), 50 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 9d8bb9a084bd..693334c20d07 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -93,15 +93,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map);
 EXPORT_PER_CPU_SYMBOL(cpu_core_map);
 EXPORT_SYMBOL_GPL(has_big_cores);
 
-enum {
-#ifdef CONFIG_SCHED_SMT
-	smt_idx,
-#endif
-	cache_idx,
-	mc_idx,
-	die_idx,
-};
-
 #define MAX_THREAD_LIST_SIZE	8
 #define THREAD_GROUP_SHARE_L1   1
 #define THREAD_GROUP_SHARE_L2_L3 2
@@ -1067,16 +1058,6 @@ static const struct cpumask *cpu_mc_mask(int cpu)
 	return cpu_coregroup_mask(cpu);
 }
 
-static struct sched_domain_topology_level powerpc_topology[] = {
-#ifdef CONFIG_SCHED_SMT
-	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
-#endif
-	{ shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
-	{ cpu_mc_mask, powerpc_shared_proc_flags, SD_INIT_NAME(MC) },
-	{ cpu_cpu_mask, powerpc_shared_proc_flags, SD_INIT_NAME(PKG) },
-	{ NULL, },
-};
-
 static int __init init_big_cores(void)
 {
 	int cpu;
@@ -1704,9 +1685,11 @@ void start_secondary(void *unused)
 	BUG();
 }
 
-static void __init fixup_topology(void)
+static struct sched_domain_topology_level powerpc_topology[6];
+
+static void __init build_sched_topology(void)
 {
-	int i;
+	int i = 0;
 
 	if (is_shared_processor() && has_big_cores)
 		static_branch_enable(&splpar_asym_pack);
@@ -1714,36 +1697,33 @@ static void __init fixup_topology(void)
 #ifdef CONFIG_SCHED_SMT
 	if (has_big_cores) {
 		pr_info("Big cores detected but using small core scheduling\n");
-		powerpc_topology[smt_idx].mask = smallcore_smt_mask;
+		powerpc_topology[i++] = (struct sched_domain_topology_level){
+			smallcore_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT)
+		};
+	} else {
+		powerpc_topology[i++] = (struct sched_domain_topology_level){
+			cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT)
+		};
 	}
 #endif
+	if (shared_caches) {
+		powerpc_topology[i++] = (struct sched_domain_topology_level){
+			shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE)
+		};
+	}
+	if (has_coregroup_support()) {
+		powerpc_topology[i++] = (struct sched_domain_topology_level){
+			cpu_mc_mask, powerpc_shared_proc_flags, SD_INIT_NAME(MC)
+		};
+	}
+	powerpc_topology[i++] = (struct sched_domain_topology_level){
+		cpu_cpu_mask, powerpc_shared_proc_flags, SD_INIT_NAME(PKG)
+	};
 
-	if (!has_coregroup_support())
-		powerpc_topology[mc_idx].mask = powerpc_topology[cache_idx].mask;
-
-	/*
-	 * Try to consolidate topology levels here instead of
-	 * allowing scheduler to degenerate.
-	 * - Dont consolidate if masks are different.
-	 * - Dont consolidate if sd_flags exists and are different.
-	 */
-	for (i = 1; i <= die_idx; i++) {
-		if (powerpc_topology[i].mask != powerpc_topology[i - 1].mask)
-			continue;
-
-		if (powerpc_topology[i].sd_flags && powerpc_topology[i - 1].sd_flags &&
-				powerpc_topology[i].sd_flags != powerpc_topology[i - 1].sd_flags)
-			continue;
-
-		if (!powerpc_topology[i - 1].sd_flags)
-			powerpc_topology[i - 1].sd_flags = powerpc_topology[i].sd_flags;
+	/* There must be one trailing NULL entry left.  */
+	BUG_ON(i >= ARRAY_SIZE(powerpc_topology) - 1);
 
-		powerpc_topology[i].mask = powerpc_topology[i + 1].mask;
-		powerpc_topology[i].sd_flags = powerpc_topology[i + 1].sd_flags;
-#ifdef CONFIG_SCHED_DEBUG
-		powerpc_topology[i].name = powerpc_topology[i + 1].name;
-#endif
-	}
+	set_sched_topology(powerpc_topology);
 }
 
 void __init smp_cpus_done(unsigned int max_cpus)
@@ -1758,9 +1738,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
 		smp_ops->bringup_done();
 
 	dump_numa_cpu_topology();
-
-	fixup_topology();
-	set_sched_topology(powerpc_topology);
+	build_sched_topology();
 }
 
 /*
-- 
2.35.3



* Re: [PATCH v5 1/5] powerpc/smp: Enable Asym packing for cores on shared processor
  2023-12-14 18:07 ` [PATCH v5 1/5] powerpc/smp: Enable Asym packing for cores on shared processor Srikar Dronamraju
@ 2023-12-15  5:01   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 8+ messages in thread
From: Aneesh Kumar K.V @ 2023-12-15  5:01 UTC (permalink / raw)
  To: Srikar Dronamraju, Michael Ellerman, Nicholas Piggin, Christophe Leroy
  Cc: Mark Rutland, Valentin Schneider, Vincent Guittot,
	Srikar Dronamraju, Paul E. McKenney, Peter Zijlstra,
	ndesaulniers, linux-kernel, Rohan McLure, linuxppc-dev,
	Josh Poimboeuf

Srikar Dronamraju <srikar@linux.vnet.ibm.com> writes:

> With shared processor LPARs, the underlying hypervisor can have more
> virtual cores to schedule than actual physical cores.
>
> Starting with POWER9, a big core (aka SMT8 core) has 2 nearly
> independent thread groups. On shared processor LPARs, packing threads
> onto fewer cores improves overall system performance and utilization.
> PowerVM schedules at the big core level, hence packing to fewer big
> cores helps.
>

....

> +/*
> + * On shared processor LPARs scheduled on a big core (which has two or more
> + * independent thread groups per core), prefer lower numbered CPUs, so
> + * that workload consolidates to lesser number of cores.
> + */
> +static __ro_after_init DEFINE_STATIC_KEY_FALSE(splpar_asym_pack);


DEFINE_STATIC_KEY_FALSE_RO ?

> +
>  /*
>   * P9 has a slightly odd architecture where pairs of cores share an L2 cache.
>   * This topology makes it *much* cheaper to migrate tasks between adjacent cores
> @@ -1011,9 +1018,20 @@ static int powerpc_smt_flags(void)
>   */

-aneesh


* Re: [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations
  2023-12-14 18:07 [PATCH v5 0/5] powerpc/smp: Topology and shared processor optimizations Srikar Dronamraju
                   ` (4 preceding siblings ...)
  2023-12-14 18:07 ` [PATCH v5 5/5] powerpc/smp: Dynamically build Powerpc topology Srikar Dronamraju
@ 2023-12-21 10:38 ` Michael Ellerman
  5 siblings, 0 replies; 8+ messages in thread
From: Michael Ellerman @ 2023-12-21 10:38 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Mark Rutland, Valentin Schneider, Vincent Guittot,
	Paul E. McKenney, Peter Zijlstra, linux-kernel, Nicholas Piggin,
	Aneesh, Rohan McLure, linuxppc-dev, Josh Poimboeuf

On Thu, 14 Dec 2023 23:37:10 +0530, Srikar Dronamraju wrote:
> PowerVM systems configured in shared processor mode have some unique
> challenges. Some device-tree properties are missing on a shared
> processor, so some sched domains may not make sense for shared
> processor systems.
> 
> Most shared processor systems are over-provisioned. The underlying
> PowerVM hypervisor schedules at Big Core granularity. The most recent
> Power processors support two almost independent thread groups per Big
> Core. Under light load, overall system performance improves if we
> pack work onto fewer Big Cores.
> 
> [...]

Applied to powerpc/next.

[1/5] powerpc/smp: Enable Asym packing for cores on shared processor
      https://git.kernel.org/powerpc/c/aa80c6343fcf53cbc29f84ba9f89ca87d4e41350
[2/5] powerpc/smp: Disable MC domain for shared processor
      https://git.kernel.org/powerpc/c/0e1c1986e0e65746daa05405d7747ce882f83cf1
[3/5] powerpc/smp: Add __ro_after_init attribute
      https://git.kernel.org/powerpc/c/fd535a858ebeb1f478b1d065b6c057f52aad483a
[4/5] powerpc/smp: Avoid asym packing within thread_group of a core
      https://git.kernel.org/powerpc/c/0e93f1c780e8fd315f1262467b7d35eb6f766d2f
[5/5] powerpc/smp: Dynamically build Powerpc topology
      https://git.kernel.org/powerpc/c/c46975715f5a7b941aa09bc0539a8dbe297f308f

cheers
