linux-kernel.vger.kernel.org archive mirror
* [PATCH v9 0/2] sched/fair: Scan cluster before scanning LLC in wake-up path
@ 2023-07-19  9:28 Yicong Yang
  2023-07-19  9:28 ` [PATCH v9 1/2] sched: Add cpus_share_resources API Yicong Yang
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Yicong Yang @ 2023-07-19  9:28 UTC (permalink / raw)
  To: peterz, mingo, juri.lelli, vincent.guittot, dietmar.eggemann,
	tim.c.chen, yu.c.chen, gautham.shenoy, mgorman, vschneid,
	linux-kernel, linux-arm-kernel
  Cc: rostedt, bsegall, bristot, prime.zeng, yangyicong,
	jonathan.cameron, ego, srikar, linuxarm, 21cnbao, kprateek.nayak,
	wuyun.abel

From: Yicong Yang <yangyicong@hisilicon.com>

This is the follow-up work to support the cluster scheduler. Previously
we added a cluster level in the scheduler for both ARM64[1] and
x86[2] to support load balancing between clusters, bringing more memory
bandwidth and reducing cache contention. This patchset, on the other
hand, takes care of the wake-up path by giving CPUs within the same cluster
a try before scanning the whole LLC, to benefit tasks communicating
with each other.

[1] 778c558f49a2 ("sched: Add cluster scheduler level in core and related Kconfig for ARM64")
[2] 66558b730f25 ("sched: Add cluster scheduler level for x86")

Since we're using sd->groups->flags to determine a cluster, the core should
ensure the flags are set correctly on domain generation. This is done by [*].

[*] https://lore.kernel.org/all/20230713013133.2314153-1-yu.c.chen@intel.com/

Change since v8:
- Peter found cpus_share_lowest_cache() weird, so fall back to cpus_share_resources()
  as suggested in v4
- Use sd->groups->flags to find the cluster when scanning, saving one per-cpu pointer
- Fix sched_cluster_active being enabled incorrectly on domain degeneration
- Use sched_cluster_active to avoid repeated checks on non-cluster machines, per Gautham
Link: https://lore.kernel.org/all/20230530070253.33306-1-yangyicong@huawei.com/

Change since v7:
- Optimize by choosing prev_cpu/recent_used_cpu when possible after failing to
  find an idle CPU in the cluster/LLC scan. Thanks Chen Yu for testing on Jacobsville
Link: https://lore.kernel.org/all/20220915073423.25535-1-yangyicong@huawei.com/

Change for RESEND:
- Collect tag from Chen Yu and rebase on the latest tip/sched/core. Thanks.
Link: https://lore.kernel.org/lkml/20220822073610.27205-1-yangyicong@huawei.com/

Change since v6:
- rebase on 6.0-rc1
Link: https://lore.kernel.org/lkml/20220726074758.46686-1-yangyicong@huawei.com/

Change since v5:
- Improve patch 2 according to Peter's suggestion:
  - use sched_cluster_active to indicate whether cluster is active
  - consider SMT case and use wrap iteration when scanning cluster
- Add Vincent's tag
Thanks.
Link: https://lore.kernel.org/lkml/20220720081150.22167-1-yangyicong@hisilicon.com/

Change since v4:
- rename cpus_share_resources to cpus_share_lowest_cache to be more informative, per Tim
- return -1 when nr==0 in scan_cluster(), per Abel
Thanks!
Link: https://lore.kernel.org/lkml/20220609120622.47724-1-yangyicong@hisilicon.com/

Change since v3:
- fix compile error when !CONFIG_SCHED_CLUSTER, reported by lkp test.
Link: https://lore.kernel.org/lkml/20220608095758.60504-1-yangyicong@hisilicon.com/

Change since v2:
- leverage SIS_PROP to suspend redundant scanning when LLC is overloaded
- remove the ping-pong suppression
- address the comment from Tim, thanks.
Link: https://lore.kernel.org/lkml/20220126080947.4529-1-yangyicong@hisilicon.com/

Change since v1:
- regenerate the performance data based on v5.17-rc1
- rename cpus_share_cluster to cpus_share_resources per Vincent and Gautham, thanks!
Link: https://lore.kernel.org/lkml/20211215041149.73171-1-yangyicong@hisilicon.com/

Barry Song (2):
  sched: Add cpus_share_resources API
  sched/fair: Scan cluster before scanning LLC in wake-up path

 include/linux/sched/sd_flags.h |  7 ++++
 include/linux/sched/topology.h |  8 ++++-
 kernel/sched/core.c            | 12 +++++++
 kernel/sched/fair.c            | 59 +++++++++++++++++++++++++++++++---
 kernel/sched/sched.h           |  2 ++
 kernel/sched/topology.c        | 25 ++++++++++++++
 6 files changed, 107 insertions(+), 6 deletions(-)

-- 
2.24.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v9 1/2] sched: Add cpus_share_resources API
  2023-07-19  9:28 [PATCH v9 0/2] sched/fair: Scan cluster before scanning LLC in wake-up path Yicong Yang
@ 2023-07-19  9:28 ` Yicong Yang
  2023-07-19  9:28 ` [PATCH v9 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path Yicong Yang
  2023-08-15 11:16 ` [PATCH v9 0/2] " Yicong Yang
  2 siblings, 0 replies; 8+ messages in thread
From: Yicong Yang @ 2023-07-19  9:28 UTC (permalink / raw)
  To: peterz, mingo, juri.lelli, vincent.guittot, dietmar.eggemann,
	tim.c.chen, yu.c.chen, gautham.shenoy, mgorman, vschneid,
	linux-kernel, linux-arm-kernel
  Cc: rostedt, bsegall, bristot, prime.zeng, yangyicong,
	jonathan.cameron, ego, srikar, linuxarm, 21cnbao, kprateek.nayak,
	wuyun.abel, Barry Song

From: Barry Song <song.bao.hua@hisilicon.com>

Add cpus_share_resources() API. This is the preparation for the
optimization of select_idle_cpu() on platforms with cluster scheduler
level.

On a machine with clusters, cpus_share_resources() will test whether
two CPUs are within the same cluster. On a non-cluster machine it
will behave the same as cpus_share_cache(). So we use "resources"
here for cache resources.

Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 include/linux/sched/sd_flags.h |  7 +++++++
 include/linux/sched/topology.h |  8 +++++++-
 kernel/sched/core.c            | 12 ++++++++++++
 kernel/sched/sched.h           |  1 +
 kernel/sched/topology.c        | 13 +++++++++++++
 5 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index fad77b5172e2..a8b28647aafc 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -109,6 +109,13 @@ SD_FLAG(SD_ASYM_CPUCAPACITY_FULL, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
  */
 SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 
+/*
+ * Domain members share CPU cluster (LLC tags or L2 cache)
+ *
+ * NEEDS_GROUPS: Clusters are shared between groups.
+ */
+SD_FLAG(SD_CLUSTER, SDF_NEEDS_GROUPS)
+
 /*
  * Domain members share CPU package resources (i.e. caches)
  *
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 67b573d5bf28..4c14fe127223 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -45,7 +45,7 @@ static inline int cpu_smt_flags(void)
 #ifdef CONFIG_SCHED_CLUSTER
 static inline int cpu_cluster_flags(void)
 {
-	return SD_SHARE_PKG_RESOURCES;
+	return SD_CLUSTER | SD_SHARE_PKG_RESOURCES;
 }
 #endif
 
@@ -179,6 +179,7 @@ cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
 bool cpus_share_cache(int this_cpu, int that_cpu);
+bool cpus_share_resources(int this_cpu, int that_cpu);
 
 typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
 typedef int (*sched_domain_flags_f)(void);
@@ -232,6 +233,11 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu)
 	return true;
 }
 
+static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+{
+	return true;
+}
+
 #endif	/* !CONFIG_SMP */
 
 #if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c52c2eba7c73..4e88643dc48c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3953,6 +3953,18 @@ bool cpus_share_cache(int this_cpu, int that_cpu)
 	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
 }
 
+/*
+ * Whether CPUs share cache resources, which means the LLC on non-cluster
+ * machines and the LLC tag or L2 on machines with clusters.
+ */
+bool cpus_share_resources(int this_cpu, int that_cpu)
+{
+	if (this_cpu == that_cpu)
+		return true;
+
+	return per_cpu(sd_share_id, this_cpu) == per_cpu(sd_share_id, that_cpu);
+}
+
 static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
 {
 	/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e93e006a942b..4ff8cdc5a55a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1847,6 +1847,7 @@ static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DECLARE_PER_CPU(int, sd_llc_size);
 DECLARE_PER_CPU(int, sd_llc_id);
+DECLARE_PER_CPU(int, sd_share_id);
 DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index d3a3b2646ec4..ce1fd8e00346 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -666,6 +666,7 @@ static void destroy_sched_domains(struct sched_domain *sd)
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DEFINE_PER_CPU(int, sd_llc_size);
 DEFINE_PER_CPU(int, sd_llc_id);
+DEFINE_PER_CPU(int, sd_share_id);
 DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
@@ -691,6 +692,17 @@ static void update_top_cache_domain(int cpu)
 	per_cpu(sd_llc_id, cpu) = id;
 	rcu_assign_pointer(per_cpu(sd_llc_shared, cpu), sds);
 
+	sd = lowest_flag_domain(cpu, SD_CLUSTER);
+	if (sd)
+		id = cpumask_first(sched_domain_span(sd));
+
+	/*
+	 * This assignment should be placed after sd_llc_id as we want
+	 * this id to equal the cluster id on cluster machines but the
+	 * LLC id on non-cluster machines.
+	 */
+	per_cpu(sd_share_id, cpu) = id;
+
 	sd = lowest_flag_domain(cpu, SD_NUMA);
 	rcu_assign_pointer(per_cpu(sd_numa, cpu), sd);
 
@@ -1539,6 +1551,7 @@ static struct cpumask		***sched_domains_numa_masks;
  */
 #define TOPOLOGY_SD_FLAGS		\
 	(SD_SHARE_CPUCAPACITY	|	\
+	 SD_CLUSTER		|	\
 	 SD_SHARE_PKG_RESOURCES |	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING)
-- 
2.24.0



* [PATCH v9 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path
  2023-07-19  9:28 [PATCH v9 0/2] sched/fair: Scan cluster before scanning LLC in wake-up path Yicong Yang
  2023-07-19  9:28 ` [PATCH v9 1/2] sched: Add cpus_share_resources API Yicong Yang
@ 2023-07-19  9:28 ` Yicong Yang
  2023-07-21  9:52   ` Chen Yu
  2023-08-15 11:16 ` [PATCH v9 0/2] " Yicong Yang
  2 siblings, 1 reply; 8+ messages in thread
From: Yicong Yang @ 2023-07-19  9:28 UTC (permalink / raw)
  To: peterz, mingo, juri.lelli, vincent.guittot, dietmar.eggemann,
	tim.c.chen, yu.c.chen, gautham.shenoy, mgorman, vschneid,
	linux-kernel, linux-arm-kernel
  Cc: rostedt, bsegall, bristot, prime.zeng, yangyicong,
	jonathan.cameron, ego, srikar, linuxarm, 21cnbao, kprateek.nayak,
	wuyun.abel, Barry Song

From: Barry Song <song.bao.hua@hisilicon.com>

For platforms having clusters like Kunpeng920, CPUs within the same cluster
have lower latency when synchronizing and accessing shared resources like
cache. Thus, this patch tries to find an idle CPU within the cluster of the
target CPU before scanning the whole LLC, to gain lower latency. This
is implemented in 3 steps in select_idle_sibling():
1. When prev_cpu/recent_used_cpu are good wakeup candidates, use them
   if they share a cluster with the target CPU. Otherwise record them
   and do the scanning first.
2. Scan the cluster prior to the LLC of the target CPU for an
   idle CPU to wake up on.
3. If no idle CPU is found after scanning and prev_cpu/recent_used_cpu
   can be used, use them.

Testing has been done on Kunpeng920 by pinning tasks to one NUMA node and
to two NUMA nodes. On Kunpeng920, each NUMA node has 8 clusters and each
cluster has 4 CPUs.

With this patch, we noticed enhancement on tbench and netperf within one
NUMA node or across two NUMA nodes on 6.5-rc1:
tbench results (node 0):
             baseline                    patched
  1:        325.9673        378.9117 (   16.24%)
  4:       1311.9667       1501.5033 (   14.45%)
  8:       2629.4667       2961.9100 (   12.64%)
 16:       5259.1633       5928.0833 (   12.72%)
 32:      10368.6333      10566.8667 (    1.91%)
 64:       7868.7700       8182.0100 (    3.98%)
128:       6528.5733       6801.8000 (    4.19%)
tbench results (node 0-1):
             baseline                    patched
  1:        329.2757        380.8907 (   15.68%)
  4:       1327.7900       1494.5300 (   12.56%)
  8:       2627.2133       2917.1233 (   11.03%)
 16:       5201.3367       5835.9233 (   12.20%)
 32:       8811.8500      11154.2000 (   26.58%)
 64:      15832.4000      19643.7667 (   24.07%)
128:      12605.5667      14639.5667 (   16.14%)
netperf results TCP_RR (node 0):
             baseline                    patched
  1:      77302.8667      92172.2100 (   19.24%)
  4:      78724.9200      91581.3100 (   16.33%)
  8:      79168.1296      91091.7942 (   15.06%)
 16:      81079.4200      90546.5225 (   11.68%)
 32:      82201.5799      78910.4982 (   -4.00%)
 64:      29539.3509      29131.4698 (   -1.38%)
128:      12082.7522      11956.7705 (   -1.04%)
netperf results TCP_RR (node 0-1):
             baseline                    patched
  1:      78340.5233      92101.8733 (   17.57%)
  4:      79644.2483      91326.7517 (   14.67%)
  8:      79557.4313      90737.8096 (   14.05%)
 16:      79215.5304      90568.4542 (   14.33%)
 32:      78999.3983      85460.6044 (    8.18%)
 64:      74198.9494      74325.4361 (    0.17%)
128:      27397.4810      27757.5471 (    1.31%)
netperf results UDP_RR (node 0):
             baseline                    patched
  1:      95721.9367     111546.1367 (   16.53%)
  4:      96384.2250     110036.1408 (   14.16%)
  8:      97460.6546     109968.0883 (   12.83%)
 16:      98876.1687     109387.8065 (   10.63%)
 32:     104364.6417     105241.6767 (    0.84%)
 64:      37502.6246      37451.1204 (   -0.14%)
128:      14496.1780      14610.5538 (    0.79%)
netperf results UDP_RR (node 0-1):
             baseline                    patched
  1:      96176.1633     111397.5333 (   15.83%)
  4:      94758.5575     105681.7833 (   11.53%)
  8:      94340.2200     104138.3613 (   10.39%)
 16:      95208.5285     106714.0396 (   12.08%)
 32:      74745.9028     100713.8764 (   34.74%)
 64:      59351.4977      73536.1434 (   23.90%)
128:      23755.4971      26648.7413 (   12.18%)

Note that neither Kunpeng920 nor x86 Jacobsville supports SMT, so the SMT
branch in the code has not been tested, but it is supposed to work.

Chen Yu also noticed that this improves the performance of tbench and
netperf on a 24-CPU Jacobsville machine, where there are 4 CPUs in one
cluster sharing the L2 cache.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
[https://lore.kernel.org/lkml/Ytfjs+m1kUs0ScSn@worktop.programming.kicks-ass.net]
Tested-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Reviewed-by: Chen Yu <yu.c.chen@intel.com>
---
 kernel/sched/fair.c     | 59 +++++++++++++++++++++++++++++++++++++----
 kernel/sched/sched.h    |  1 +
 kernel/sched/topology.c | 12 +++++++++
 3 files changed, 67 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b3e25be58e2b..d91bf64f81f5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7012,6 +7012,30 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
 		}
 	}
 
+	if (static_branch_unlikely(&sched_cluster_active)) {
+		struct sched_group *sg = sd->groups;
+
+		if (sg->flags & SD_CLUSTER) {
+			for_each_cpu_wrap(cpu, sched_group_span(sg), target + 1) {
+				if (!cpumask_test_cpu(cpu, cpus))
+					continue;
+
+				if (has_idle_core) {
+					i = select_idle_core(p, cpu, cpus, &idle_cpu);
+					if ((unsigned int)i < nr_cpumask_bits)
+						return i;
+				} else {
+					if (--nr <= 0)
+						return -1;
+					idle_cpu = __select_idle_cpu(cpu, p);
+					if ((unsigned int)idle_cpu < nr_cpumask_bits)
+						return idle_cpu;
+				}
+			}
+			cpumask_andnot(cpus, cpus, sched_group_span(sg));
+		}
+	}
+
 	for_each_cpu_wrap(cpu, cpus, target + 1) {
 		if (has_idle_core) {
 			i = select_idle_core(p, cpu, cpus, &idle_cpu);
@@ -7019,7 +7043,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
 				return i;
 
 		} else {
-			if (!--nr)
+			if (--nr <= 0)
 				return -1;
 			idle_cpu = __select_idle_cpu(cpu, p);
 			if ((unsigned int)idle_cpu < nr_cpumask_bits)
@@ -7121,7 +7145,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	bool has_idle_core = false;
 	struct sched_domain *sd;
 	unsigned long task_util, util_min, util_max;
-	int i, recent_used_cpu;
+	int i, recent_used_cpu, prev_aff = -1;
 
 	/*
 	 * On asymmetric system, update task utilization because we will check
@@ -7148,8 +7172,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	 */
 	if (prev != target && cpus_share_cache(prev, target) &&
 	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
-	    asym_fits_cpu(task_util, util_min, util_max, prev))
-		return prev;
+	    asym_fits_cpu(task_util, util_min, util_max, prev)) {
+		if (!static_branch_unlikely(&sched_cluster_active))
+			return prev;
+
+		if (cpus_share_resources(prev, target))
+			return prev;
+		prev_aff = prev;
+	}
 
 	/*
 	 * Allow a per-cpu kthread to stack with the wakee if the
@@ -7176,7 +7206,13 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
 	    cpumask_test_cpu(recent_used_cpu, p->cpus_ptr) &&
 	    asym_fits_cpu(task_util, util_min, util_max, recent_used_cpu)) {
-		return recent_used_cpu;
+		if (!static_branch_unlikely(&sched_cluster_active))
+			return recent_used_cpu;
+
+		if (cpus_share_resources(recent_used_cpu, target))
+			return recent_used_cpu;
+	} else {
+		recent_used_cpu = -1;
 	}
 
 	/*
@@ -7217,6 +7253,19 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
+	/*
+	 * For cluster machines with a lower level of shared cache like L2
+	 * or the LLC tag, we tend to find an idle CPU in the target's
+	 * cluster first. But prev_cpu or recent_used_cpu may also be good
+	 * candidates; use them when no idle CPU is found in select_idle_cpu().
+	 */
+	if ((unsigned int)prev_aff < nr_cpumask_bits &&
+	    (available_idle_cpu(prev_aff) || sched_idle_cpu(prev_aff)))
+		return prev_aff;
+	if ((unsigned int)recent_used_cpu < nr_cpumask_bits &&
+	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)))
+		return recent_used_cpu;
+
 	return target;
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4ff8cdc5a55a..ebf53f98f5f7 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1853,6 +1853,7 @@ DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
 extern struct static_key_false sched_asym_cpucapacity;
+extern struct static_key_false sched_cluster_active;
 
 static __always_inline bool sched_asym_cpucap_active(void)
 {
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index ce1fd8e00346..2b8f419179d3 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -671,7 +671,9 @@ DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
+
 DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
+DEFINE_STATIC_KEY_FALSE(sched_cluster_active);
 
 static void update_top_cache_domain(int cpu)
 {
@@ -2366,6 +2368,7 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 	struct rq *rq = NULL;
 	int i, ret = -ENOMEM;
 	bool has_asym = false;
+	bool has_cluster = false;
 
 	if (WARN_ON(cpumask_empty(cpu_map)))
 		goto error;
@@ -2491,12 +2494,18 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 			WRITE_ONCE(d.rd->max_cpu_capacity, rq->cpu_capacity_orig);
 
 		cpu_attach_domain(sd, d.rd, i);
+
+		if (lowest_flag_domain(i, SD_CLUSTER))
+			has_cluster = true;
 	}
 	rcu_read_unlock();
 
 	if (has_asym)
 		static_branch_inc_cpuslocked(&sched_asym_cpucapacity);
 
+	if (has_cluster)
+		static_branch_inc_cpuslocked(&sched_cluster_active);
+
 	if (rq && sched_debug_verbose) {
 		pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n",
 			cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
@@ -2596,6 +2605,9 @@ static void detach_destroy_domains(const struct cpumask *cpu_map)
 	if (rcu_access_pointer(per_cpu(sd_asym_cpucapacity, cpu)))
 		static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
 
+	if (static_branch_unlikely(&sched_cluster_active))
+		static_branch_dec_cpuslocked(&sched_cluster_active);
+
 	rcu_read_lock();
 	for_each_cpu(i, cpu_map)
 		cpu_attach_domain(NULL, &def_root_domain, i);
-- 
2.24.0



* Re: [PATCH v9 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path
  2023-07-19  9:28 ` [PATCH v9 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path Yicong Yang
@ 2023-07-21  9:52   ` Chen Yu
  2023-08-01 12:06     ` Yicong Yang
  0 siblings, 1 reply; 8+ messages in thread
From: Chen Yu @ 2023-07-21  9:52 UTC (permalink / raw)
  To: Yicong Yang
  Cc: peterz, mingo, juri.lelli, vincent.guittot, dietmar.eggemann,
	tim.c.chen, gautham.shenoy, mgorman, vschneid, linux-kernel,
	linux-arm-kernel, rostedt, bsegall, bristot, prime.zeng,
	yangyicong, jonathan.cameron, ego, srikar, linuxarm, 21cnbao,
	kprateek.nayak, wuyun.abel, Barry Song

Hi Yicong,

Thanks for sending this version!

On 2023-07-19 at 17:28:38 +0800, Yicong Yang wrote:
> From: Barry Song <song.bao.hua@hisilicon.com>
> 
> For platforms having clusters like Kunpeng920, CPUs within the same cluster
> have lower latency when synchronizing and accessing shared resources like
> cache. Thus, this patch tries to find an idle cpu within the cluster of the
> target CPU before scanning the whole LLC to gain lower latency. This
> will be implemented in 3 steps in select_idle_sibling():
> 1. When the prev_cpu/recent_used_cpu are good wakeup candidates, use them
>    if they're sharing cluster with the target CPU. Otherwise record them
>    and do the scanning first.
> 2. Scanning the cluster prior to the LLC of the target CPU for an
>    idle CPU to wakeup.
> 3. If no idle CPU found after scanning and the prev_cpu/recent_used_cpu
>    can be used, use them.
> 
> Testing has been done on Kunpeng920 by pinning tasks to one numa and two
> numa. On Kunpeng920, Each numa has 8 clusters and each cluster has 4 CPUs.
> 
> With this patch, We noticed enhancement on tbench and netperf within one
> numa or cross two numa on 6.5-rc1:
> tbench results (node 0):
>              baseline                    patched
>   1:        325.9673        378.9117 (   16.24%)
>   4:       1311.9667       1501.5033 (   14.45%)
>   8:       2629.4667       2961.9100 (   12.64%)
>  16:       5259.1633       5928.0833 (   12.72%)
>  32:      10368.6333      10566.8667 (    1.91%)
>  64:       7868.7700       8182.0100 (    3.98%)
> 128:       6528.5733       6801.8000 (    4.19%)
> tbench results (node 0-1):
>               vanilla                    patched
>   1:        329.2757        380.8907 (   15.68%)
>   4:       1327.7900       1494.5300 (   12.56%)
>   8:       2627.2133       2917.1233 (   11.03%)
>  16:       5201.3367       5835.9233 (   12.20%)
>  32:       8811.8500      11154.2000 (   26.58%)
>  64:      15832.4000      19643.7667 (   24.07%)
> 128:      12605.5667      14639.5667 (   16.14%)
> netperf results TCP_RR (node 0):
>              baseline                    patched
>   1:      77302.8667      92172.2100 (   19.24%)
>   4:      78724.9200      91581.3100 (   16.33%)
>   8:      79168.1296      91091.7942 (   15.06%)
>  16:      81079.4200      90546.5225 (   11.68%)
>  32:      82201.5799      78910.4982 (   -4.00%)
>  64:      29539.3509      29131.4698 (   -1.38%)
> 128:      12082.7522      11956.7705 (   -1.04%)
> netperf results TCP_RR (node 0-1):
>              baseline                    patched
>   1:      78340.5233      92101.8733 (   17.57%)
>   4:      79644.2483      91326.7517 (   14.67%)
>   8:      79557.4313      90737.8096 (   14.05%)
>  16:      79215.5304      90568.4542 (   14.33%)
>  32:      78999.3983      85460.6044 (    8.18%)
>  64:      74198.9494      74325.4361 (    0.17%)
> 128:      27397.4810      27757.5471 (    1.31%)
> netperf results UDP_RR (node 0):
>              baseline                    patched
>   1:      95721.9367     111546.1367 (   16.53%)
>   4:      96384.2250     110036.1408 (   14.16%)
>   8:      97460.6546     109968.0883 (   12.83%)
>  16:      98876.1687     109387.8065 (   10.63%)
>  32:     104364.6417     105241.6767 (    0.84%)
>  64:      37502.6246      37451.1204 (   -0.14%)
> 128:      14496.1780      14610.5538 (    0.79%)
> netperf results UDP_RR (node 0-1):
>              baseline                    patched
>   1:      96176.1633     111397.5333 (   15.83%)
>   4:      94758.5575     105681.7833 (   11.53%)
>   8:      94340.2200     104138.3613 (   10.39%)
>  16:      95208.5285     106714.0396 (   12.08%)
>  32:      74745.9028     100713.8764 (   34.74%)
>  64:      59351.4977      73536.1434 (   23.90%)
> 128:      23755.4971      26648.7413 (   12.18%)
> 
> Note neither Kunpeng920 nor x86 Jacobsville supports SMT, so the SMT branch
> in the code has not been tested but it supposed to work.
> 
> Chen Yu also noticed this will improve the performance of tbench and
> netperf on a 24 CPUs Jacobsville machine, there are 4 CPUs in one
> cluster sharing L2 Cache.
> 
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> [https://lore.kernel.org/lkml/Ytfjs+m1kUs0ScSn@worktop.programming.kicks-ass.net]
> Tested-by: Yicong Yang <yangyicong@hisilicon.com>
> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
> Reviewed-by: Chen Yu <yu.c.chen@intel.com>
> ---
>  kernel/sched/fair.c     | 59 +++++++++++++++++++++++++++++++++++++----
>  kernel/sched/sched.h    |  1 +
>  kernel/sched/topology.c | 12 +++++++++
>  3 files changed, 67 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b3e25be58e2b..d91bf64f81f5 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7012,6 +7012,30 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>  		}
>  	}
>  
> +	if (static_branch_unlikely(&sched_cluster_active)) {
> +		struct sched_group *sg = sd->groups;
> +
> +		if (sg->flags & SD_CLUSTER) {
> +			for_each_cpu_wrap(cpu, sched_group_span(sg), target + 1) {
> +				if (!cpumask_test_cpu(cpu, cpus))
> +					continue;
> +
> +				if (has_idle_core) {
> +					i = select_idle_core(p, cpu, cpus, &idle_cpu);
> +					if ((unsigned int)i < nr_cpumask_bits)
> +						return i;
> +				} else {
> +					if (--nr <= 0)
> +						return -1;
> +					idle_cpu = __select_idle_cpu(cpu, p);
> +					if ((unsigned int)idle_cpu < nr_cpumask_bits)
> +						return idle_cpu;
> +				}
> +			}
> +			cpumask_andnot(cpus, cpus, sched_group_span(sg));
> +		}
> +	}
> +
>  	for_each_cpu_wrap(cpu, cpus, target + 1) {
>  		if (has_idle_core) {
>  			i = select_idle_core(p, cpu, cpus, &idle_cpu);
> @@ -7019,7 +7043,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>  				return i;
>  
>  		} else {
> -			if (!--nr)
> +			if (--nr <= 0)
>  				return -1;
>  			idle_cpu = __select_idle_cpu(cpu, p);
>  			if ((unsigned int)idle_cpu < nr_cpumask_bits)
> @@ -7121,7 +7145,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>  	bool has_idle_core = false;
>  	struct sched_domain *sd;
>  	unsigned long task_util, util_min, util_max;
> -	int i, recent_used_cpu;
> +	int i, recent_used_cpu, prev_aff = -1;
>  
>  	/*
>  	 * On asymmetric system, update task utilization because we will check
> @@ -7148,8 +7172,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>  	 */
>  	if (prev != target && cpus_share_cache(prev, target) &&
>  	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
> -	    asym_fits_cpu(task_util, util_min, util_max, prev))
> -		return prev;
> +	    asym_fits_cpu(task_util, util_min, util_max, prev)) {
> +		if (!static_branch_unlikely(&sched_cluster_active))
> +			return prev;
> +
> +		if (cpus_share_resources(prev, target))
> +			return prev;

I have one minor question: previously Peter mentioned that he wants to get rid
of the per-cpu sd_share_id, but I'm not sure whether he means not using it in
select_idle_cpu() or removing the variable completely so as not to introduce
extra space.
Hi Peter, could you please give us more hints on this? Thanks.

If we want to get rid of this variable, would this work?

	if ((sd->groups->flags & SD_CLUSTER) &&
	    cpumask_test_cpu(prev, sched_group_span(sd->groups)))
		return prev;

Anyway, I'll queue a test job on top of your version and see what the result is.


thanks,
Chenyu


* Re: [PATCH v9 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path
  2023-07-21  9:52   ` Chen Yu
@ 2023-08-01 12:06     ` Yicong Yang
  2023-08-04 16:29       ` Chen Yu
  0 siblings, 1 reply; 8+ messages in thread
From: Yicong Yang @ 2023-08-01 12:06 UTC (permalink / raw)
  To: Chen Yu
  Cc: yangyicong, peterz, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, tim.c.chen, gautham.shenoy, mgorman, vschneid,
	linux-kernel, linux-arm-kernel, rostedt, bsegall, bristot,
	prime.zeng, jonathan.cameron, ego, srikar, linuxarm, 21cnbao,
	kprateek.nayak, wuyun.abel

Hi Chenyu,

Sorry for the late reply. Something went wrong and this didn't appear
in my mailbox, so I checked it out on LKML.

On 2023/7/21 17:52, Chen Yu wrote:
> Hi Yicong,
> 
> Thanks for sending this version!
> 
> On 2023-07-19 at 17:28:38 +0800, Yicong Yang wrote:
>> From: Barry Song <song.bao.hua@hisilicon.com>
>>
>> For platforms having clusters like Kunpeng920, CPUs within the same cluster
>> have lower latency when synchronizing and accessing shared resources like
>> cache. Thus, this patch tries to find an idle cpu within the cluster of the
>> target CPU before scanning the whole LLC to gain lower latency. This
>> will be implemented in 3 steps in select_idle_sibling():
>> 1. When the prev_cpu/recent_used_cpu are good wakeup candidates, use them
>>    if they're sharing cluster with the target CPU. Otherwise record them
>>    and do the scanning first.
>> 2. Scanning the cluster prior to the LLC of the target CPU for an
>>    idle CPU to wakeup.
>> 3. If no idle CPU found after scanning and the prev_cpu/recent_used_cpu
>>    can be used, use them.
>>
>> Testing has been done on Kunpeng920 by pinning tasks to one NUMA node and two
>> NUMA nodes. On Kunpeng920, each NUMA node has 8 clusters and each cluster has 4 CPUs.
>>
>> With this patch, we noticed improvements on tbench and netperf within one
>> NUMA node or across two NUMA nodes on 6.5-rc1:
>> tbench results (node 0):
>>              baseline                    patched
>>   1:        325.9673        378.9117 (   16.24%)
>>   4:       1311.9667       1501.5033 (   14.45%)
>>   8:       2629.4667       2961.9100 (   12.64%)
>>  16:       5259.1633       5928.0833 (   12.72%)
>>  32:      10368.6333      10566.8667 (    1.91%)
>>  64:       7868.7700       8182.0100 (    3.98%)
>> 128:       6528.5733       6801.8000 (    4.19%)
>> tbench results (node 0-1):
>>               vanilla                    patched
>>   1:        329.2757        380.8907 (   15.68%)
>>   4:       1327.7900       1494.5300 (   12.56%)
>>   8:       2627.2133       2917.1233 (   11.03%)
>>  16:       5201.3367       5835.9233 (   12.20%)
>>  32:       8811.8500      11154.2000 (   26.58%)
>>  64:      15832.4000      19643.7667 (   24.07%)
>> 128:      12605.5667      14639.5667 (   16.14%)
>> netperf results TCP_RR (node 0):
>>              baseline                    patched
>>   1:      77302.8667      92172.2100 (   19.24%)
>>   4:      78724.9200      91581.3100 (   16.33%)
>>   8:      79168.1296      91091.7942 (   15.06%)
>>  16:      81079.4200      90546.5225 (   11.68%)
>>  32:      82201.5799      78910.4982 (   -4.00%)
>>  64:      29539.3509      29131.4698 (   -1.38%)
>> 128:      12082.7522      11956.7705 (   -1.04%)
>> netperf results TCP_RR (node 0-1):
>>              baseline                    patched
>>   1:      78340.5233      92101.8733 (   17.57%)
>>   4:      79644.2483      91326.7517 (   14.67%)
>>   8:      79557.4313      90737.8096 (   14.05%)
>>  16:      79215.5304      90568.4542 (   14.33%)
>>  32:      78999.3983      85460.6044 (    8.18%)
>>  64:      74198.9494      74325.4361 (    0.17%)
>> 128:      27397.4810      27757.5471 (    1.31%)
>> netperf results UDP_RR (node 0):
>>              baseline                    patched
>>   1:      95721.9367     111546.1367 (   16.53%)
>>   4:      96384.2250     110036.1408 (   14.16%)
>>   8:      97460.6546     109968.0883 (   12.83%)
>>  16:      98876.1687     109387.8065 (   10.63%)
>>  32:     104364.6417     105241.6767 (    0.84%)
>>  64:      37502.6246      37451.1204 (   -0.14%)
>> 128:      14496.1780      14610.5538 (    0.79%)
>> netperf results UDP_RR (node 0-1):
>>              baseline                    patched
>>   1:      96176.1633     111397.5333 (   15.83%)
>>   4:      94758.5575     105681.7833 (   11.53%)
>>   8:      94340.2200     104138.3613 (   10.39%)
>>  16:      95208.5285     106714.0396 (   12.08%)
>>  32:      74745.9028     100713.8764 (   34.74%)
>>  64:      59351.4977      73536.1434 (   23.90%)
>> 128:      23755.4971      26648.7413 (   12.18%)
>>
>> Note that neither Kunpeng920 nor x86 Jacobsville supports SMT, so the SMT branch
>> in the code has not been tested, but it is supposed to work.
>>
>> Chen Yu also noticed that this improves the performance of tbench and
>> netperf on a 24-CPU Jacobsville machine, where 4 CPUs in one
>> cluster share the L2 cache.
>>
>> Suggested-by: Peter Zijlstra <peterz@infradead.org>
>> [https://lore.kernel.org/lkml/Ytfjs+m1kUs0ScSn@worktop.programming.kicks-ass.net]
>> Tested-by: Yicong Yang <yangyicong@hisilicon.com>
>> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
>> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
>> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
>> Reviewed-by: Chen Yu <yu.c.chen@intel.com>
>> ---
>>  kernel/sched/fair.c     | 59 +++++++++++++++++++++++++++++++++++++----
>>  kernel/sched/sched.h    |  1 +
>>  kernel/sched/topology.c | 12 +++++++++
>>  3 files changed, 67 insertions(+), 5 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index b3e25be58e2b..d91bf64f81f5 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -7012,6 +7012,30 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>>  		}
>>  	}
>>  
>> +	if (static_branch_unlikely(&sched_cluster_active)) {
>> +		struct sched_group *sg = sd->groups;
>> +
>> +		if (sg->flags & SD_CLUSTER) {
>> +			for_each_cpu_wrap(cpu, sched_group_span(sg), target + 1) {
>> +				if (!cpumask_test_cpu(cpu, cpus))
>> +					continue;
>> +
>> +				if (has_idle_core) {
>> +					i = select_idle_core(p, cpu, cpus, &idle_cpu);
>> +					if ((unsigned int)i < nr_cpumask_bits)
>> +						return i;
>> +				} else {
>> +					if (--nr <= 0)
>> +						return -1;
>> +					idle_cpu = __select_idle_cpu(cpu, p);
>> +					if ((unsigned int)idle_cpu < nr_cpumask_bits)
>> +						return idle_cpu;
>> +				}
>> +			}
>> +			cpumask_andnot(cpus, cpus, sched_group_span(sg));
>> +		}
>> +	}
>> +
>>  	for_each_cpu_wrap(cpu, cpus, target + 1) {
>>  		if (has_idle_core) {
>>  			i = select_idle_core(p, cpu, cpus, &idle_cpu);
>> @@ -7019,7 +7043,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>>  				return i;
>>  
>>  		} else {
>> -			if (!--nr)
>> +			if (--nr <= 0)
>>  				return -1;
>>  			idle_cpu = __select_idle_cpu(cpu, p);
>>  			if ((unsigned int)idle_cpu < nr_cpumask_bits)
>> @@ -7121,7 +7145,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>>  	bool has_idle_core = false;
>>  	struct sched_domain *sd;
>>  	unsigned long task_util, util_min, util_max;
>> -	int i, recent_used_cpu;
>> +	int i, recent_used_cpu, prev_aff = -1;
>>  
>>  	/*
>>  	 * On asymmetric system, update task utilization because we will check
>> @@ -7148,8 +7172,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>>  	 */
>>  	if (prev != target && cpus_share_cache(prev, target) &&
>>  	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
>> -	    asym_fits_cpu(task_util, util_min, util_max, prev))
>> -		return prev;
>> +	    asym_fits_cpu(task_util, util_min, util_max, prev)) {
>> +		if (!static_branch_unlikely(&sched_cluster_active))
>> +			return prev;
>> +
>> +		if (cpus_share_resources(prev, target))
>> +			return prev;
> 
> I have one minor question: previously Peter mentioned that he wants to get rid
> of the percpu sd_share_id, but I'm not sure whether he meant not using it in
> select_idle_cpu() or removing that variable completely to avoid the extra space.
> Hi Peter, could you please give us more hints on this? Thanks.
> 
> If we want to get rid of this variable, would this work?
> 
> 	if ((sd->groups->flags & SD_CLUSTER) &&
> 	    cpumask_test_cpu(prev, sched_group_span(sd->groups)))
> 		return prev;
> 

In the current implementation, no: we haven't dereferenced @sd yet, and we
don't need to if scanning is not needed.

Since we're on the quick path without scanning here, I wonder whether it'd be
a bit more efficient to use a per-cpu id rather than dereference the RCU
pointer and do the bitmap computation.

Thanks.


* Re: [PATCH v9 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path
  2023-08-01 12:06     ` Yicong Yang
@ 2023-08-04 16:29       ` Chen Yu
  2023-08-07 13:15         ` Yicong Yang
  0 siblings, 1 reply; 8+ messages in thread
From: Chen Yu @ 2023-08-04 16:29 UTC (permalink / raw)
  To: Yicong Yang
  Cc: yangyicong, peterz, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, tim.c.chen, gautham.shenoy, mgorman, vschneid,
	linux-kernel, linux-arm-kernel, rostedt, bsegall, bristot,
	prime.zeng, jonathan.cameron, ego, srikar, linuxarm, 21cnbao,
	kprateek.nayak, wuyun.abel

Hi Yicong,

On 2023-08-01 at 20:06:56 +0800, Yicong Yang wrote:
> Hi Chenyu,
> 
> Sorry for the late reply. Something went wrong and this didn't appear
> in my mailbox, so I checked it out on the LKML.
>
 
No worries : )

> On 2023/7/21 17:52, Chen Yu wrote:
> > Hi Yicong,
> > 
> > Thanks for sending this version!
> > 
> > On 2023-07-19 at 17:28:38 +0800, Yicong Yang wrote:
> >> From: Barry Song <song.bao.hua@hisilicon.com>
> >>
> >> [...]
> >> @@ -7148,8 +7172,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> >>  	 */
> >>  	if (prev != target && cpus_share_cache(prev, target) &&
> >>  	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
> >> -	    asym_fits_cpu(task_util, util_min, util_max, prev))
> >> -		return prev;
> >> +	    asym_fits_cpu(task_util, util_min, util_max, prev)) {
> >> +		if (!static_branch_unlikely(&sched_cluster_active))
> >> +			return prev;
> >> +
> >> +		if (cpus_share_resources(prev, target))
> >> +			return prev;
> > 
> > I have one minor question, previously Peter mentioned that he wants to get rid of the
> > percpu sd_share_id, not sure if he means that not using it in select_idle_cpu()
> > or remove that variable completely to not introduce extra space? 
> > Hi Peter, could you please give us more hints on this? thanks.
> > 
> > If we want to get rid of this variable, would this work?
> > 
> > 	if ((sd->groups->flags & SD_CLUSTER) &&
> > 	    cpumask_test_cpu(prev, sched_group_span(sd->groups)))
> > 		return prev;
> > 
> 
> > In the current implementation, no: we haven't dereferenced @sd yet, and we
> > don't need to if scanning is not needed.
> > 
> > Since we're on the quick path without scanning here, I wonder whether it'd be
> > a bit more efficient to use a per-cpu id rather than dereference the RCU
> > pointer and do the bitmap computation.
>

The dereference is a memory barrier and the bitmap test is a single
operation/instruction, which should not have much overhead. Anyway, I've
tested this patch on Jacobsville and the data looks OK to me:


netperf
=======
case            	load    	baseline(std%)	compare%( std%)
TCP_RR          	6-threads	 1.00 (  0.84)	 -0.32 (  0.71)
TCP_RR          	12-threads	 1.00 (  0.35)	 +1.52 (  0.42)
TCP_RR          	18-threads	 1.00 (  0.31)	 +3.89 (  0.38)
TCP_RR          	24-threads	 1.00 (  0.87)	 -0.34 (  0.75)
TCP_RR          	30-threads	 1.00 (  5.84)	 +0.71 (  4.85)
TCP_RR          	36-threads	 1.00 (  4.84)	 +0.24 (  3.30)
TCP_RR          	42-threads	 1.00 (  3.75)	 +0.26 (  3.56)
TCP_RR          	48-threads	 1.00 (  1.51)	 +0.45 (  1.28)
UDP_RR          	6-threads	 1.00 (  0.65)	+10.12 (  0.63)
UDP_RR          	12-threads	 1.00 (  0.20)	 +9.91 (  0.25)
UDP_RR          	18-threads	 1.00 ( 11.13)	+16.77 (  0.49)
UDP_RR          	24-threads	 1.00 ( 12.38)	 +2.52 (  0.98)
UDP_RR          	30-threads	 1.00 (  5.63)	 -0.34 (  4.38)
UDP_RR          	36-threads	 1.00 ( 19.12)	 -0.89 (  3.30)
UDP_RR          	42-threads	 1.00 (  2.96)	 -1.41 (  3.17)
UDP_RR          	48-threads	 1.00 ( 14.08)	 -0.77 ( 10.77)

Good improvement in several cases. No regression is detected.

tbench
======
case            	load    	baseline(std%)	compare%( std%)
loopback        	6-threads	 1.00 (  0.41)	 +1.63 (  0.17)
loopback        	12-threads	 1.00 (  0.18)	 +4.39 (  0.12)
loopback        	18-threads	 1.00 (  0.43)	+10.42 (  0.18)
loopback        	24-threads	 1.00 (  0.38)	 +1.24 (  0.38)
loopback        	30-threads	 1.00 (  0.24)	 +0.60 (  0.14)
loopback        	36-threads	 1.00 (  0.17)	 +0.63 (  0.17)
loopback        	42-threads	 1.00 (  0.26)	 +0.76 (  0.08)
loopback        	48-threads	 1.00 (  0.23)	 +0.91 (  0.10)

Good improvement in 18-threads case. No regression is detected.

hackbench
=========
case            	load    	baseline(std%)	compare%( std%)
process-pipe    	1-groups	 1.00 (  0.52)	 +9.26 (  0.57)
process-pipe    	2-groups	 1.00 (  1.55)	 +6.92 (  0.56)
process-pipe    	4-groups	 1.00 (  1.36)	 +4.80 (  3.78)
process-sockets 	1-groups	 1.00 (  2.16)	 -6.35 (  1.10)
process-sockets 	2-groups	 1.00 (  2.34)	 -6.35 (  5.52)
process-sockets 	4-groups	 1.00 (  0.35)	 -5.64 (  1.19)
threads-pipe    	1-groups	 1.00 (  0.82)	 +8.00 (  0.00)
threads-pipe    	2-groups	 1.00 (  0.47)	 +6.91 (  0.50)
threads-pipe    	4-groups	 1.00 (  0.45)	 +8.92 (  2.27)
threads-sockets 	1-groups	 1.00 (  1.02)	 -4.13 (  2.30)
threads-sockets 	2-groups	 1.00 (  0.34)	 -1.86 (  2.39)
threads-sockets 	4-groups	 1.00 (  1.51)	 -3.99 (  1.59)

Mixed results for hackbench: there is improvement in pipe mode, but a slight
regression in sockets mode. I think this is within an acceptable range.

schbench
========
case            	load    	baseline(std%)	compare%( std%)
normal          	1-mthreads	 1.00 (  0.00)	 +0.00 (  0.00)
normal          	2-mthreads	 1.00 (  0.00)	 +0.00 (  0.00)
normal          	4-mthreads	 1.00 (  3.82)	 +0.00 (  3.82)

There is no impact to schbench at all, and the results are quite stable.

For the whole series:

Tested-by: Chen Yu <yu.c.chen@intel.com>

thanks,
Chenyu 


* Re: [PATCH v9 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path
  2023-08-04 16:29       ` Chen Yu
@ 2023-08-07 13:15         ` Yicong Yang
  0 siblings, 0 replies; 8+ messages in thread
From: Yicong Yang @ 2023-08-07 13:15 UTC (permalink / raw)
  To: Chen Yu
  Cc: yangyicong, peterz, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, tim.c.chen, gautham.shenoy, mgorman, vschneid,
	linux-kernel, linux-arm-kernel, rostedt, bsegall, bristot,
	prime.zeng, jonathan.cameron, ego, srikar, linuxarm, 21cnbao,
	kprateek.nayak, wuyun.abel

On 2023/8/5 0:29, Chen Yu wrote:
> Hi Yicong,
> 
> On 2023-08-01 at 20:06:56 +0800, Yicong Yang wrote:
>> Hi Chenyu,
>>
>> Sorry for the late reply. Something went wrong and this didn't appear
>> in my mailbox, so I checked it out on the LKML.
>>
>  
> No worries : )
> 
>> On 2023/7/21 17:52, Chen Yu wrote:
>>> Hi Yicong,
>>>
>>> Thanks for sending this version!
>>>
>>> On 2023-07-19 at 17:28:38 +0800, Yicong Yang wrote:
>>>> From: Barry Song <song.bao.hua@hisilicon.com>
>>>>
>>>> [...]
>>>> @@ -7148,8 +7172,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>>>>  	 */
>>>>  	if (prev != target && cpus_share_cache(prev, target) &&
>>>>  	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
>>>> -	    asym_fits_cpu(task_util, util_min, util_max, prev))
>>>> -		return prev;
>>>> +	    asym_fits_cpu(task_util, util_min, util_max, prev)) {
>>>> +		if (!static_branch_unlikely(&sched_cluster_active))
>>>> +			return prev;
>>>> +
>>>> +		if (cpus_share_resources(prev, target))
>>>> +			return prev;
>>>
>>> I have one minor question, previously Peter mentioned that he wants to get rid of the
>>> percpu sd_share_id, not sure if he means that not using it in select_idle_cpu()
>>> or remove that variable completely to not introduce extra space? 
>>> Hi Peter, could you please give us more hints on this? thanks.
>>>
>>> If we want to get rid of this variable, would this work?
>>>
>>> 	if ((sd->groups->flags & SD_CLUSTER) &&
>>> 	    cpumask_test_cpu(prev, sched_group_span(sd->groups)))
>>> 		return prev;
>>>
>>
>> In the current implementation, no: we haven't dereferenced @sd yet, and we
>> don't need to if scanning is not needed.
>>
>> Since we're on the quick path without scanning here, I wonder whether it'd be
>> a bit more efficient to use a per-cpu id rather than dereference the RCU
>> pointer and do the bitmap computation.
>>
> 
> The dereference is a memory barrier and the bitmap test is a single
> operation/instruction, which should not have much overhead. Anyway, I've
> tested this patch on Jacobsville and the data looks OK to me:
> 
> 
> netperf
> =======
> case            	load    	baseline(std%)	compare%( std%)
> TCP_RR          	6-threads	 1.00 (  0.84)	 -0.32 (  0.71)
> TCP_RR          	12-threads	 1.00 (  0.35)	 +1.52 (  0.42)
> TCP_RR          	18-threads	 1.00 (  0.31)	 +3.89 (  0.38)
> TCP_RR          	24-threads	 1.00 (  0.87)	 -0.34 (  0.75)
> TCP_RR          	30-threads	 1.00 (  5.84)	 +0.71 (  4.85)
> TCP_RR          	36-threads	 1.00 (  4.84)	 +0.24 (  3.30)
> TCP_RR          	42-threads	 1.00 (  3.75)	 +0.26 (  3.56)
> TCP_RR          	48-threads	 1.00 (  1.51)	 +0.45 (  1.28)
> UDP_RR          	6-threads	 1.00 (  0.65)	+10.12 (  0.63)
> UDP_RR          	12-threads	 1.00 (  0.20)	 +9.91 (  0.25)
> UDP_RR          	18-threads	 1.00 ( 11.13)	+16.77 (  0.49)
> UDP_RR          	24-threads	 1.00 ( 12.38)	 +2.52 (  0.98)
> UDP_RR          	30-threads	 1.00 (  5.63)	 -0.34 (  4.38)
> UDP_RR          	36-threads	 1.00 ( 19.12)	 -0.89 (  3.30)
> UDP_RR          	42-threads	 1.00 (  2.96)	 -1.41 (  3.17)
> UDP_RR          	48-threads	 1.00 ( 14.08)	 -0.77 ( 10.77)
> 
> Good improvement in several cases. No regression is detected.
> 
> tbench
> ======
> case            	load    	baseline(std%)	compare%( std%)
> loopback        	6-threads	 1.00 (  0.41)	 +1.63 (  0.17)
> loopback        	12-threads	 1.00 (  0.18)	 +4.39 (  0.12)
> loopback        	18-threads	 1.00 (  0.43)	+10.42 (  0.18)
> loopback        	24-threads	 1.00 (  0.38)	 +1.24 (  0.38)
> loopback        	30-threads	 1.00 (  0.24)	 +0.60 (  0.14)
> loopback        	36-threads	 1.00 (  0.17)	 +0.63 (  0.17)
> loopback        	42-threads	 1.00 (  0.26)	 +0.76 (  0.08)
> loopback        	48-threads	 1.00 (  0.23)	 +0.91 (  0.10)
> 
> Good improvement in 18-threads case. No regression is detected.
> 
> hackbench
> =========
> case            	load    	baseline(std%)	compare%( std%)
> process-pipe    	1-groups	 1.00 (  0.52)	 +9.26 (  0.57)
> process-pipe    	2-groups	 1.00 (  1.55)	 +6.92 (  0.56)
> process-pipe    	4-groups	 1.00 (  1.36)	 +4.80 (  3.78)
> process-sockets 	1-groups	 1.00 (  2.16)	 -6.35 (  1.10)
> process-sockets 	2-groups	 1.00 (  2.34)	 -6.35 (  5.52)
> process-sockets 	4-groups	 1.00 (  0.35)	 -5.64 (  1.19)
> threads-pipe    	1-groups	 1.00 (  0.82)	 +8.00 (  0.00)
> threads-pipe    	2-groups	 1.00 (  0.47)	 +6.91 (  0.50)
> threads-pipe    	4-groups	 1.00 (  0.45)	 +8.92 (  2.27)
> threads-sockets 	1-groups	 1.00 (  1.02)	 -4.13 (  2.30)
> threads-sockets 	2-groups	 1.00 (  0.34)	 -1.86 (  2.39)
> threads-sockets 	4-groups	 1.00 (  1.51)	 -3.99 (  1.59)
> 
> Mixed results for hackbench: there is improvement in pipe mode, but a slight
> regression in sockets mode. I think this is within an acceptable range.
> 
> schbench
> ========
> case            	load    	baseline(std%)	compare%( std%)
> normal          	1-mthreads	 1.00 (  0.00)	 +0.00 (  0.00)
> normal          	2-mthreads	 1.00 (  0.00)	 +0.00 (  0.00)
> normal          	4-mthreads	 1.00 (  3.82)	 +0.00 (  3.82)
> 
> There is no impact to schbench at all, and the results are quite stable.
> 
> For the whole series:
> 
> Tested-by: Chen Yu <yu.c.chen@intel.com>
> 

Thanks for testing.

Yicong.



* Re: [PATCH v9 0/2] sched/fair: Scan cluster before scanning LLC in wake-up path
  2023-07-19  9:28 [PATCH v9 0/2] sched/fair: Scan cluster before scanning LLC in wake-up path Yicong Yang
  2023-07-19  9:28 ` [PATCH v9 1/2] sched: Add cpus_share_resources API Yicong Yang
  2023-07-19  9:28 ` [PATCH v9 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path Yicong Yang
@ 2023-08-15 11:16 ` Yicong Yang
  2 siblings, 0 replies; 8+ messages in thread
From: Yicong Yang @ 2023-08-15 11:16 UTC (permalink / raw)
  To: peterz
  Cc: yangyicong, rostedt, bsegall, bristot, prime.zeng,
	jonathan.cameron, ego, srikar, linuxarm, 21cnbao, kprateek.nayak,
	wuyun.abel, juri.lelli, vincent.guittot, dietmar.eggemann,
	yu.c.chen, tim.c.chen, gautham.shenoy, mgorman, vschneid,
	linux-kernel, linux-arm-kernel, mingo

Hi Peter,

A gentle ping for this. Any further comment?

Thanks.

On 2023/7/19 17:28, Yicong Yang wrote:
> From: Yicong Yang <yangyicong@hisilicon.com>
> 
> This is the follow-up work to support the cluster scheduler. Previously
> we added a cluster level to the scheduler for both ARM64 [1] and
> x86 [2] to support load balancing between clusters, bringing more memory
> bandwidth and decreasing cache contention. This patchset, on the other
> hand, takes care of the wake-up path by giving CPUs within the same cluster
> a try before scanning the whole LLC, to benefit tasks communicating
> with each other.
> 
> [1] 778c558f49a2 ("sched: Add cluster scheduler level in core and related Kconfig for ARM64")
> [2] 66558b730f25 ("sched: Add cluster scheduler level for x86")
> 
> Since we're using sd->groups->flags to determine a cluster, the core should
> ensure the flags are set correctly on domain generation. This is done by [*].
> 
> [*] https://lore.kernel.org/all/20230713013133.2314153-1-yu.c.chen@intel.com/
> 
> Change since v8:
> - Peter finds cpus_share_lowest_cache() weird, so fall back to cpus_share_resources()
>   as suggested in v4
> - Use sd->groups->flags to find the cluster when scanning, saving one per-cpu pointer
> - Fix sched_cluster_active being enabled incorrectly on domain degeneration
> - Use sched_cluster_active to avoid repeated checks on non-cluster machines, per Gautham
> Link: https://lore.kernel.org/all/20230530070253.33306-1-yangyicong@huawei.com/
> 
> Change since v7:
> - Optimize by choosing prev_cpu/recent_used_cpu when possible after failing to
>   find an idle CPU in the cluster/LLC scan. Thanks Chen Yu for testing on Jacobsville
> Link: https://lore.kernel.org/all/20220915073423.25535-1-yangyicong@huawei.com/
> 
> Change for RESEND:
> - Collect tag from Chen Yu and rebase on the latest tip/sched/core. Thanks.
> Link: https://lore.kernel.org/lkml/20220822073610.27205-1-yangyicong@huawei.com/
> 
> Change since v6:
> - rebase on 6.0-rc1
> Link: https://lore.kernel.org/lkml/20220726074758.46686-1-yangyicong@huawei.com/
> 
> Change since v5:
> - Improve patch 2 according to Peter's suggestion:
>   - use sched_cluster_active to indicate whether cluster is active
>   - consider SMT case and use wrap iteration when scanning cluster
> - Add Vincent's tag
> Thanks.
> Link: https://lore.kernel.org/lkml/20220720081150.22167-1-yangyicong@hisilicon.com/
> 
> Change since v4:
> - rename cpus_share_resources to cpus_share_lowest_cache to be more informative, per Tim
> - return -1 when nr==0 in scan_cluster(), per Abel
> Thanks!
> Link: https://lore.kernel.org/lkml/20220609120622.47724-1-yangyicong@hisilicon.com/
> 
> Change since v3:
> - fix compile error when !CONFIG_SCHED_CLUSTER, reported by lkp test.
> Link: https://lore.kernel.org/lkml/20220608095758.60504-1-yangyicong@hisilicon.com/
> 
> Change since v2:
> - leverage SIS_PROP to suspend redundant scanning when LLC is overloaded
> - remove the ping-pong suppression
> - address the comment from Tim, thanks.
> Link: https://lore.kernel.org/lkml/20220126080947.4529-1-yangyicong@hisilicon.com/
> 
> Change since v1:
> - regain the performance data based on v5.17-rc1
> - rename cpus_share_cluster to cpus_share_resources per Vincent and Gautham, thanks!
> Link: https://lore.kernel.org/lkml/20211215041149.73171-1-yangyicong@hisilicon.com/
> 
> Barry Song (2):
>   sched: Add cpus_share_resources API
>   sched/fair: Scan cluster before scanning LLC in wake-up path
> 
>  include/linux/sched/sd_flags.h |  7 ++++
>  include/linux/sched/topology.h |  8 ++++-
>  kernel/sched/core.c            | 12 +++++++
>  kernel/sched/fair.c            | 59 +++++++++++++++++++++++++++++++---
>  kernel/sched/sched.h           |  2 ++
>  kernel/sched/topology.c        | 25 ++++++++++++++
>  6 files changed, 107 insertions(+), 6 deletions(-)
> 


end of thread, other threads:[~2023-08-15 11:17 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-07-19  9:28 [PATCH v9 0/2] sched/fair: Scan cluster before scanning LLC in wake-up path Yicong Yang
2023-07-19  9:28 ` [PATCH v9 1/2] sched: Add cpus_share_resources API Yicong Yang
2023-07-19  9:28 ` [PATCH v9 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path Yicong Yang
2023-07-21  9:52   ` Chen Yu
2023-08-01 12:06     ` Yicong Yang
2023-08-04 16:29       ` Chen Yu
2023-08-07 13:15         ` Yicong Yang
2023-08-15 11:16 ` [PATCH v9 0/2] " Yicong Yang
