linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH v5 0/4] scheduler: expose the topology of clusters and add cluster scheduler
@ 2021-03-19  4:16 Barry Song
  2021-03-19  4:16 ` [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die Barry Song
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Barry Song @ 2021-03-19  4:16 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

The ARM64 server chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and
each cluster has 4 CPUs. All clusters share the L3 cache data, while each
cluster has its own local L3 tag. In addition, the CPUs within a cluster share
some internal system bus. This means the cache is much more affine inside one
cluster than across clusters.

    +-----------------------------------+                          +---------+
    |  +------+    +------+            +---------------------------+         |
    |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
    |  +------+    +------+             |    |           |         |         |
    |                                   +----+    L3     |         |         |
    |  +------+    +------+   cluster   |    |    tag    |         |         |
    |  | CPU2 |    | CPU3 |             |    |           |         |         |
    |  +------+    +------+             |    +-----------+         |         |
    |                                   |                          |         |
    +-----------------------------------+                          |         |
    +-----------------------------------+                          |         |
    |  +------+    +------+             +--------------------------+         |
    |  |      |    |      |             |    +-----------+         |         |
    |  +------+    +------+             |    |           |         |         |
    |                                   |    |    L3     |         |         |
    |  +------+    +------+             +----+    tag    |         |         |
    |  |      |    |      |             |    |           |         |         |
    |  +------+    +------+             |    +-----------+         |         |
    |                                   |                          |         |
    +-----------------------------------+                          |   L3    |
                                                                   |   data  |
    +-----------------------------------+                          |         |
    |  +------+    +------+             |    +-----------+         |         |
    |  |      |    |      |             |    |           |         |         |
    |  +------+    +------+             +----+    L3     |         |         |
    |                                   |    |    tag    |         |         |
    |  +------+    +------+             |    |           |         |         |
    |  |      |    |      |            ++    +-----------+         |         |
    |  +------+    +------+            |---------------------------+         |
    +-----------------------------------|                          |         |
    +-----------------------------------|                          |         |
    |  +------+    +------+            +---------------------------+         |
    |  |      |    |      |             |    +-----------+         |         |
    |  +------+    +------+             |    |           |         |         |
    |                                   +----+    L3     |         |         |
    |  +------+    +------+             |    |    tag    |         |         |
    |  |      |    |      |             |    |           |         |         |
    |  +------+    +------+             |    +-----------+         |         |
    |                                   |                          |         |
    +-----------------------------------+                          |         |
    +-----------------------------------+                          |         |
    |  +------+    +------+             +--------------------------+         |
    |  |      |    |      |             |   +-----------+          |         |
    |  +------+    +------+             |   |           |          |         |
    |                                   |   |    L3     |          |         |
    |  +------+    +------+             +---+    tag    |          |         |
    |  |      |    |      |             |   |           |          |         |
    |  +------+    +------+             |   +-----------+          |         |
    |                                   |                          |         |
    +-----------------------------------+                          |         |
    +-----------------------------------+                         ++         |
    |  +------+    +------+             +--------------------------+         |
    |  |      |    |      |             |  +-----------+           |         |
    |  +------+    +------+             |  |           |           |         |
    |                                   |  |    L3     |           |         |
    |  +------+    +------+             +--+    tag    |           |         |
    |  |      |    |      |             |  |           |           |         |
    |  +------+    +------+             |  +-----------+           |         |
    |                                   |                          +---------+
    +-----------------------------------+


There is a similar need for clustering in x86.  Some x86 cores can share L2
caches in a way that is similar to the clusters in Kunpeng 920 (e.g. on
Jacobsville there are 6 clusters of 4 Atom cores, each cluster sharing a
separate L2, and 24 cores sharing the L3).

Having a sched_domain for clusters will bring two kinds of improvement:
1. spreading unrelated tasks among clusters, which decreases the contention for
resources and improves the throughput.
Without a cluster sched_domain, unrelated tasks might be placed randomly:
+-------------------+            +-----------------+
| +----+   +----+   |            |                 |
| |task|   |task|   |            |                 |
| |1   |   |2   |   |            |                 |
| +----+   +----+   |            |                 |
|                   |            |                 |
|       cluster1    |            |     cluster2    |
+-------------------+            +-----------------+

but with the cluster sched_domain, they are likely to be spread by load balancing:
+-------------------+            +-----------------+
| +----+            |            | +----+          |
| |task|            |            | |task|          |
| |1   |            |            | |2   |          |
| +----+            |            | +----+          |
|                   |            |                 |
|       cluster1    |            |     cluster2    |
+-------------------+            +-----------------+

2. gathering related tasks within a cluster, which improves the cache affinity
of tasks talking with each other.
Without a cluster sched_domain, related tasks might be placed randomly. Suppose
tasks 1-8 have the following wakeup relationships:
Task1 wakes up task4
Task2 wakes up task5
Task3 wakes up task6
Task4 wakes up task7
With select_idle_cpu() tuned to scan the local cluster first, those tasks have
a chance of being gathered like this:
+---------------------------+    +----------------------+
| +----+        +-----+     |    | +----+      +-----+  |
| |task|        |task |     |    | |task|      |task |  |
| |1   |        | 4   |     |    | |2   |      |5    |  |
| +----+        +-----+     |    | +----+      +-----+  |
|                           |    |                      |
|       cluster1            |    |     cluster2         |
|                           |    |                      |
|                           |    |                      |
| +-----+       +------+    |    | +-----+     +------+ |
| |task |       | task |    |    | |task |     |task  | |
| |3    |       |  6   |    |    | |4    |     |8     | |
| +-----+       +------+    |    | +-----+     +------+ |
+---------------------------+    +----------------------+
Otherwise, the result might be:
+---------------------------+    +----------------------+
| +----+        +-----+     |    | +----+      +-----+  |
| |task|        |task |     |    | |task|      |task |  |
| |1   |        | 2   |     |    | |5   |      |6    |  |
| +----+        +-----+     |    | +----+      +-----+  |
|                           |    |                      |
|       cluster1            |    |     cluster2         |
|                           |    |                      |
|                           |    |                      |
| +-----+       +------+    |    | +-----+     +------+ |
| |task |       | task |    |    | |task |     |task  | |
| |3    |       |  4   |    |    | |7    |     |8     | |
| +-----+       +------+    |    | +-----+     +------+ |
+---------------------------+    +----------------------+
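
As a rough illustration (not part of the series), the cluster layout that
patch 1/4 exposes to userspace can be inspected with a few lines of python,
assuming the new cluster_cpus_list sysfs attribute is present; it simply
groups CPUs that report the same sibling list:

#!/usr/bin/python
# group CPUs by the cluster_cpus_list attribute added in patch 1/4
import glob
import os
from collections import defaultdict

clusters = defaultdict(list)
for topo in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology")):
	cpu = int(os.path.basename(os.path.dirname(topo))[3:])
	try:
		with open(os.path.join(topo, "cluster_cpus_list")) as f:
			siblings = f.read().strip()
	except IOError:
		continue	# kernel without this series
	clusters[siblings].append(cpu)

for siblings, cpus in sorted(clusters.items()):
	print("cluster %s: cpus %s" % (siblings, sorted(cpus)))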

-v5:
  * split "add scheduler level for clusters" into two patches to evaluate the
    impact of spreading and gathering separately;
  * added a tracepoint for select_idle_cpu() for debugging purposes, and a bcc
    script in the commit log;
  * set cluster_id = -1 in reset_cpu_topology()
  * rebased to tip/sched/core

-v4:
  * rebased to tip/sched/core with the latest unified code of select_idle_cpu
  * added Tim's patch for x86 Jacobsville
  * also added benchmark data of spreading unrelated tasks
  * avoided the iteration of sched_domain by moving to a static_key (addressing
    Vincent's comment)
  * used acpi_cpu_id for acpi_find_processor_node() (addressing Masa's comment)

Barry Song (2):
  scheduler: add scheduler level for clusters
  scheduler: scan idle cpu in cluster before scanning the whole llc

Jonathan Cameron (1):
  topology: Represent clusters of CPUs within a die

Tim Chen (1):
  scheduler: Add cluster scheduler level for x86

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
 arch/arm64/Kconfig                        |  7 ++++
 arch/arm64/kernel/topology.c              |  2 +
 arch/x86/Kconfig                          |  8 ++++
 arch/x86/include/asm/smp.h                |  7 ++++
 arch/x86/include/asm/topology.h           |  1 +
 arch/x86/kernel/cpu/cacheinfo.c           |  1 +
 arch/x86/kernel/cpu/common.c              |  3 ++
 arch/x86/kernel/smpboot.c                 | 43 ++++++++++++++++++++-
 drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 15 ++++++++
 drivers/base/topology.c                   | 10 +++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/sched/cluster.h             | 19 ++++++++++
 include/linux/sched/topology.h            |  7 ++++
 include/linux/topology.h                  | 13 +++++++
 include/trace/events/sched.h              | 22 +++++++++++
 kernel/sched/core.c                       | 20 ++++++++++
 kernel/sched/fair.c                       | 36 +++++++++++++++++-
 kernel/sched/sched.h                      |  1 +
 kernel/sched/topology.c                   |  5 +++
 22 files changed, 313 insertions(+), 6 deletions(-)
 create mode 100644 include/linux/sched/cluster.h

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die
  2021-03-19  4:16 [RFC PATCH v5 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
@ 2021-03-19  4:16 ` Barry Song
  2021-03-19  6:35   ` Greg KH
  2021-03-19  4:16 ` [RFC PATCH v5 2/4] scheduler: add scheduler level for clusters Barry Song
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Barry Song @ 2021-03-19  4:16 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Jonathan Cameron, Barry Song

From: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Node Structure representing
a higher level of topology.

For example, Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share the L3 cache data, but each cluster
has its own local L3 tag. In addition, the CPUs within a cluster share some
internal system bus.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |
|                                   |   |    L3     |          |         |
|  +------+    +------+             +---+    tag    |          |         |
|  |      |    |      |             |   |           |          |         |
|  +------+    +------+             |   +-----------+          |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                         ++         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |  +-----------+           |         |
|  +------+    +------+             |  |           |           |         |
|                                   |  |    L3     |           |         |
|  +------+    +------+             +--+    tag    |           |         |
|  |      |    |      |             |  |           |           |         |
|  +------+    +------+             |  +-----------+           |         |
|                                   |                          +---------+
+-----------------------------------+

That means the cost of transferring ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decisions on thread
migration.

This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
A PoC of hwloc support is at [2].

Note this patch only handles the ACPI case.

Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).

Currently the ID provided is the offset of the Processor
Hierarchy Node Structure within the PPTT.  Whilst this is unique
it is not terribly elegant, so alternative suggestions are welcome.

Note that arm64 / ACPI does not provide any means of identifying
a die level in the topology, but that may be unrelated to the cluster
level.

[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
    structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
 arch/arm64/kernel/topology.c              |  2 +
 drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 15 ++++++++
 drivers/base/topology.c                   | 10 +++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/topology.h                  |  6 +++
 8 files changed, 128 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
 	identifier (rather than the kernel's).  The actual value is
 	architecture and platform dependent.
 
+cluster_id:
+
+	the Cluster ID of cpuX.  Typically it is the hardware platform's
+	identifier (rather than the kernel's).  The actual value is
+	architecture and platform dependent.
+
 book_id:
 
 	the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
 	human-readable list of CPUs sharing the same physical_package_id.
 	(deprecated name: "core_siblings_list")
 
+cluster_cpus:
+
+	internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+	human-readable list of CPUs within the same cluster.
+
 die_cpus:
 
 	internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
 
 	#define topology_physical_package_id(cpu)
 	#define topology_die_id(cpu)
+	#define topology_cluster_id(cpu)
 	#define topology_core_id(cpu)
 	#define topology_book_id(cpu)
 	#define topology_drawer_id(cpu)
 	#define topology_sibling_cpumask(cpu)
 	#define topology_core_cpumask(cpu)
+	#define topology_cluster_cpumask(cpu)
 	#define topology_die_cpumask(cpu)
 	#define topology_book_cpumask(cpu)
 	#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
 
 1) topology_physical_package_id: -1
 2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
 
 For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
 default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index e08a412..d72eb8d 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id  = -1;
 			cpu_topology[cpu].core_id    = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
 
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..11f8b02 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,69 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 }
 
 /**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	u32 acpi_cpu_id;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+
+	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
+/**
  * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
  * @cpu: Kernel logical CPU number
  *
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..ca3b8c1 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
 
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
 	cpumask_clear(&cpu_topo->llc_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
 
+	cpumask_clear(&cpu_topo->cluster_sibling);
+	cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
 	cpumask_clear(&cpu_topo->core_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
 	cpumask_clear(&cpu_topo->thread_sibling);
@@ -556,6 +569,7 @@ void __init reset_cpu_topology(void)
 
 		cpu_topo->thread_id = -1;
 		cpu_topo->core_id = -1;
+		cpu_topo->cluster_id = -1;
 		cpu_topo->package_id = -1;
 		cpu_topo->llc_id = -1;
 
@@ -571,6 +585,7 @@ void remove_cpu_topology(unsigned int cpu)
 		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
 	for_each_cpu(sibling, topology_llc_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
 
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..7157ac0 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@
 define_id_show_func(die_id);
 static DEVICE_ATTR_RO(die_id);
 
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
 define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);
 
@@ -61,6 +64,10 @@
 static DEVICE_ATTR_RO(core_siblings);
 static DEVICE_ATTR_RO(core_siblings_list);
 
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
 define_siblings_show_func(die_cpus, die_cpumask);
 static DEVICE_ATTR_RO(die_cpus);
 static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@
 static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
+	&dev_attr_cluster_id.attr,
 	&dev_attr_core_id.attr,
 	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@
 	&dev_attr_core_cpus_list.attr,
 	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
+	&dev_attr_cluster_cpus.attr,
+	&dev_attr_cluster_cpus_list.attr,
 	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
 	&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 9f43241..138b779 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1307,6 +1307,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
 #ifdef CONFIG_ACPI_PPTT
 int acpi_pptt_cpu_is_thread(unsigned int cpu);
 int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
 int find_acpi_cpu_topology_package(unsigned int cpu);
 int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
 int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1319,6 +1320,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
 {
 	return -EINVAL;
 }
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	return -EINVAL;
+}
 static inline int find_acpi_cpu_topology_package(unsigned int cpu)
 {
 	return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 0f6cd6b..987c7ea 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
 struct cpu_topology {
 	int thread_id;
 	int core_id;
+	int cluster_id;
 	int package_id;
 	int llc_id;
 	cpumask_t thread_sibling;
 	cpumask_t core_sibling;
+	cpumask_t cluster_sibling;
 	cpumask_t llc_sibling;
 };
 
@@ -60,13 +62,16 @@ struct cpu_topology {
 extern struct cpu_topology cpu_topology[NR_CPUS];
 
 #define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu)	(cpu_topology[cpu].cluster_id)
 #define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
 #define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
 #define topology_sibling_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)
 #define topology_llc_cpumask(cpu)	(&cpu_topology[cpu].llc_sibling)
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
 const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
 void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7634cd7..80d27d7 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -186,6 +186,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_id
 #define topology_die_id(cpu)			((void)(cpu), -1)
 #endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu)		((void)(cpu), -1)
+#endif
 #ifndef topology_core_id
 #define topology_core_id(cpu)			((void)(cpu), 0)
 #endif
@@ -195,6 +198,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_cpumask
 #define topology_core_cpumask(cpu)		cpumask_of(cpu)
 #endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu)		cpumask_of(cpu)
+#endif
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)		cpumask_of(cpu)
 #endif
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [RFC PATCH v5 2/4] scheduler: add scheduler level for clusters
  2021-03-19  4:16 [RFC PATCH v5 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
  2021-03-19  4:16 ` [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die Barry Song
@ 2021-03-19  4:16 ` Barry Song
  2021-03-19  4:16 ` [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc Barry Song
  2021-03-19  4:16 ` [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86 Barry Song
  3 siblings, 0 replies; 17+ messages in thread
From: Barry Song @ 2021-03-19  4:16 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

The ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share the L3 cache data, but each cluster
has its own local L3 tag. In addition, the CPUs within a cluster share some
internal system bus. This means the cache coherence overhead inside one
cluster is much less than the overhead across clusters.

This patch adds a sched_domain for clusters. On Kunpeng 920, without
this patch, domain0 of cpu0 would be MC, spanning cpu0~cpu23; with this
patch, MC becomes domain1 and a new domain0 "CLS" covers cpu0-cpu3.
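
As a quick sanity check (not part of the patch), the resulting domain names
can be read back when CONFIG_SCHED_DEBUG is enabled; a small sketch which
tries the procfs layout and, failing that, the newer debugfs location:

#!/usr/bin/python
# print the sched_domain hierarchy of cpu0 (requires CONFIG_SCHED_DEBUG);
# depending on the kernel version the domains live under procfs or debugfs
import glob
import os

for base in ("/proc/sys/kernel/sched_domain/cpu0",
		"/sys/kernel/debug/sched/domains/cpu0"):
	domains = sorted(glob.glob(os.path.join(base, "domain*")))
	if not domains:
		continue
	for d in domains:
		with open(os.path.join(d, "name")) as f:
			print("%s: %s" % (os.path.basename(d), f.read().strip()))
	break

With this patch applied on Kunpeng 920, domain0 is expected to print CLS and
domain1 MC.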

This will help spread unrelated tasks among clusters, thus decreasing
contention and improving throughput. For example, the stream benchmark can
improve by 20%+ when the parallelism is 6 and by around 5% when the
parallelism is 12:

(1) -P <parallelism> 6
$ numactl -N 0 /usr/lib/lmbench/bin/stream -P 6 -M 1024M -N 5

w/o patch:
STREAM copy latency: 2.46 nanoseconds
STREAM copy bandwidth: 39096.28 MB/sec
STREAM scale latency: 2.46 nanoseconds
STREAM scale bandwidth: 38970.26 MB/sec
STREAM add latency: 4.45 nanoseconds
STREAM add bandwidth: 32332.04 MB/sec
STREAM triad latency: 4.07 nanoseconds
STREAM triad bandwidth: 35387.69 MB/sec

w/ patch:
STREAM copy latency: 2.02 nanoseconds
STREAM copy bandwidth: 47604.47 MB/sec   +21.7%
STREAM scale latency: 2.04 nanoseconds
STREAM scale bandwidth: 47066.84 MB/sec  +20.8%
STREAM add latency: 3.35 nanoseconds
STREAM add bandwidth: 42942.15 MB/sec    +32.8%
STREAM triad latency: 3.16 nanoseconds
STREAM triad bandwidth: 45619.18 MB/sec  +28.9%

On the other hand, the stream results can vary significantly across different
runs without the patch, e.g.:
a.
STREAM copy latency: 2.16 nanoseconds
STREAM copy bandwidth: 44448.45 MB/sec
STREAM scale latency: 2.17 nanoseconds
STREAM scale bandwidth: 44320.77 MB/sec
STREAM add latency: 3.77 nanoseconds
STREAM add bandwidth: 38230.54 MB/sec
STREAM triad latency: 3.88 nanoseconds
STREAM triad bandwidth: 37072.10 MB/sec

b.
STREAM copy latency: 2.16 nanoseconds
STREAM copy bandwidth: 44403.22 MB/sec
STREAM scale latency: 2.39 nanoseconds
STREAM scale bandwidth: 40173.69 MB/sec
STREAM add latency: 3.77 nanoseconds
STREAM add bandwidth: 38232.56 MB/sec
STREAM triad latency: 3.38 nanoseconds
STREAM triad bandwidth: 42592.04 MB/sec

This is obviously because the 6 threads are placed randomly on 6 cores: sometimes
they are packed within clusters, sometimes they are spread widely.

(2) -P <parallelism> 12
$ numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5

w/o patch:
STREAM copy latency: 3.37 nanoseconds
STREAM copy bandwidth: 57008.80 MB/sec
STREAM scale latency: 3.38 nanoseconds
STREAM scale bandwidth: 56848.47 MB/sec
STREAM add latency: 5.50 nanoseconds
STREAM add bandwidth: 52398.62 MB/sec
STREAM triad latency: 5.09 nanoseconds
STREAM triad bandwidth: 56591.60 MB/sec

w/ patch:
STREAM copy latency: 3.24 nanoseconds
STREAM copy bandwidth: 59338.60 MB/sec  +4.1%
STREAM scale latency: 3.25 nanoseconds
STREAM scale bandwidth: 58993.23 MB/sec +3.7%
STREAM add latency: 5.19 nanoseconds
STREAM add bandwidth: 55517.45 MB/sec   +5.9%
STREAM triad latency: 4.86 nanoseconds
STREAM triad bandwidth: 59245.34 MB/sec +4.7%

To evaluate the performance impact on related tasks talking with each
other, we run the hackbench command below with the -g parameter varying from
2 to 14; for each g, we run the command 10 times and take the
average time:
$ numactl -N 0 hackbench -p -T -l 20000 -g $1

hackbench reports the time needed to complete a certain number of message
transmissions between a certain number of tasks, for example:
$ numactl -N 0 hackbench -p -T -l 20000 -g 10
Running in threaded mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 20000 messages of 100 bytes

The below is the result of hackbench w/ and w/o the patch:
g=    2      4     6       8      10     12      14
w/o: 1.8151 3.8499 5.5142 7.2491 9.0340 10.7345 12.0929
w/ : 1.8396 3.8250 5.4780 7.3442 9.0172 10.5950 11.9113

Clearly this patch does not impact hackbench much.
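
For reference, the averaging described above can be done with a trivial
wrapper along the following lines (a sketch, not part of the patch); it just
parses the "Time:" line that hackbench prints:

#!/usr/bin/python
# run hackbench 10 times per group count and print the average "Time:" value
import subprocess

for g in range(2, 16, 2):
	times = []
	for _ in range(10):
		out = subprocess.check_output(
			["numactl", "-N", "0", "hackbench", "-p", "-T",
			 "-l", "20000", "-g", str(g)]).decode()
		for line in out.splitlines():
			if line.startswith("Time:"):
				times.append(float(line.split()[1]))
	print("g=%d avg=%.4f" % (g, sum(times) / len(times)))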

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 arch/arm64/Kconfig             |  7 +++++++
 include/linux/sched/cluster.h  | 19 +++++++++++++++++++
 include/linux/sched/topology.h |  7 +++++++
 include/linux/topology.h       |  7 +++++++
 kernel/sched/core.c            | 20 ++++++++++++++++++++
 kernel/sched/fair.c            |  4 ++++
 kernel/sched/sched.h           |  1 +
 kernel/sched/topology.c        |  5 +++++
 8 files changed, 70 insertions(+)
 create mode 100644 include/linux/sched/cluster.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1f212b4..9432a30 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -977,6 +977,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (CPUs sharing an
+	  internal bus or LLC cache tags). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/include/linux/sched/cluster.h b/include/linux/sched/cluster.h
new file mode 100644
index 0000000..ea6c475
--- /dev/null
+++ b/include/linux/sched/cluster.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_SCHED_CLUSTER_H
+#define _LINUX_SCHED_CLUSTER_H
+
+#include <linux/static_key.h>
+
+#ifdef CONFIG_SCHED_CLUSTER
+extern struct static_key_false sched_cluster_present;
+
+static __always_inline bool sched_cluster_active(void)
+{
+	return static_branch_likely(&sched_cluster_present);
+}
+#else
+static inline bool sched_cluster_active(void) { return false; }
+
+#endif
+
+#endif
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778..2f9166f 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline int cpu_cluster_flags(void)
+{
+	return SD_SHARE_PKG_RESOURCES;
+}
+#endif
+
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 80d27d7..0b3704a 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 28c4df6..19e2536 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7840,6 +7840,17 @@ int sched_cpu_activate(unsigned int cpu)
 	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
 		static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	/*
+	 * When going up, increment the number of cluster cpus with
+	 * cluster present.
+	 */
+	if (cpumask_weight(cpu_cluster_mask(cpu)) > cpumask_weight(cpu_smt_mask(cpu)) &&
+	    cpumask_weight(cpu_cluster_mask(cpu)) < cpumask_weight(cpu_coregroup_mask(cpu)))
+		static_branch_inc_cpuslocked(&sched_cluster_present);
+#endif
+
 	set_cpu_active(cpu, true);
 
 	if (sched_smp_initialized) {
@@ -7916,6 +7927,15 @@ int sched_cpu_deactivate(unsigned int cpu)
 		static_branch_dec_cpuslocked(&sched_smt_present);
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+	/*
+	 * When going down, decrement the number of cpus with cluster present.
+	 */
+	if (cpumask_weight(cpu_cluster_mask(cpu)) > cpumask_weight(cpu_smt_mask(cpu)) &&
+	    cpumask_weight(cpu_cluster_mask(cpu)) < cpumask_weight(cpu_coregroup_mask(cpu)))
+		static_branch_dec_cpuslocked(&sched_cluster_present);
+#endif
+
 	if (!sched_smp_initialized)
 		return 0;
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2e2ab1e..c92ad9f2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6021,6 +6021,10 @@ static inline int __select_idle_cpu(int cpu)
 	return -1;
 }
 
+#ifdef CONFIG_SCHED_CLUSTER
+DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
+#endif
+
 #ifdef CONFIG_SCHED_SMT
 DEFINE_STATIC_KEY_FALSE(sched_smt_present);
 EXPORT_SYMBOL_GPL(sched_smt_present);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d2e09a6..73f7406 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -6,6 +6,7 @@
 
 #include <linux/sched/autogroup.h>
 #include <linux/sched/clock.h>
+#include <linux/sched/cluster.h>
 #include <linux/sched/coredump.h>
 #include <linux/sched/cpufreq.h>
 #include <linux/sched/cputime.h>
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 12f8058..ae1fa00 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1511,6 +1511,11 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc
  2021-03-19  4:16 [RFC PATCH v5 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
  2021-03-19  4:16 ` [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die Barry Song
  2021-03-19  4:16 ` [RFC PATCH v5 2/4] scheduler: add scheduler level for clusters Barry Song
@ 2021-03-19  4:16 ` Barry Song
  2021-03-19 21:39   ` Song Bao Hua (Barry Song)
  2021-03-19  4:16 ` [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86 Barry Song
  3 siblings, 1 reply; 17+ messages in thread
From: Barry Song @ 2021-03-19  4:16 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

On Kunpeng 920, CPUs within one cluster can communicate with each other
much faster than CPUs across different clusters. A simple hackbench run
can demonstrate that: hackbench running on 4 CPUs within a single cluster
versus on 4 CPUs in different clusters shows a large contrast:
(1) within a cluster:
root@ubuntu:~# taskset -c 0,1,2,3 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 4.285

(2) across clusters:
root@ubuntu:~# taskset -c 0,4,8,12 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 5.524

This inspires us to change the wake_affine path to scan the cluster before
scanning the whole LLC, trying to gather related tasks into one cluster,
which is what this patch does.

To evaluate the performance impact on related tasks talking with each
other, we run the hackbench command below with the -g parameter varying from
2 to 14; for each g, we run the command 10 times and take the
average time:
$ numactl -N 0 hackbench -p -T -l 20000 -g $1

hackbench reports the time needed to complete a certain number of message
transmissions between a certain number of tasks, for example:
$ numactl -N 0 hackbench -p -T -l 20000 -g 10
Running in threaded mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 20000 messages of 100 bytes

The below is the result of hackbench w/ and w/o cluster patch:
g=    2      4     6       8      10     12      14
w/o: 1.8151 3.8499 5.5142 7.2491 9.0340 10.7345 12.0929
w/ : 1.7881 3.7371 5.3301 6.9747 8.6909  9.9235 11.2608

Evidently some recent commits have improved hackbench, so the change in
the wake_affine path brings a smaller gain on hackbench compared to what
we got in RFC v4.
It is also much trickier to leverage wake_affine than to leverage the
spreading of tasks in the previous patch, as load balancing might pull
apart tasks which have been packed into a cluster, so alternative
suggestions are welcome.

In order to figure out how many times a CPU is picked from the local cluster
and how many times a CPU is picked outside the cluster, this patch adds a
tracepoint for debugging purposes, together with a userspace bcc script that
prints a histogram of the results of select_idle_cpu():
#!/usr/bin/python
#
# selectidlecpu.py	select idle cpu histogram.
#
# A Ctrl-C will print the gathered histogram then exit.
#
# 18-March-2021 Barry Song Created this.

from __future__ import print_function
from bcc import BPF
from time import sleep

# load BPF program
b = BPF(text="""

BPF_HISTOGRAM(dist);

TRACEPOINT_PROBE(sched, sched_select_idle_cpu)
{
	u32 e;
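	/* Kunpeng 920 has 4 CPUs per cluster, so cpu/4 identifies the cluster */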
	if (args->idle / 4 == args->target/4)
		e = 0; /* idle cpu from same cluster */
	else if (args->idle != -1)
		e = 1; /* idle cpu from different clusters */
	else
		e = 2; /* no idle cpu */

	dist.increment(e);
	return 0;
}
""")

# header
print("Tracing... Hit Ctrl-C to end.")

# trace until Ctrl-C
try:
	sleep(99999999)
except KeyboardInterrupt:
	print()

# output

print("\nlinear histogram")
print("~~~~~~~~~~~~~~~~")
b["dist"].print_linear_hist("idle")

Even with g=14, when the system is quite busy, we can see that an idle CPU
is still sometimes picked from the local cluster:
linear histogram
~~~~~~~~~~~~~~~~
     idle          : count     distribution
        0          : 15234281 |***********                             |
        1          : 18494    |                                        |
        2          : 53066152 |****************************************|

0: local cluster
1: out of the cluster
2: select_idle_cpu() returns -1
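
Reading those counts: under this load select_idle_cpu() found no idle CPU in
roughly 78% of the calls, but of the calls which did find one, about 99.9%
(15234281 out of 15252775) found it within the local cluster.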

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 include/trace/events/sched.h | 22 ++++++++++++++++++++++
 kernel/sched/fair.c          | 32 +++++++++++++++++++++++++++++++-
 2 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index cbe3e15..86608cf 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -136,6 +136,28 @@
 );
 
 /*
+ * Tracepoint for select_idle_cpu:
+ */
+TRACE_EVENT(sched_select_idle_cpu,
+
+	TP_PROTO(int target, int idle),
+
+	TP_ARGS(target, idle),
+
+	TP_STRUCT__entry(
+		__field(	int,	target			)
+		__field(	int,	idle			)
+	),
+
+	TP_fast_assign(
+		__entry->target	= target;
+		__entry->idle = idle;
+	),
+
+	TP_printk("target=%d idle=%d", __entry->target, __entry->idle)
+);
+
+/*
  * Tracepoint for waking up a task:
  */
 DECLARE_EVENT_CLASS(sched_wakeup_template,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c92ad9f2..3892d42 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6150,7 +6150,12 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	if (!this_sd)
 		return -1;
 
-	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+	if (!sched_cluster_active())
+		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+#ifdef CONFIG_SCHED_CLUSTER
+	if (sched_cluster_active())
+		cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
+#endif
 
 	if (sched_feat(SIS_PROP) && !smt) {
 		u64 avg_cost, avg_idle, span_avg;
@@ -6171,6 +6176,29 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 		time = cpu_clock(this);
 	}
 
+#ifdef CONFIG_SCHED_CLUSTER
+	if (sched_cluster_active()) {
+		for_each_cpu_wrap(cpu, cpus, target) {
+			if (smt) {
+				i = select_idle_core(p, cpu, cpus, &idle_cpu);
+				if ((unsigned int)i < nr_cpumask_bits)
+					return i;
+
+			} else {
+				if (!--nr)
+					return -1;
+				idle_cpu = __select_idle_cpu(cpu);
+				if ((unsigned int)idle_cpu < nr_cpumask_bits) {
+					goto done;
+				}
+			}
+		}
+
+		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+		cpumask_andnot(cpus, cpus, cpu_cluster_mask(target));
+	}
+#endif
+
 	for_each_cpu_wrap(cpu, cpus, target) {
 		if (smt) {
 			i = select_idle_core(p, cpu, cpus, &idle_cpu);
@@ -6186,6 +6214,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 		}
 	}
 
+done:
 	if (smt)
 		set_idle_cores(this, false);
 
@@ -6324,6 +6353,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		return target;
 
 	i = select_idle_cpu(p, sd, target);
+	trace_sched_select_idle_cpu(target, i);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
  2021-03-19  4:16 [RFC PATCH v5 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
                   ` (2 preceding siblings ...)
  2021-03-19  4:16 ` [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc Barry Song
@ 2021-03-19  4:16 ` Barry Song
  2021-03-23 22:50   ` Tim Chen
  3 siblings, 1 reply; 17+ messages in thread
From: Barry Song @ 2021-03-19  4:16 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

From: Tim Chen <tim.c.chen@linux.intel.com>

There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache
is shared among a cluster of cores instead of being exclusive
to one single core.

To prevent oversubscription of L2 cache, load should be
balanced between such L2 clusters, especially for tasks with
no shared data.

Also, with a cluster scheduling policy where tasks are woken up
in the same L2 cluster, we will benefit from keeping related tasks,
which likely share data, in the same L2
cluster.

Add CPU masks of CPUs sharing the L2 cache so we can build such
an L2 cluster scheduler domain.
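
As an aside (not part of this patch), the L2 sharing groups that
cpu_l2c_shared_map is meant to track can already be observed from userspace
through the cacheinfo sysfs files; a minimal sketch that groups CPUs by their
level-2 shared_cpu_list:

#!/usr/bin/python
# group CPUs by the set of CPUs sharing their L2 cache (cacheinfo sysfs)
import glob
import os
from collections import defaultdict

l2_groups = defaultdict(set)
for idx in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index[0-9]*")):
	with open(os.path.join(idx, "level")) as f:
		if f.read().strip() != "2":
			continue
	with open(os.path.join(idx, "shared_cpu_list")) as f:
		shared = f.read().strip()
	cpu = int(os.path.basename(idx.split("/cache/")[0])[3:])
	l2_groups[shared].add(cpu)

for shared, cpus in sorted(l2_groups.items()):
	print("L2 shared_cpu_list %s: cpus %s" % (shared, sorted(cpus)))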

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 arch/x86/Kconfig                |  8 ++++++++
 arch/x86/include/asm/smp.h      |  7 +++++++
 arch/x86/include/asm/topology.h |  1 +
 arch/x86/kernel/cpu/cacheinfo.c |  1 +
 arch/x86/kernel/cpu/common.c    |  3 +++
 arch/x86/kernel/smpboot.c       | 43 ++++++++++++++++++++++++++++++++++++++++-
 6 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2792879..d597de2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1002,6 +1002,14 @@ config NR_CPUS
 	  This is purely to save memory: each supported CPU adds about 8KB
 	  to the kernel image.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	default n
+	help
+	 Cluster scheduler support improves the CPU scheduler's decision
+	 making when dealing with machines that have clusters of CPUs
+	 sharing L2 cache. If unsure say N here.
+
 config SCHED_SMT
 	def_bool y if SMP
 
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index c0538f8..9cbc4ae 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -16,7 +16,9 @@
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_die_map);
 /* cpus sharing the last level cache: */
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id);
+DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id);
 DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
 
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
@@ -24,6 +26,11 @@ static inline struct cpumask *cpu_llc_shared_mask(int cpu)
 	return per_cpu(cpu_llc_shared_map, cpu);
 }
 
+static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
+{
+	return per_cpu(cpu_l2c_shared_map, cpu);
+}
+
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid);
diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 9239399..2a11ccc 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -103,6 +103,7 @@ static inline void setup_node_to_cpumask_map(void) { }
 #include <asm-generic/topology.h>
 
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
+extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 
 #define topology_logical_package_id(cpu)	(cpu_data(cpu).logical_proc_id)
 #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 3ca9be4..0d03a71 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -846,6 +846,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
 		l2 = new_l2;
 #ifdef CONFIG_SMP
 		per_cpu(cpu_llc_id, cpu) = l2_id;
+		per_cpu(cpu_l2c_id, cpu) = l2_id;
 #endif
 	}
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index ab640ab..0ba282d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -78,6 +78,9 @@
 /* Last level cache ID of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
 
+/* L2 cache ID of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id) = BAD_APICID;
+
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 02813a7..c85ffa8 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,8 @@
 
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
+
 /* Per CPU bogomips and other parameters */
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
@@ -501,6 +503,21 @@ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 	return topology_sane(c, o, "llc");
 }
 
+static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+{
+	int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
+
+	/* Do not match if we do not have a valid APICID for cpu: */
+	if (per_cpu(cpu_l2c_id, cpu1) == BAD_APICID)
+		return false;
+
+	/* Do not match if L2 cache id does not match: */
+	if (per_cpu(cpu_l2c_id, cpu1) != per_cpu(cpu_l2c_id, cpu2))
+		return false;
+
+	return topology_sane(c, o, "l2c");
+}
+
 /*
  * Unlike the other levels, we do not enforce keeping a
  * multicore group inside a NUMA node.  If this happens, we will
@@ -522,7 +539,7 @@ static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 }
 
 
-#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
+#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_CLUSTER) || defined(CONFIG_SCHED_MC)
 static inline int x86_sched_itmt_flags(void)
 {
 	return sysctl_sched_itmt_enabled ? SD_ASYM_PACKING : 0;
@@ -540,12 +557,21 @@ static int x86_smt_flags(void)
 	return cpu_smt_flags() | x86_sched_itmt_flags();
 }
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+static int x86_cluster_flags(void)
+{
+	return cpu_cluster_flags() | x86_sched_itmt_flags();
+}
+#endif
 #endif
 
 static struct sched_domain_topology_level x86_numa_in_package_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -556,6 +582,9 @@ static int x86_smt_flags(void)
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -583,6 +612,7 @@ void set_cpu_sibling_map(int cpu)
 	if (!has_mp) {
 		cpumask_set_cpu(cpu, topology_sibling_cpumask(cpu));
 		cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu));
+		cpumask_set_cpu(cpu, cpu_l2c_shared_mask(cpu));
 		cpumask_set_cpu(cpu, topology_core_cpumask(cpu));
 		cpumask_set_cpu(cpu, topology_die_cpumask(cpu));
 		c->booted_cores = 1;
@@ -598,6 +628,8 @@ void set_cpu_sibling_map(int cpu)
 		if ((i == cpu) || (has_mp && match_llc(c, o)))
 			link_mask(cpu_llc_shared_mask, cpu, i);
 
+		if ((i == cpu) || (has_mp && match_l2c(c, o)))
+			link_mask(cpu_l2c_shared_mask, cpu, i);
 	}
 
 	/*
@@ -649,6 +681,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return cpu_llc_shared_mask(cpu);
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return cpu_l2c_shared_mask(cpu);
+}
+
 static void impress_friends(void)
 {
 	int cpu;
@@ -1332,6 +1369,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 		zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_die_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_llc_shared_map, i), GFP_KERNEL);
+		zalloc_cpumask_var(&per_cpu(cpu_l2c_shared_map, i), GFP_KERNEL);
 	}
 
 	/*
@@ -1556,7 +1594,10 @@ static void remove_siblinginfo(int cpu)
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
 	for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
 		cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
+	for_each_cpu(sibling, cpu_l2c_shared_mask(cpu))
+		cpumask_clear_cpu(cpu, cpu_l2c_shared_mask(sibling));
 	cpumask_clear(cpu_llc_shared_mask(cpu));
+	cpumask_clear(cpu_l2c_shared_mask(cpu));
 	cpumask_clear(topology_sibling_cpumask(cpu));
 	cpumask_clear(topology_core_cpumask(cpu));
 	cpumask_clear(topology_die_cpumask(cpu));
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die
  2021-03-19  4:16 ` [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die Barry Song
@ 2021-03-19  6:35   ` Greg KH
  2021-03-19  6:57     ` Song Bao Hua (Barry Song)
  0 siblings, 1 reply; 17+ messages in thread
From: Greg KH @ 2021-03-19  6:35 UTC (permalink / raw)
  To: Barry Song
  Cc: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman, msys.mizuma, valentin.schneider, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa

On Fri, Mar 19, 2021 at 05:16:15PM +1300, Barry Song wrote:
> diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
> index b90dafc..f9d3745 100644
> --- a/Documentation/admin-guide/cputopology.rst
> +++ b/Documentation/admin-guide/cputopology.rst
> @@ -24,6 +24,12 @@ core_id:
>  	identifier (rather than the kernel's).  The actual value is
>  	architecture and platform dependent.
>  
> +cluster_id:
> +
> +	the Cluster ID of cpuX.  Typically it is the hardware platform's
> +	identifier (rather than the kernel's).  The actual value is
> +	architecture and platform dependent.
> +
>  book_id:
>  
>  	the book ID of cpuX. Typically it is the hardware platform's
> @@ -56,6 +62,14 @@ package_cpus_list:
>  	human-readable list of CPUs sharing the same physical_package_id.
>  	(deprecated name: "core_siblings_list")
>  
> +cluster_cpus:
> +
> +	internal kernel map of CPUs within the same cluster.
> +
> +cluster_cpus_list:
> +
> +	human-readable list of CPUs within the same cluster.
> +
>  die_cpus:
>  
>  	internal kernel map of CPUs within the same die.

Why are these sysfs files in this file, and not in a Documentation/ABI/
file which can be correctly parsed and shown to userspace?

Any chance you can fix that up here as well?

Also note that "list" is not something that goes in sysfs, sysfs is "one
value per file", and a list is not "one value".  How do you prevent
overflowing the buffer of the sysfs file if you have a "list"?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die
  2021-03-19  6:35   ` Greg KH
@ 2021-03-19  6:57     ` Song Bao Hua (Barry Song)
  2021-03-19  9:36       ` Jonathan Cameron
  0 siblings, 1 reply; 17+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-03-19  6:57 UTC (permalink / raw)
  To: Greg KH
  Cc: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman, msys.mizuma, valentin.schneider, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Greg KH [mailto:gregkh@linuxfoundation.org]
> Sent: Friday, March 19, 2021 7:35 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com; Jonathan
> Cameron <jonathan.cameron@huawei.com>; juri.lelli@redhat.com;
> mark.rutland@arm.com; sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> a die
> 
> On Fri, Mar 19, 2021 at 05:16:15PM +1300, Barry Song wrote:
> > diff --git a/Documentation/admin-guide/cputopology.rst
> b/Documentation/admin-guide/cputopology.rst
> > index b90dafc..f9d3745 100644
> > --- a/Documentation/admin-guide/cputopology.rst
> > +++ b/Documentation/admin-guide/cputopology.rst
> > @@ -24,6 +24,12 @@ core_id:
> >  	identifier (rather than the kernel's).  The actual value is
> >  	architecture and platform dependent.
> >
> > +cluster_id:
> > +
> > +	the Cluster ID of cpuX.  Typically it is the hardware platform's
> > +	identifier (rather than the kernel's).  The actual value is
> > +	architecture and platform dependent.
> > +
> >  book_id:
> >
> >  	the book ID of cpuX. Typically it is the hardware platform's
> > @@ -56,6 +62,14 @@ package_cpus_list:
> >  	human-readable list of CPUs sharing the same physical_package_id.
> >  	(deprecated name: "core_siblings_list")
> >
> > +cluster_cpus:
> > +
> > +	internal kernel map of CPUs within the same cluster.
> > +
> > +cluster_cpus_list:
> > +
> > +	human-readable list of CPUs within the same cluster.
> > +
> >  die_cpus:
> >
> >  	internal kernel map of CPUs within the same die.
> 
> Why are these sysfs files in this file, and not in a Documentation/ABI/
> file which can be correctly parsed and shown to userspace?

Well, those ABIs have been there for quite a long time. It is like:

[root@ceph1 topology]# ls
core_id  core_siblings  core_siblings_list  physical_package_id thread_siblings  thread_siblings_list
[root@ceph1 topology]# pwd
/sys/devices/system/cpu/cpu100/topology
[root@ceph1 topology]# cat core_siblings_list
64-127
[root@ceph1 topology]#

> 
> Any chance you can fix that up here as well?

Yes, we will send a separate patch to address this, which won't
be in this patchset. This patchset will be based on that one.

> 
> Also note that "list" is not something that goes in sysfs, sysfs is "one
> value per file", and a list is not "one value".  How do you prevent
> overflowing the buffer of the sysfs file if you have a "list"?
> 

At a glance, the list is using "-" rather than a real list
[root@ceph1 topology]# cat core_siblings_list
64-127

Anyway, I will take a look if it has any chance to overflow.

> thanks,
> 
> greg k-h

Thanks
Barry


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die
  2021-03-19  6:57     ` Song Bao Hua (Barry Song)
@ 2021-03-19  9:36       ` Jonathan Cameron
  2021-03-19 10:01         ` Greg KH
  0 siblings, 1 reply; 17+ messages in thread
From: Jonathan Cameron @ 2021-03-19  9:36 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: Greg KH, tim.c.chen, catalin.marinas, will, rjw, vincent.guittot,
	bp, tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt,
	bsegall, mgorman, msys.mizuma, valentin.schneider, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa

On Fri, 19 Mar 2021 06:57:08 +0000
"Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com> wrote:

> > -----Original Message-----
> > From: Greg KH [mailto:gregkh@linuxfoundation.org]
> > Sent: Friday, March 19, 2021 7:35 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> > tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> > dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> > mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com; Jonathan
> > Cameron <jonathan.cameron@huawei.com>; juri.lelli@redhat.com;
> > mark.rutland@arm.com; sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> > linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> > linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> > Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> > <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> > linuxarm@openeuler.org; hpa@zytor.com
> > Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> > a die
> > 
> > On Fri, Mar 19, 2021 at 05:16:15PM +1300, Barry Song wrote:  
> > > diff --git a/Documentation/admin-guide/cputopology.rst  
> > b/Documentation/admin-guide/cputopology.rst  
> > > index b90dafc..f9d3745 100644
> > > --- a/Documentation/admin-guide/cputopology.rst
> > > +++ b/Documentation/admin-guide/cputopology.rst
> > > @@ -24,6 +24,12 @@ core_id:
> > >  	identifier (rather than the kernel's).  The actual value is
> > >  	architecture and platform dependent.
> > >
> > > +cluster_id:
> > > +
> > > +	the Cluster ID of cpuX.  Typically it is the hardware platform's
> > > +	identifier (rather than the kernel's).  The actual value is
> > > +	architecture and platform dependent.
> > > +
> > >  book_id:
> > >
> > >  	the book ID of cpuX. Typically it is the hardware platform's
> > > @@ -56,6 +62,14 @@ package_cpus_list:
> > >  	human-readable list of CPUs sharing the same physical_package_id.
> > >  	(deprecated name: "core_siblings_list")
> > >
> > > +cluster_cpus:
> > > +
> > > +	internal kernel map of CPUs within the same cluster.
> > > +
> > > +cluster_cpus_list:
> > > +
> > > +	human-readable list of CPUs within the same cluster.
> > > +
> > >  die_cpus:
> > >
> > >  	internal kernel map of CPUs within the same die.  
> > 
> > Why are these sysfs files in this file, and not in a Documentation/ABI/
> > file which can be correctly parsed and shown to userspace?  
> 
> Well. Those ABIs have been there for much a long time. It is like:
> 
> [root@ceph1 topology]# ls
> core_id  core_siblings  core_siblings_list  physical_package_id thread_siblings  thread_siblings_list
> [root@ceph1 topology]# pwd
> /sys/devices/system/cpu/cpu100/topology
> [root@ceph1 topology]# cat core_siblings_list
> 64-127
> [root@ceph1 topology]#
> 
> > 
> > Any chance you can fix that up here as well?  
> 
> Yes. we will send a separate patch to address this, which won't
> be in this patchset. This patchset will base on that one.
> 
> > 
> > Also note that "list" is not something that goes in sysfs, sysfs is "one
> > value per file", and a list is not "one value".  How do you prevent
> > overflowing the buffer of the sysfs file if you have a "list"?
> >   
> 
> At a glance, the list is using "-" rather than a real list
> [root@ceph1 topology]# cat core_siblings_list
> 64-127
> 
> Anyway, I will take a look if it has any chance to overflow.

It could in theory be alternate CPUs as a comma-separated list,
so it would get interesting around 500-1000 cpus (guessing).

Hopefully no one has that crazy a cpu numbering scheme, but it's possible
(note that cluster is fine for this, but I guess it might eventually
happen for the core-siblings list (cpus within a package)).

Shouldn't crash or anything like that, but the output might terminate early.

The sysfs file conversion got mentioned earlier, but I forgot
to remind Barry about it when he took this patch into his series.
Sorry about that!

Jonathan


> 
> > thanks,
> > 
> > greg k-h  
> 
> Thanks
> Barry
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die
  2021-03-19  9:36       ` Jonathan Cameron
@ 2021-03-19 10:01         ` Greg KH
  2021-04-20  3:30           ` Song Bao Hua (Barry Song)
  2021-04-21  4:06           ` Song Bao Hua (Barry Song)
  0 siblings, 2 replies; 17+ messages in thread
From: Greg KH @ 2021-03-19 10:01 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: Song Bao Hua (Barry Song),
	tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman, msys.mizuma, valentin.schneider, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa

On Fri, Mar 19, 2021 at 09:36:16AM +0000, Jonathan Cameron wrote:
> On Fri, 19 Mar 2021 06:57:08 +0000
> "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com> wrote:
> 
> > > -----Original Message-----
> > > From: Greg KH [mailto:gregkh@linuxfoundation.org]
> > > Sent: Friday, March 19, 2021 7:35 PM
> > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > > rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> > > tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> > > dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> > > mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com; Jonathan
> > > Cameron <jonathan.cameron@huawei.com>; juri.lelli@redhat.com;
> > > mark.rutland@arm.com; sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> > > linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> > > linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> > > Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> > > <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> > > linuxarm@openeuler.org; hpa@zytor.com
> > > Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> > > a die
> > > 
> > > On Fri, Mar 19, 2021 at 05:16:15PM +1300, Barry Song wrote:  
> > > > diff --git a/Documentation/admin-guide/cputopology.rst  
> > > b/Documentation/admin-guide/cputopology.rst  
> > > > index b90dafc..f9d3745 100644
> > > > --- a/Documentation/admin-guide/cputopology.rst
> > > > +++ b/Documentation/admin-guide/cputopology.rst
> > > > @@ -24,6 +24,12 @@ core_id:
> > > >  	identifier (rather than the kernel's).  The actual value is
> > > >  	architecture and platform dependent.
> > > >
> > > > +cluster_id:
> > > > +
> > > > +	the Cluster ID of cpuX.  Typically it is the hardware platform's
> > > > +	identifier (rather than the kernel's).  The actual value is
> > > > +	architecture and platform dependent.
> > > > +
> > > >  book_id:
> > > >
> > > >  	the book ID of cpuX. Typically it is the hardware platform's
> > > > @@ -56,6 +62,14 @@ package_cpus_list:
> > > >  	human-readable list of CPUs sharing the same physical_package_id.
> > > >  	(deprecated name: "core_siblings_list")
> > > >
> > > > +cluster_cpus:
> > > > +
> > > > +	internal kernel map of CPUs within the same cluster.
> > > > +
> > > > +cluster_cpus_list:
> > > > +
> > > > +	human-readable list of CPUs within the same cluster.
> > > > +
> > > >  die_cpus:
> > > >
> > > >  	internal kernel map of CPUs within the same die.  
> > > 
> > > Why are these sysfs files in this file, and not in a Documentation/ABI/
> > > file which can be correctly parsed and shown to userspace?  
> > 
> > Well, those ABIs have been there for quite a long time. It is like:
> > 
> > [root@ceph1 topology]# ls
> > core_id  core_siblings  core_siblings_list  physical_package_id thread_siblings  thread_siblings_list
> > [root@ceph1 topology]# pwd
> > /sys/devices/system/cpu/cpu100/topology
> > [root@ceph1 topology]# cat core_siblings_list
> > 64-127
> > [root@ceph1 topology]#
> > 
> > > 
> > > Any chance you can fix that up here as well?  
> > 
> > Yes, we will send a separate patch to address this, which won't
> > be in this patchset. This patchset will be based on that one.
> > 
> > > 
> > > Also note that "list" is not something that goes in sysfs, sysfs is "one
> > > value per file", and a list is not "one value".  How do you prevent
> > > overflowing the buffer of the sysfs file if you have a "list"?
> > >   
> > 
> > At a glance, the list is using "-" rather than a real list
> > [root@ceph1 topology]# cat core_siblings_list
> > 64-127
> > 
> > Anyway, I will take a look if it has any chance to overflow.
> 
> It could in theory be alternate CPUs as a comma-separated list,
> so it would get interesting around 500-1000 cpus (guessing).
>
> Hopefully no one has that crazy a cpu numbering scheme, but it's possible
> (note that cluster is fine for this, but I guess it might eventually
> happen for the core-siblings list (cpus within a package)).
>
> Shouldn't crash or anything like that, but the output might terminate early.

We have a broken sysfs api already for listing LED numbers that has had
to be worked around in the past, please do not create a new one with
that same problem, we should learn from them :)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc
  2021-03-19  4:16 ` [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc Barry Song
@ 2021-03-19 21:39   ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 17+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-03-19 21:39 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Friday, March 19, 2021 5:16 PM
> To: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de
> Cc: msys.mizuma@gmail.com; valentin.schneider@arm.com;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com; Song Bao Hua
> (Barry Song) <song.bao.hua@hisilicon.com>
> Subject: [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning
> the whole llc
> 
> On kunpeng920, cpus within one cluster can communicate with each other
> much faster than cpus across different clusters. A simple hackbench
> can prove that.
> hackbench running on 4 cpus in one single cluster and 4 cpus in
> different clusters shows a large contrast:
> (1) within a cluster:
> root@ubuntu:~# taskset -c 0,1,2,3 hackbench -p -T -l 20000 -g 1
> Running in threaded mode with 1 groups using 40 file descriptors each
> (== 40 tasks)
> Each sender will pass 20000 messages of 100 bytes
> Time: 4.285
> 
> (2) across clusters:
> root@ubuntu:~# taskset -c 0,4,8,12 hackbench -p -T -l 20000 -g 1
> Running in threaded mode with 1 groups using 40 file descriptors each
> (== 40 tasks)
> Each sender will pass 20000 messages of 100 bytes
> Time: 5.524
> 
> This inspires us to change the wake_affine path to scan the cluster before
> scanning the whole LLC to try to gather related tasks in one cluster,
> which is done by this patch.
> 
> To evaluate the performance impact on related tasks talking with each
> other, we run the below hackbench with different -g parameters from 2
> to 14; for each g, we run the command 10 times and take the
> average time:
> $ numactl -N 0 hackbench -p -T -l 20000 -g $1
> 
> hackbench will report the time needed to complete a certain number
> of message transmissions between a certain number of tasks, for example:
> $ numactl -N 0 hackbench -p -T -l 20000 -g 10
> Running in threaded mode with 10 groups using 40 file descriptors each
> (== 400 tasks)
> Each sender will pass 20000 messages of 100 bytes
> 
> The below is the result of hackbench w/ and w/o cluster patch:
> g=    2      4     6       8      10     12      14
> w/o: 1.8151 3.8499 5.5142 7.2491 9.0340 10.7345 12.0929
> w/ : 1.7881 3.7371 5.3301 6.9747 8.6909  9.9235 11.2608
> 
> Obviously some recent commits have improved hackbench, so the change
> in the wake_affine path brings a smaller improvement on hackbench
> compared to what we got in RFC v4.
> And obviously it is much trickier to leverage wake_affine than to
> leverage the scattering of tasks in the previous patch, as load balance
> might pull tasks which have already been packed into a cluster, so
> alternative suggestions are welcome.
> 
> In order to figure out how many times a cpu is picked from the cluster
> and how many times a cpu is picked outside the cluster, a tracepoint for
> debug purposes is added in this patch, along with a userspace bcc script
> to print the histogram of the results of select_idle_cpu():
> #!/usr/bin/python
> #
> # selectidlecpu.py	select idle cpu histogram.
> #
> # A Ctrl-C will print the gathered histogram then exit.
> #
> # 18-March-2021 Barry Song Created this.
> 
> from __future__ import print_function
> from bcc import BPF
> from time import sleep
> 
> # load BPF program
> b = BPF(text="""
> 
> BPF_HISTOGRAM(dist);
> 
> TRACEPOINT_PROBE(sched, sched_select_idle_cpu)
> {
> 	u32 e;
> 	if (args->idle / 4 == args->target/4)
> 		e = 0; /* idle cpu from same cluster */

Oops here: as -1/4 = 1/4 = 2/4 = 3/4 = 0,
some of the -1 (no idle cpu) results are counted here (local cluster) incorrectly.

> 	else if (args->idle != -1)
> 		e = 1; /* idle cpu from different clusters */
> 	else
> 		e = 2; /* no idle cpu */
> 
> 	dist.increment(e);
> 	return 0;
> }
> """)

Fixed it to:

TRACEPOINT_PROBE(sched, sched_select_idle_cpu)
{
        u32 e;
        if (args->idle == -1)
                e = 2; /* no idle cpu */
        else if (args->idle / 4 == args->target / 4)
                e = 0; /* idle cpu from same cluster */
        else
                e = 1; /* idle cpu from different clusters */

        dist.increment(e);
        return 0;
}

> 
> # header
> print("Tracing... Hit Ctrl-C to end.")
> 
> # trace until Ctrl-C
> try:
> 	sleep(99999999)
> except KeyboardInterrupt:
> 	print()
> 
> # output
> 
> print("\nlinear histogram")
> print("~~~~~~~~~~~~~~~~")
> b["dist"].print_linear_hist("idle")
> 
> Even with g=14, when the system is quite busy, we can see there are still
> some chances an idle cpu is picked from the local cluster:
> linear histogram
> ~~~~~~~~~~~~~~
>      idle          : count     distribution
>         0          : 15234281 |***********                             |
>         1          : 18494    |                                        |
>         2          : 53066152 |****************************************|
> 
> 0: local cluster
> 1: out of the cluster
> 2: select_idle_cpu() returns -1

After fixing the script, the new histogram is like:
linear histogram
~~~~~~~~~~~~~~~~
     idle          : count     distribution
        0          : 2765930  |*                                       |
        1          : 68934    |                                        |
        2          : 77667475 |****************************************|

We get:
Local cluster: 3.4358%
Out of cluster: 0.0856%
-1 (no idle before nr becomes 0): 96.4785%

> 
> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> ---
>  include/trace/events/sched.h | 22 ++++++++++++++++++++++
>  kernel/sched/fair.c          | 32 +++++++++++++++++++++++++++++++-
>  2 files changed, 53 insertions(+), 1 deletion(-)
> 
> diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
> index cbe3e15..86608cf 100644
> --- a/include/trace/events/sched.h
> +++ b/include/trace/events/sched.h
> @@ -136,6 +136,28 @@
>  );
> 
>  /*
> + * Tracepoint for select_idle_cpu:
> + */
> +TRACE_EVENT(sched_select_idle_cpu,
> +
> +	TP_PROTO(int target, int idle),
> +
> +	TP_ARGS(target, idle),
> +
> +	TP_STRUCT__entry(
> +		__field(	int,	target			)
> +		__field(	int,	idle			)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->target	= target;
> +		__entry->idle = idle;
> +	),
> +
> +	TP_printk("target=%d idle=%d", __entry->target, __entry->idle)
> +);
> +
> +/*
>   * Tracepoint for waking up a task:
>   */
>  DECLARE_EVENT_CLASS(sched_wakeup_template,
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c92ad9f2..3892d42 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6150,7 +6150,12 @@ static int select_idle_cpu(struct task_struct *p, struct
> sched_domain *sd, int t
>  	if (!this_sd)
>  		return -1;
> 
> -	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +	if (!sched_cluster_active())
> +		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +#ifdef CONFIG_SCHED_CLUSTER
> +	if (sched_cluster_active())
> +		cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
> +#endif
> 
>  	if (sched_feat(SIS_PROP) && !smt) {
>  		u64 avg_cost, avg_idle, span_avg;
> @@ -6171,6 +6176,29 @@ static int select_idle_cpu(struct task_struct *p, struct
> sched_domain *sd, int t
>  		time = cpu_clock(this);
>  	}
> 
> +#ifdef CONFIG_SCHED_CLUSTER
> +	if (sched_cluster_active()) {
> +		for_each_cpu_wrap(cpu, cpus, target) {
> +			if (smt) {
> +				i = select_idle_core(p, cpu, cpus, &idle_cpu);
> +				if ((unsigned int)i < nr_cpumask_bits)
> +					return i;
> +
> +			} else {
> +				if (!--nr)
> +					return -1;
> +				idle_cpu = __select_idle_cpu(cpu);
> +				if ((unsigned int)idle_cpu < nr_cpumask_bits) {
> +					goto done;
> +				}
> +			}

BTW, if I return -1 here directly and don't fall back to the LLC, I
can get an even better benchmark result:

g=            2      4     6       8      10     12      14
w/o:        1.8151 3.8499 5.5142 7.2491 9.0340 10.7345 12.0929
w/ :        1.7881 3.7371 5.3301 6.9747 8.6909  9.9235 11.2608
return -1:  1.8324 3.6140 5.1029 6.5016 8.1867  9.7559 10.7716

so it seems it is much easier to get a real and solid impact from the
wake-up path change, compared to the previous 2/4 patch in which we
are only spreading tasks.
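
A minimal sketch of that "return -1" variant, based only on the hunk
quoted above (an illustration, not the code that was actually posted):
once the cluster has been scanned without finding an idle cpu, bail out
instead of widening the search to the rest of the LLC:

#ifdef CONFIG_SCHED_CLUSTER
	if (sched_cluster_active()) {
		for_each_cpu_wrap(cpu, cpus, target) {
			if (smt) {
				i = select_idle_core(p, cpu, cpus, &idle_cpu);
				if ((unsigned int)i < nr_cpumask_bits)
					return i;
			} else {
				if (!--nr)
					return -1;
				idle_cpu = __select_idle_cpu(cpu);
				if ((unsigned int)idle_cpu < nr_cpumask_bits)
					goto done;
			}
		}
		/*
		 * No idle cpu in the cluster: give up here instead of
		 * falling back to scanning the rest of the LLC.
		 */
		return -1;
	}
#endif

With that, a fully busy cluster would presumably just fall back to the
target cpu in select_idle_sibling().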

> +		}
> +
> +		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +		cpumask_andnot(cpus, cpus, cpu_cluster_mask(target));
> +	}
> +#endif
> +
>  	for_each_cpu_wrap(cpu, cpus, target) {
>  		if (smt) {
>  			i = select_idle_core(p, cpu, cpus, &idle_cpu);
> @@ -6186,6 +6214,7 @@ static int select_idle_cpu(struct task_struct *p, struct
> sched_domain *sd, int t
>  		}
>  	}
> 
> +done:
>  	if (smt)
>  		set_idle_cores(this, false);
> 
> @@ -6324,6 +6353,7 @@ static int select_idle_sibling(struct task_struct *p,
> int prev, int target)
>  		return target;
> 
>  	i = select_idle_cpu(p, sd, target);
> +	trace_sched_select_idle_cpu(target, i);
>  	if ((unsigned)i < nr_cpumask_bits)
>  		return i;
> 
> --
> 1.8.3.1

Thanks
Barry


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
  2021-03-19  4:16 ` [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86 Barry Song
@ 2021-03-23 22:50   ` Tim Chen
  2021-03-23 23:21     ` Song Bao Hua (Barry Song)
  2021-03-31 10:07     ` Song Bao Hua (Barry Song)
  0 siblings, 2 replies; 17+ messages in thread
From: Tim Chen @ 2021-03-23 22:50 UTC (permalink / raw)
  To: Barry Song, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa



On 3/18/21 9:16 PM, Barry Song wrote:
> From: Tim Chen <tim.c.chen@linux.intel.com>
> 
> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
> is shared among a cluster of cores instead of being exclusive
> to one single core.
> 
> To prevent oversubscription of L2 cache, load should be
> balanced between such L2 clusters, especially for tasks with
> no shared data.
> 
> Also with cluster scheduling policy where tasks are woken up
> in the same L2 cluster, we will benefit from keeping tasks
> related to each other and likely sharing data in the same L2
> cluster.
> 
> Add CPU masks of CPUs sharing the L2 cache so we can build such
> L2 cluster scheduler domain.
> 
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>


Barry,

Can you also add this chunk to the patch.
Thanks.

Tim


diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 2a11ccc14fb1..800fa48c9fcd 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -115,6 +115,7 @@ extern unsigned int __max_die_per_package;
 
 #ifdef CONFIG_SMP
 #define topology_die_cpumask(cpu)		(per_cpu(cpu_die_map, cpu))
+#define topology_cluster_cpumask(cpu)		(cpu_clustergroup_mask(cpu))
 #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
 #define topology_sibling_cpumask(cpu)		(per_cpu(cpu_sibling_map, cpu))
 

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* RE: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
  2021-03-23 22:50   ` Tim Chen
@ 2021-03-23 23:21     ` Song Bao Hua (Barry Song)
  2021-04-20 18:31       ` Tim Chen
  2021-03-31 10:07     ` Song Bao Hua (Barry Song)
  1 sibling, 1 reply; 17+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-03-23 23:21 UTC (permalink / raw)
  To: Tim Chen, catalin.marinas, will, rjw, vincent.guittot, bp, tglx,
	mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall, mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> Sent: Wednesday, March 24, 2021 11:51 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> catalin.marinas@arm.com; will@kernel.org; rjw@rjwysocki.net;
> vincent.guittot@linaro.org; bp@alien8.de; tglx@linutronix.de;
> mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de
> Cc: msys.mizuma@gmail.com; valentin.schneider@arm.com;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
> 
> 
> 
> On 3/18/21 9:16 PM, Barry Song wrote:
> > From: Tim Chen <tim.c.chen@linux.intel.com>
> >
> > There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
> > is shared among a cluster of cores instead of being exclusive
> > to one single core.
> >
> > To prevent oversubscription of L2 cache, load should be
> > balanced between such L2 clusters, especially for tasks with
> > no shared data.
> >
> > Also with cluster scheduling policy where tasks are woken up
> > in the same L2 cluster, we will benefit from keeping tasks
> > related to each other and likely sharing data in the same L2
> > cluster.
> >
> > Add CPU masks of CPUs sharing the L2 cache so we can build such
> > L2 cluster scheduler domain.
> >
> > Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> 
> 
> Barry,
> 
> Can you also add this chunk to the patch.
> Thanks.

Sure, Tim, Thanks. I'll put that into patch 4/4 in v6.

> 
> Tim
> 
> 
> diff --git a/arch/x86/include/asm/topology.h
> b/arch/x86/include/asm/topology.h
> index 2a11ccc14fb1..800fa48c9fcd 100644
> --- a/arch/x86/include/asm/topology.h
> +++ b/arch/x86/include/asm/topology.h
> @@ -115,6 +115,7 @@ extern unsigned int __max_die_per_package;
> 
>  #ifdef CONFIG_SMP
>  #define topology_die_cpumask(cpu)		(per_cpu(cpu_die_map, cpu))
> +#define topology_cluster_cpumask(cpu)		(cpu_clustergroup_mask(cpu))
>  #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
>  #define topology_sibling_cpumask(cpu)		(per_cpu(cpu_sibling_map, cpu))
> 

Thanks
Barry



^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
  2021-03-23 22:50   ` Tim Chen
  2021-03-23 23:21     ` Song Bao Hua (Barry Song)
@ 2021-03-31 10:07     ` Song Bao Hua (Barry Song)
  1 sibling, 0 replies; 17+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-03-31 10:07 UTC (permalink / raw)
  To: Tim Chen, catalin.marinas, will, rjw, vincent.guittot, bp, tglx,
	mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall, mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Wednesday, March 24, 2021 12:15 PM
> To: 'Tim Chen' <tim.c.chen@linux.intel.com>; catalin.marinas@arm.com;
> will@kernel.org; rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de
> Cc: msys.mizuma@gmail.com; valentin.schneider@arm.com;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> Subject: RE: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
> 
> 
> 
> > -----Original Message-----
> > From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> > Sent: Wednesday, March 24, 2021 11:51 AM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> > catalin.marinas@arm.com; will@kernel.org; rjw@rjwysocki.net;
> > vincent.guittot@linaro.org; bp@alien8.de; tglx@linutronix.de;
> > mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> > dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> > mgorman@suse.de
> > Cc: msys.mizuma@gmail.com; valentin.schneider@arm.com;
> > gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> > aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> > linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> > xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> > guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu
> (Kenneth)
> > <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> > Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for
> x86
> >
> >
> >
> > On 3/18/21 9:16 PM, Barry Song wrote:
> > > From: Tim Chen <tim.c.chen@linux.intel.com>
> > >
> > > There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
> > > is shared among a cluster of cores instead of being exclusive
> > > to one single core.
> > >
> > > To prevent oversubscription of L2 cache, load should be
> > > balanced between such L2 clusters, especially for tasks with
> > > no shared data.
> > >
> > > Also with cluster scheduling policy where tasks are woken up
> > > in the same L2 cluster, we will benefit from keeping tasks
> > > related to each other and likely sharing data in the same L2
> > > cluster.
> > >
> > > Add CPU masks of CPUs sharing the L2 cache so we can build such
> > > L2 cluster scheduler domain.
> > >
> > > Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> > > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> >
> >
> > Barry,
> >
> > Can you also add this chunk to the patch.
> > Thanks.
> 
> Sure, Tim, Thanks. I'll put that into patch 4/4 in v6.

Hi Tim,
You might want to take a look at this qemu patchset:
https://lore.kernel.org/qemu-devel/20210331095343.12172-1-wangyanan55@huawei.com/T/#t

Someone is trying to leverage this cluster topology
to improve the performance of KVM virtual machines.

> 
> >
> > Tim
> >
> >
> > diff --git a/arch/x86/include/asm/topology.h
> > b/arch/x86/include/asm/topology.h
> > index 2a11ccc14fb1..800fa48c9fcd 100644
> > --- a/arch/x86/include/asm/topology.h
> > +++ b/arch/x86/include/asm/topology.h
> > @@ -115,6 +115,7 @@ extern unsigned int __max_die_per_package;
> >
> >  #ifdef CONFIG_SMP
> >  #define topology_die_cpumask(cpu)		(per_cpu(cpu_die_map, cpu))
> > +#define topology_cluster_cpumask(cpu)		(cpu_clustergroup_mask(cpu))
> >  #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
> >  #define topology_sibling_cpumask(cpu)		(per_cpu(cpu_sibling_map, cpu))
> >
> 

Thanks
Barry

^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die
  2021-03-19 10:01         ` Greg KH
@ 2021-04-20  3:30           ` Song Bao Hua (Barry Song)
  2021-04-21  4:06           ` Song Bao Hua (Barry Song)
  1 sibling, 0 replies; 17+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-04-20  3:30 UTC (permalink / raw)
  To: Greg KH, Jonathan Cameron
  Cc: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman, msys.mizuma, valentin.schneider, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa, tiantao (H)



> -----Original Message-----
> From: Greg KH [mailto:gregkh@linuxfoundation.org]
> Sent: Friday, March 19, 2021 11:02 PM
> To: Jonathan Cameron <jonathan.cameron@huawei.com>
> Cc: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> a die
> 
> On Fri, Mar 19, 2021 at 09:36:16AM +0000, Jonathan Cameron wrote:
> > On Fri, 19 Mar 2021 06:57:08 +0000
> > "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com> wrote:
> >
> > > > -----Original Message-----
> > > > From: Greg KH [mailto:gregkh@linuxfoundation.org]
> > > > Sent: Friday, March 19, 2021 7:35 PM
> > > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > > Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com;
> will@kernel.org;
> > > > rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> > > > tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org;
> peterz@infradead.org;
> > > > dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> > > > mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com;
> Jonathan
> > > > Cameron <jonathan.cameron@huawei.com>; juri.lelli@redhat.com;
> > > > mark.rutland@arm.com; sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> > > > linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> > > > linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O)
> <xuwei5@huawei.com>;
> > > > Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org;
> yangyicong
> > > > <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> > > > linuxarm@openeuler.org; hpa@zytor.com
> > > > Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> > > > a die
> > > >
> > > > On Fri, Mar 19, 2021 at 05:16:15PM +1300, Barry Song wrote:
> > > > > diff --git a/Documentation/admin-guide/cputopology.rst
> > > > b/Documentation/admin-guide/cputopology.rst
> > > > > index b90dafc..f9d3745 100644
> > > > > --- a/Documentation/admin-guide/cputopology.rst
> > > > > +++ b/Documentation/admin-guide/cputopology.rst
> > > > > @@ -24,6 +24,12 @@ core_id:
> > > > >  	identifier (rather than the kernel's).  The actual value is
> > > > >  	architecture and platform dependent.
> > > > >
> > > > > +cluster_id:
> > > > > +
> > > > > +	the Cluster ID of cpuX.  Typically it is the hardware platform's
> > > > > +	identifier (rather than the kernel's).  The actual value is
> > > > > +	architecture and platform dependent.
> > > > > +
> > > > >  book_id:
> > > > >
> > > > >  	the book ID of cpuX. Typically it is the hardware platform's
> > > > > @@ -56,6 +62,14 @@ package_cpus_list:
> > > > >  	human-readable list of CPUs sharing the same physical_package_id.
> > > > >  	(deprecated name: "core_siblings_list")
> > > > >
> > > > > +cluster_cpus:
> > > > > +
> > > > > +	internal kernel map of CPUs within the same cluster.
> > > > > +
> > > > > +cluster_cpus_list:
> > > > > +
> > > > > +	human-readable list of CPUs within the same cluster.
> > > > > +
> > > > >  die_cpus:
> > > > >
> > > > >  	internal kernel map of CPUs within the same die.
> > > >
> > > > Why are these sysfs files in this file, and not in a Documentation/ABI/
> > > > file which can be correctly parsed and shown to userspace?
> > >
> > > Well, those ABIs have been there for quite a long time. It is like:
> > >
> > > [root@ceph1 topology]# ls
> > > core_id  core_siblings  core_siblings_list  physical_package_id
> thread_siblings  thread_siblings_list
> > > [root@ceph1 topology]# pwd
> > > /sys/devices/system/cpu/cpu100/topology
> > > [root@ceph1 topology]# cat core_siblings_list
> > > 64-127
> > > [root@ceph1 topology]#
> > >
> > > >
> > > > Any chance you can fix that up here as well?
> > >
> > > Yes, we will send a separate patch to address this, which won't
> > > be in this patchset. This patchset will be based on that one.
> > >
> > > >
> > > > Also note that "list" is not something that goes in sysfs, sysfs is "one
> > > > value per file", and a list is not "one value".  How do you prevent
> > > > overflowing the buffer of the sysfs file if you have a "list"?
> > > >
> > >
> > > At a glance, the list is using "-" rather than a real list
> > > [root@ceph1 topology]# cat core_siblings_list
> > > 64-127
> > >
> > > Anyway, I will take a look if it has any chance to overflow.
> >
> > It could in theory be alternate CPUs as a comma-separated list,
> > so it would get interesting around 500-1000 cpus (guessing).
> >
> > Hopefully no one has that crazy a cpu numbering scheme, but it's possible
> > (note that cluster is fine for this, but I guess it might eventually
> > happen for the core-siblings list (cpus within a package)).
> >
> > Shouldn't crash or anything like that, but the output might terminate early.
> 
> We have a broken sysfs api already for listing LED numbers that has had
> to be worked around in the past, please do not create a new one with
> that same problem, we should learn from them :)

Another place where I am seeing a cpu list is in the NUMA topology:
/sys/devices/system/node/nodex/cpulist.

But the code has a BUILD_BUG_ON to guard the pagebuf:

static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
{
	ssize_t n;
	cpumask_var_t mask;
	struct node *node_dev = to_node(dev);

	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return 0;

	cpumask_and(mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
	n = cpumap_print_to_pagebuf(list, buf, mask);
	free_cpumask_var(mask);

	return n;
}

For the lists in cpu topology, I haven't seen such a guard, while I believe
we need one. Or am I missing something?
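
If we wanted a similar guard for the topology files, a minimal sketch
(hypothetical helper, modeled on the node_read_cpumap() above) could be:

/*
 * Hypothetical sketch only: bound the topology cpumap/cpulist output
 * to a single PAGE_SIZE buffer at build time, as the node code does.
 */
static ssize_t topology_read_cpumap(const struct cpumask *mask, bool list,
				    char *buf)
{
	/* buf is PAGE_SIZE; 9 chars per 32 bits covers the hex mask. */
	BUILD_BUG_ON((NR_CPUS / 32 * 9) > (PAGE_SIZE - 1));

	return cpumap_print_to_pagebuf(list, buf, mask);
}

As in the node code, the build-time check sizes for the hex mask; a
pathological comma-separated list would still be capped at the page size
by cpumap_print_to_pagebuf() rather than overflowing, which matches the
"might terminate early" concern above.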

> 
> thanks,
> 
> greg k-h

Thanks
Barry


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
  2021-03-23 23:21     ` Song Bao Hua (Barry Song)
@ 2021-04-20 18:31       ` Tim Chen
  2021-04-20 22:31         ` Song Bao Hua (Barry Song)
  0 siblings, 1 reply; 17+ messages in thread
From: Tim Chen @ 2021-04-20 18:31 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song),
	catalin.marinas, will, rjw, vincent.guittot, bp, tglx, mingo,
	lenb, peterz, dietmar.eggemann, rostedt, bsegall, mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



On 3/23/21 4:21 PM, Song Bao Hua (Barry Song) wrote:

>>
>> On 3/18/21 9:16 PM, Barry Song wrote:
>>> From: Tim Chen <tim.c.chen@linux.intel.com>
>>>
>>> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
>>> is shared among a cluster of cores instead of being exclusive
>>> to one single core.
>>>
>>> To prevent oversubscription of L2 cache, load should be
>>> balanced between such L2 clusters, especially for tasks with
>>> no shared data.
>>>
>>> Also with cluster scheduling policy where tasks are woken up
>>> in the same L2 cluster, we will benefit from keeping tasks
>>> related to each other and likely sharing data in the same L2
>>> cluster.
>>>
>>> Add CPU masks of CPUs sharing the L2 cache so we can build such
>>> L2 cluster scheduler domain.
>>>
>>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
>>> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
>>
>>
>> Barry,
>>
>> Can you also add this chunk to the patch.
>> Thanks.
> 
> Sure, Tim, Thanks. I'll put that into patch 4/4 in v6.
> 

Barry,

This chunk will also need to be added to return cluster id for x86.
Please add it in your next rev.

Thanks.

Tim

---

diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 800fa48c9fcd..2548d824f103 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -109,6 +109,7 @@ extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
 #define topology_logical_die_id(cpu)		(cpu_data(cpu).logical_die_id)
 #define topology_die_id(cpu)			(cpu_data(cpu).cpu_die_id)
+#define topology_cluster_id(cpu)		(per_cpu(cpu_l2c_id, cpu))
 #define topology_core_id(cpu)			(cpu_data(cpu).cpu_core_id)
 
 extern unsigned int __max_die_per_package;

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* RE: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
  2021-04-20 18:31       ` Tim Chen
@ 2021-04-20 22:31         ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 17+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-04-20 22:31 UTC (permalink / raw)
  To: Tim Chen, catalin.marinas, will, rjw, vincent.guittot, bp, tglx,
	mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall, mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> Sent: Wednesday, April 21, 2021 6:32 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> catalin.marinas@arm.com; will@kernel.org; rjw@rjwysocki.net;
> vincent.guittot@linaro.org; bp@alien8.de; tglx@linutronix.de;
> mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de
> Cc: msys.mizuma@gmail.com; valentin.schneider@arm.com;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
> 
> 
> 
> On 3/23/21 4:21 PM, Song Bao Hua (Barry Song) wrote:
> 
> >>
> >> On 3/18/21 9:16 PM, Barry Song wrote:
> >>> From: Tim Chen <tim.c.chen@linux.intel.com>
> >>>
> >>> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
> >>> is shared among a cluster of cores instead of being exclusive
> >>> to one single core.
> >>>
> >>> To prevent oversubscription of L2 cache, load should be
> >>> balanced between such L2 clusters, especially for tasks with
> >>> no shared data.
> >>>
> >>> Also with cluster scheduling policy where tasks are woken up
> >>> in the same L2 cluster, we will benefit from keeping tasks
> >>> related to each other and likely sharing data in the same L2
> >>> cluster.
> >>>
> >>> Add CPU masks of CPUs sharing the L2 cache so we can build such
> >>> L2 cluster scheduler domain.
> >>>
> >>> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> >>> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> >>
> >>
> >> Barry,
> >>
> >> Can you also add this chunk to the patch.
> >> Thanks.
> >
> > Sure, Tim, Thanks. I'll put that into patch 4/4 in v6.
> >
> 
> Barry,
> 
> This chunk will also need to be added to return cluster id for x86.
> Please add it in your next rev.

Yes. Thanks. I'll put this in either RFC v7 or Patch v1.

For the spreading path, things are much easier, though the packing path is
quite tricky. But it seems RFC v6 has come quite close to what we want
to achieve: packing related tasks by scanning the cluster for tasks within
the same NUMA node:
https://lore.kernel.org/lkml/20210420001844.9116-1-song.bao.hua@hisilicon.com/

If task couples are already in the same LLC (NUMA node), scanning clusters
will gather them further. If they are running in different NUMA nodes, the
original LLC scanning will move them to the same node; after that, cluster
scanning might put them closer to each other.

It seems this is kind of the two-level packing Dietmar has suggested.

So perhaps we won't have an RFC v7; I will probably send patch v1 afterwards.

> 
> Thanks.
> 
> Tim
> 
> ---
> 
> diff --git a/arch/x86/include/asm/topology.h
> b/arch/x86/include/asm/topology.h
> index 800fa48c9fcd..2548d824f103 100644
> --- a/arch/x86/include/asm/topology.h
> +++ b/arch/x86/include/asm/topology.h
> @@ -109,6 +109,7 @@ extern const struct cpumask *cpu_clustergroup_mask(int cpu);
>  #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
>  #define topology_logical_die_id(cpu)		(cpu_data(cpu).logical_die_id)
>  #define topology_die_id(cpu)			(cpu_data(cpu).cpu_die_id)
> +#define topology_cluster_id(cpu)		(per_cpu(cpu_l2c_id, cpu))
>  #define topology_core_id(cpu)			(cpu_data(cpu).cpu_core_id)
> 
>  extern unsigned int __max_die_per_package;

Thanks
Barry


^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die
  2021-03-19 10:01         ` Greg KH
  2021-04-20  3:30           ` Song Bao Hua (Barry Song)
@ 2021-04-21  4:06           ` Song Bao Hua (Barry Song)
  1 sibling, 0 replies; 17+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-04-21  4:06 UTC (permalink / raw)
  To: Greg KH, Jonathan Cameron, tiantao (H)
  Cc: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman, msys.mizuma, valentin.schneider, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Tuesday, April 20, 2021 3:24 PM
> To: 'Greg KH' <gregkh@linuxfoundation.org>; Jonathan Cameron
> <jonathan.cameron@huawei.com>
> Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com; tiantao (H)
> <tiantao6@hisilicon.com>
> Subject: RE: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> a die
> 
> 
> 
> > -----Original Message-----
> > From: Greg KH [mailto:gregkh@linuxfoundation.org]
> > Sent: Friday, March 19, 2021 11:02 PM
> > To: Jonathan Cameron <jonathan.cameron@huawei.com>
> > Cc: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> > tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> > tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org;
> peterz@infradead.org;
> > dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> > mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com;
> > juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> > aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> > linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> > xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> > guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu
> (Kenneth)
> > <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> > Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> > a die
> >
> > On Fri, Mar 19, 2021 at 09:36:16AM +0000, Jonathan Cameron wrote:
> > > On Fri, 19 Mar 2021 06:57:08 +0000
> > > "Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com> wrote:
> > >
> > > > > -----Original Message-----
> > > > > From: Greg KH [mailto:gregkh@linuxfoundation.org]
> > > > > Sent: Friday, March 19, 2021 7:35 PM
> > > > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > > > Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com;
> > will@kernel.org;
> > > > > rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> > > > > tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org;
> > peterz@infradead.org;
> > > > > dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> > > > > mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com;
> > Jonathan
> > > > > Cameron <jonathan.cameron@huawei.com>; juri.lelli@redhat.com;
> > > > > mark.rutland@arm.com; sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> > > > > linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> > > > > linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O)
> > <xuwei5@huawei.com>;
> > > > > Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org;
> > yangyicong
> > > > > <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> > > > > linuxarm@openeuler.org; hpa@zytor.com
> > > > > Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs
> within
> > > > > a die
> > > > >
> > > > > On Fri, Mar 19, 2021 at 05:16:15PM +1300, Barry Song wrote:
> > > > > > diff --git a/Documentation/admin-guide/cputopology.rst
> > > > > b/Documentation/admin-guide/cputopology.rst
> > > > > > index b90dafc..f9d3745 100644
> > > > > > --- a/Documentation/admin-guide/cputopology.rst
> > > > > > +++ b/Documentation/admin-guide/cputopology.rst
> > > > > > @@ -24,6 +24,12 @@ core_id:
> > > > > >  	identifier (rather than the kernel's).  The actual value is
> > > > > >  	architecture and platform dependent.
> > > > > >
> > > > > > +cluster_id:
> > > > > > +
> > > > > > +	the Cluster ID of cpuX.  Typically it is the hardware platform's
> > > > > > +	identifier (rather than the kernel's).  The actual value is
> > > > > > +	architecture and platform dependent.
> > > > > > +
> > > > > >  book_id:
> > > > > >
> > > > > >  	the book ID of cpuX. Typically it is the hardware platform's
> > > > > > @@ -56,6 +62,14 @@ package_cpus_list:
> > > > > >  	human-readable list of CPUs sharing the same physical_package_id.
> > > > > >  	(deprecated name: "core_siblings_list")
> > > > > >
> > > > > > +cluster_cpus:
> > > > > > +
> > > > > > +	internal kernel map of CPUs within the same cluster.
> > > > > > +
> > > > > > +cluster_cpus_list:
> > > > > > +
> > > > > > +	human-readable list of CPUs within the same cluster.
> > > > > > +
> > > > > >  die_cpus:
> > > > > >
> > > > > >  	internal kernel map of CPUs within the same die.
> > > > >
> > > > > Why are these sysfs files in this file, and not in a Documentation/ABI/
> > > > > file which can be correctly parsed and shown to userspace?
> > > >
> > > > Well, those ABIs have been there for a very long time. It looks like this:
> > > >
> > > > [root@ceph1 topology]# ls
> > > > core_id  core_siblings  core_siblings_list  physical_package_id
> > thread_siblings  thread_siblings_list
> > > > [root@ceph1 topology]# pwd
> > > > /sys/devices/system/cpu/cpu100/topology
> > > > [root@ceph1 topology]# cat core_siblings_list
> > > > 64-127
> > > > [root@ceph1 topology]#
> > > >
> > > > >
> > > > > Any chance you can fix that up here as well?
> > > >
> > > > Yes, we will send a separate patch to address this, which won't
> > > > be in this patchset. This patchset will be based on that one.
> > > >
> > > > >
> > > > > Also note that "list" is not something that goes in sysfs, sysfs is
> "one
> > > > > value per file", and a list is not "one value".  How do you prevent
> > > > > overflowing the buffer of the sysfs file if you have a "list"?
> > > > >
> > > >
> > > > At a glance, the list is using "-" rather than a real list
> > > > [root@ceph1 topology]# cat core_siblings_list
> > > > 64-127
> > > >
> > > > Anyway, I will take a look at whether there is any chance it could overflow.
> > >
> > > It could in theory be alternating CPUs as a comma-separated list,
> > > so it would get interesting around 500-1000 cpus (guessing).
> > >
> > > Hopefully no one has that crazy a cpu numbering scheme, but it's possible
> > > (note that cluster is fine for this, but I guess it might eventually
> > > happen for the core-siblings list, i.e. cpus within a package).
> > >
> > > Shouldn't crash or anything like that but might terminate early.
> >
> > We have a broken sysfs api already for listing LED numbers that has had
> > to be worked around in the past, please do not create a new one with
> > that same problem, we should learn from them :)
> 
> Another place where I see a cpu list is in the NUMA topology:
> /sys/devices/system/node/nodeX/cpulist.
> 
> But that code has a BUILD_BUG_ON() to guard the page buffer:
> 
> static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
> {
> 	ssize_t n;
> 	cpumask_var_t mask;
> 	struct node *node_dev = to_node(dev);
> 
> 	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
> 	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));
> 
> 	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
> 		return 0;
> 
> 	cpumask_and(mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
> 	n = cpumap_print_to_pagebuf(list, buf, mask);
> 	free_cpumask_var(mask);
> 
> 	return n;
> }
> 
> For the lists in cpu topology, I haven't seen such a guard, although I
> believe we need one. Or am I missing something?

I would prefer we send two patches as a series, "clarify and clean up
CPU and NUMA topology ABIs", with a cover letter: the patch below as
1/2, and 2/2 being the patch that moves the cpu topology ABI
documentation.
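
To illustrate why the hex mask output stays bounded while a worst-case
list does not, here is a quick userspace sketch (not part of the patch;
the 4KB page size and the "every second CPU online" pattern are only
assumptions for illustration):

#include <stdio.h>

/* 8 hex digits plus a ',' (or the trailing '\n') per 32-bit word */
static size_t mask_bytes(unsigned int nr_cpus)
{
	return (size_t)((nr_cpus + 31) / 32) * 9;
}

/* every second CPU online: "0,2,4,..." cannot be folded into ranges */
static size_t worst_list_bytes(unsigned int nr_cpus)
{
	size_t bytes = 0;
	unsigned int cpu;

	for (cpu = 0; cpu < nr_cpus; cpu += 2)
		bytes += snprintf(NULL, 0, "%u", cpu) + 1; /* digits + ',' */
	return bytes;
}

int main(void)
{
	const unsigned int nr_cpus[] = { 256, 4096, 8192 };
	const size_t page_size = 4096;
	unsigned int i;

	for (i = 0; i < sizeof(nr_cpus) / sizeof(nr_cpus[0]); i++)
		printf("NR_CPUS=%u: mask=%zu bytes, worst-case list=%zu bytes (PAGE_SIZE=%zu)\n",
		       nr_cpus[i], mask_bytes(nr_cpus[i]),
		       worst_list_bytes(nr_cpus[i]), page_size);
	return 0;
}

For NR_CPUS=8192 this prints a mask of 2304 bytes (within one page) but a
worst-case list of well over 4096 bytes, which is why the list files can
only promise a trimmed output.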

From b32c0c00a187d4fe4c49d54d30650b0cacb2c351 Mon Sep 17 00:00:00 2001
From: Tian Tao <tiantao6@hisilicon.com>
Date: Wed, 21 Apr 2021 14:36:11 +1200
Subject: [PATCH 1/2] CPU, NUMA topology ABIs: clarify the overflow issue of sysfs
 pagebuf

Both NUMA nodes and CPUs use a cpu bitmap like 3,ffffffff to expose the
hardware topology. When the cpu number is large, the sysfs page buffer
can overflow. This doesn't really happen nowadays, as the maximum NR_CPUS
is 8192 for X86_64 and 4096 for ARM64, and 8192 * 9 / 32 = 2304 is still
smaller than the 4KB page size.
So the existing BUILD_BUG_ON() in drivers/base/node.c is mostly guarding
against a future, Y2K-like problem that will appear when hardware gets
more and more CPUs.
On the other hand, it is more sensible to move the guard to common code
which can protect both cpu and numa:
/sys/devices/system/cpu/cpu0/topology/die_cpus etc.
/sys/devices/system/node/node0/cpumap etc.

Topology bitmap mask strings shouldn't be larger than PAGE_SIZE, as
lstopo and numactl depend on them. But the other ABIs exposing cpu lists
are not really used by common applications, so this patch also documents
that those lists may be trimmed, since there is no guarantee they will
always be smaller than PAGE_SIZE, especially when a list looks like:
0, 3, 5, 7, 9, 11, ... etc.

Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 Documentation/ABI/stable/sysfs-devices-node |  5 ++++-
 Documentation/admin-guide/cputopology.rst   | 15 +++++++++++++++
 drivers/base/node.c                         |  3 ---
 include/linux/cpumask.h                     |  6 ++++++
 4 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/Documentation/ABI/stable/sysfs-devices-node b/Documentation/ABI/stable/sysfs-devices-node
index 484fc04bcc25..9832a17b2b15 100644
--- a/Documentation/ABI/stable/sysfs-devices-node
+++ b/Documentation/ABI/stable/sysfs-devices-node
@@ -47,7 +47,10 @@ What:		/sys/devices/system/node/nodeX/cpulist
 Date:		October 2002
 Contact:	Linux Memory Management list <linux-mm@kvack.org>
 Description:
-		The CPUs associated to the node.
+		The CPUs associated to the node. The format is like
+		0-3,8-11,12-13. The maximum size is PAGE_SIZE, so the
+		tail of the string will be trimmed if it is larger
+		than PAGE_SIZE.
 
 What:		/sys/devices/system/node/nodeX/meminfo
 Date:		October 2002
diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafcc8237..8fac776a5ffa 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -44,6 +44,9 @@ core_cpus:
 core_cpus_list:
 
 	human-readable list of CPUs within the same core.
+	The format is like 0-3,8-11,12-13. The maximum size is PAGE_SIZE,
+	so the tail of the string will be trimmed if it is larger
+	than PAGE_SIZE.
 	(deprecated name: "thread_siblings_list");
 
 package_cpus:
@@ -54,6 +57,9 @@ package_cpus:
 package_cpus_list:
 
 	human-readable list of CPUs sharing the same physical_package_id.
+	The format is like 0-3,8-11,12-13. The maximum size is PAGE_SIZE,
+	so the tail of the string will be trimmed if it is larger
+	than PAGE_SIZE.
 	(deprecated name: "core_siblings_list")
 
 die_cpus:
@@ -63,6 +69,9 @@ die_cpus:
 die_cpus_list:
 
 	human-readable list of CPUs within the same die.
+	The format is like 0-3,8-11,12-13. The maximum size is PAGE_SIZE,
+	so the tail of the string will be trimmed if it is larger
+	than PAGE_SIZE.
 
 book_siblings:
 
@@ -73,6 +82,9 @@ book_siblings_list:
 
 	human-readable list of cpuX's hardware threads within the same
 	book_id.
+	The format is like 0-3,8-11,12-13. The maximum size is PAGE_SIZE,
+	so the tail of the string will be trimmed if it is larger
+	than PAGE_SIZE.
 
 drawer_siblings:
 
@@ -83,6 +95,9 @@ drawer_siblings_list:
 
 	human-readable list of cpuX's hardware threads within the same
 	drawer_id.
+	The format is like 0-3,8-11,12-13. The maximum size is PAGE_SIZE,
+	so the tail of the string will be trimmed if it is larger
+	than PAGE_SIZE.
 
 Architecture-neutral, drivers/base/topology.c, exports these attributes.
 However, the book and drawer related sysfs files will only be created if
diff --git a/drivers/base/node.c b/drivers/base/node.c
index f449dbb2c746..50324d06bcd5 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -33,9 +33,6 @@ static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
 	cpumask_var_t mask;
 	struct node *node_dev = to_node(dev);
 
-	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
-	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));
-
 	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
 		return 0;
 
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 383684e30f12..81f145e0c742 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -12,6 +12,7 @@
 #include <linux/bitmap.h>
 #include <linux/atomic.h>
 #include <linux/bug.h>
+#include <asm/page.h>
 
 /* Don't assign or return these: may not be this big! */
 typedef struct cpumask { DECLARE_BITMAP(bits, NR_CPUS); } cpumask_t;
@@ -924,6 +925,11 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu)
 static inline ssize_t
 cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask)
 {
+	/*
+	 * Each 32-bit word needs up to 9 bytes ("ffffffff,"), so too
+	 * many CPUs would overflow the sysfs page buffer.
+	 */
+	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));
 	return bitmap_print_to_pagebuf(list, buf, cpumask_bits(mask),
 				      nr_cpu_ids);
 }
-- 
2.25.1
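
For completeness, here is a hypothetical pair of sysfs show callbacks
(the function names are illustrative only, not from this series) showing
the kind of caller the relocated BUILD_BUG_ON() now covers, i.e. anything
going through cpumap_print_to_pagebuf():

#include <linux/cpumask.h>
#include <linux/device.h>

/* hex mask, e.g. "ff,ffffffff": bounded to 9 bytes per 32 CPUs */
static ssize_t example_cpumap_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	return cpumap_print_to_pagebuf(false, buf, cpu_online_mask);
}

/* list form, e.g. "0-3,8-11": may be trimmed if it would exceed PAGE_SIZE */
static ssize_t example_cpulist_show(struct device *dev,
				    struct device_attribute *attr, char *buf)
{
	return cpumap_print_to_pagebuf(true, buf, cpu_online_mask);
}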

Thanks
Barry


end of thread

Thread overview: 17+ messages
2021-03-19  4:16 [RFC PATCH v5 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
2021-03-19  4:16 ` [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die Barry Song
2021-03-19  6:35   ` Greg KH
2021-03-19  6:57     ` Song Bao Hua (Barry Song)
2021-03-19  9:36       ` Jonathan Cameron
2021-03-19 10:01         ` Greg KH
2021-04-20  3:30           ` Song Bao Hua (Barry Song)
2021-04-21  4:06           ` Song Bao Hua (Barry Song)
2021-03-19  4:16 ` [RFC PATCH v5 2/4] scheduler: add scheduler level for clusters Barry Song
2021-03-19  4:16 ` [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc Barry Song
2021-03-19 21:39   ` Song Bao Hua (Barry Song)
2021-03-19  4:16 ` [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86 Barry Song
2021-03-23 22:50   ` Tim Chen
2021-03-23 23:21     ` Song Bao Hua (Barry Song)
2021-04-20 18:31       ` Tim Chen
2021-04-20 22:31         ` Song Bao Hua (Barry Song)
2021-03-31 10:07     ` Song Bao Hua (Barry Song)
