* [RFC PATCH v2 0/2] scheduler: expose the topology of clusters and add cluster scheduler
@ 2020-12-01  2:59 ` Barry Song
  0 siblings, 0 replies; 48+ messages in thread
From: Barry Song @ 2020-12-01  2:59 UTC (permalink / raw)
  To: valentin.schneider, catalin.marinas, will, rjw, lenb, gregkh,
	Jonathan.Cameron, mingo, peterz, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi
  Cc: linuxarm, xuwei5, prime.zeng, Barry Song

The ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and
each cluster has 4 CPUs. All clusters share the L3 cache data, but each
cluster has its own local L3 tag. In addition, the CPUs within one cluster
share some internal system bus. This means the cache is much more affine
within one cluster than across clusters.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |
|                                   |   |    L3     |          |         |
|  +------+    +------+             +---+    tag    |          |         |
|  |      |    |      |             |   |           |          |         |
|  +------+    +------+             |   +-----------+          |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                         ++         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |  +-----------+           |         |
|  +------+    +------+             |  |           |           |         |
|                                   |  |    L3     |           |         |
|  +------+    +------+             +--+    tag    |           |         |
|  |      |    |      |             |  |           |           |         |
|  +------+    +------+             |  +-----------+           |         |
|                                   |                          +---------+
+-----------------------------------+

The illustration above is still a simplification of what is actually
going on, but it is a more accurate model than the flat topology the
kernel currently assumes.

Through the following small program (built with gcc -pthread -O0 so the
loops are not optimized away), you can see the performance impact of
running it within one cluster versus across two clusters:

#include <pthread.h>

struct foo {
        int x;
        int y;
} f;

void *thread1_fun(void *param)
{
        int s = 0;

        for (int i = 0; i < 0xfffffff; i++)
                s += f.x;
        return NULL;
}

void *thread2_fun(void *param)
{
        for (int i = 0; i < 0xfffffff; i++)
                f.y++;
        return NULL;
}

int main(int argc, char **argv)
{
        pthread_t tid1, tid2;

        pthread_create(&tid1, NULL, thread1_fun, NULL);
        pthread_create(&tid2, NULL, thread2_fun, NULL);
        pthread_join(tid1, NULL);
        pthread_join(tid2, NULL);
        return 0;
}

When running this program within one cluster, it takes:
$ time taskset -c 0,1 ./a.out 
real	0m0.832s
user	0m1.649s
sys	0m0.004s

By contrast, it takes considerably longer when the same program runs
across two clusters:
$ time taskset -c 0,4 ./a.out 
real	0m1.133s
user	0m1.960s
sys	0m0.000s

0.832/1.133 = 73%: the cross-cluster run takes roughly 36% longer, a
significant difference.

This implies that we should let the Linux scheduler use the cluster
topology to make better load-balancing and WAKE_AFFINE decisions.
Unfortunately, the current kernel running on Kunpeng 920 treats all of
cpu0-23 equally.

This patchset first exposes the topology, then adds a new sched_domain
level between SMT and MC. The new sched_domain influences the load
balancing and wake_affine behaviour of the scheduler.

The code is still pretty much a proof of concept and needs a lot of
benchmarking and tuning. However, rough hackbench results show:

While running hackbench on one NUMA node (cpu0-cpu23), we may achieve a 5%+
performance improvement with the new sched_domain.
While running hackbench on two NUMA nodes (cpu0-cpu47), we may achieve a 49%+
performance improvement with the new sched_domain.

Although I believe there is still a lot to do, sending an RFC to get
feedback from community experts might be helpful for the next step.

Barry Song (1):
  scheduler: add scheduler level for clusters

Jonathan Cameron (1):
  topology: Represent clusters of CPUs within a die.

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
 arch/arm64/Kconfig                        |  7 ++++
 arch/arm64/kernel/smp.c                   | 17 +++++++++
 arch/arm64/kernel/topology.c              |  2 ++
 drivers/acpi/pptt.c                       | 60 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 ++++++++
 drivers/base/topology.c                   | 10 ++++++
 include/linux/acpi.h                      |  5 +++ 
 include/linux/arch_topology.h             |  5 +++ 
 include/linux/topology.h                  | 13 +++++++
 kernel/sched/fair.c                       | 35 ++++++++++++++++++
 11 files changed, 190 insertions(+), 4 deletions(-)

-- 
2.7.4




* [RFC PATCH v2 1/2] topology: Represent clusters of CPUs within a die.
  2020-12-01  2:59 ` Barry Song
@ 2020-12-01  2:59   ` Barry Song
  -1 siblings, 0 replies; 48+ messages in thread
From: Barry Song @ 2020-12-01  2:59 UTC (permalink / raw)
  To: valentin.schneider, catalin.marinas, will, rjw, lenb, gregkh,
	Jonathan.Cameron, mingo, peterz, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi
  Cc: linuxarm, xuwei5, prime.zeng, Barry Song

From: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Node Structure representing
a higher level of topology.

For example, Kunpeng 920 has 6 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share the L3 cache data, but each cluster
has its own local L3 tag. In addition, the CPUs within one cluster share
some internal system bus.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |
|                                   |   |    L3     |          |         |
|  +------+    +------+             +---+    tag    |          |         |
|  |      |    |      |             |   |           |          |         |
|  +------+    +------+             |   +-----------+          |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                         ++         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |  +-----------+           |         |
|  +------+    +------+             |  |           |           |         |
|                                   |  |    L3     |           |         |
|  +------+    +------+             +--+    tag    |           |         |
|  |      |    |      |             |  |           |           |         |
|  +------+    +------+             |  +-----------+           |         |
|                                   |                          +---------+
+-----------------------------------+

That means the cost of transferring ownership of a cache line between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decisions on thread
migration.

This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
A PoC of hwloc support is at [2].

Note this patch only handles the ACPI case.

Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).

Currently the ID provided is the offset of the Processor Hierarchy
Node Structure within the PPTT.  Whilst this is unique, it is not
terribly elegant, so alternative suggestions are welcome.

Note that arm64 / ACPI does not provide any means of identifying
a die level in the topology, but that may be unrelated to the cluster
level.

[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
    structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 -v2: no code change, just refine the commit log
 * ABI documentation to be handled separately, as a precursor patch is
   needed to add the existing topology ABI
 * Discussion of exact naming postponed to a future patch as no
   conclusion has been reached yet

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
 arch/arm64/kernel/topology.c              |  2 ++
 drivers/acpi/pptt.c                       | 60 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 ++++++++
 drivers/base/topology.c                   | 10 ++++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/topology.h                  |  6 ++++
 8 files changed, 124 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
 	identifier (rather than the kernel's).  The actual value is
 	architecture and platform dependent.
 
+cluster_id:
+
+	the Cluster ID of cpuX.  Typically it is the hardware platform's
+	identifier (rather than the kernel's).  The actual value is
+	architecture and platform dependent.
+
 book_id:
 
 	the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
 	human-readable list of CPUs sharing the same physical_package_id.
 	(deprecated name: "core_siblings_list")
 
+cluster_cpus:
+
+	internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+	human-readable list of CPUs within the same cluster.
+
 die_cpus:
 
 	internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
 
 	#define topology_physical_package_id(cpu)
 	#define topology_die_id(cpu)
+	#define topology_cluster_id(cpu)
 	#define topology_core_id(cpu)
 	#define topology_book_id(cpu)
 	#define topology_drawer_id(cpu)
 	#define topology_sibling_cpumask(cpu)
 	#define topology_core_cpumask(cpu)
+	#define topology_cluster_cpumask(cpu)
 	#define topology_die_cpumask(cpu)
 	#define topology_book_cpumask(cpu)
 	#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
 
 1) topology_physical_package_id: -1
 2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
 
 For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
 default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 0801a0f..4c40240 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -101,6 +101,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id  = -1;
 			cpu_topology[cpu].core_id    = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
 
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..8646a93 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,66 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 }
 
 /**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+	cpu_node = acpi_find_processor_node(table, cpu);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
+/**
  * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
  * @cpu: Kernel logical CPU number
  *
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 75f72d6..e2ca8f3 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -497,6 +497,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -514,6 +519,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
 
@@ -532,6 +542,9 @@ static void clear_cpu_topology(int cpu)
 	cpumask_clear(&cpu_topo->llc_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
 
+	cpumask_clear(&cpu_topo->cluster_sibling);
+	cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
 	cpumask_clear(&cpu_topo->core_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
 	cpumask_clear(&cpu_topo->thread_sibling);
@@ -562,6 +575,7 @@ void remove_cpu_topology(unsigned int cpu)
 		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
 	for_each_cpu(sibling, topology_llc_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
 
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index ad8d33c..f72ac9a 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@ static DEVICE_ATTR_RO(physical_package_id);
 define_id_show_func(die_id);
 static DEVICE_ATTR_RO(die_id);
 
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
 define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);
 
@@ -61,6 +64,10 @@ define_siblings_show_func(core_siblings, core_cpumask);
 static DEVICE_ATTR_RO(core_siblings);
 static DEVICE_ATTR_RO(core_siblings_list);
 
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
 define_siblings_show_func(die_cpus, die_cpumask);
 static DEVICE_ATTR_RO(die_cpus);
 static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@ static DEVICE_ATTR_RO(drawer_siblings_list);
 static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
+	&dev_attr_cluster_id.attr,
 	&dev_attr_core_id.attr,
 	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@ static struct attribute *default_attrs[] = {
 	&dev_attr_core_cpus_list.attr,
 	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
+	&dev_attr_cluster_cpus.attr,
+	&dev_attr_cluster_cpus_list.attr,
 	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
 	&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 64ae25c..2a1fbbd 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1332,6 +1332,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
 #ifdef CONFIG_ACPI_PPTT
 int acpi_pptt_cpu_is_thread(unsigned int cpu);
 int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
 int find_acpi_cpu_topology_package(unsigned int cpu);
 int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
 int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1344,6 +1345,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
 {
 	return -EINVAL;
 }
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	return -EINVAL;
+}
 static inline int find_acpi_cpu_topology_package(unsigned int cpu)
 {
 	return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 69b1dab..dba06864 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -45,10 +45,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
 struct cpu_topology {
 	int thread_id;
 	int core_id;
+	int cluster_id;
 	int package_id;
 	int llc_id;
 	cpumask_t thread_sibling;
 	cpumask_t core_sibling;
+	cpumask_t cluster_sibling;
 	cpumask_t llc_sibling;
 };
 
@@ -56,13 +58,16 @@ struct cpu_topology {
 extern struct cpu_topology cpu_topology[NR_CPUS];
 
 #define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu)	(cpu_topology[cpu].cluster_id)
 #define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
 #define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
 #define topology_sibling_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)
 #define topology_llc_cpumask(cpu)	(&cpu_topology[cpu].llc_sibling)
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
 const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
 void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 608fa4a..5f66648 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -185,6 +185,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_id
 #define topology_die_id(cpu)			((void)(cpu), -1)
 #endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu)		((void)(cpu), -1)
+#endif
 #ifndef topology_core_id
 #define topology_core_id(cpu)			((void)(cpu), 0)
 #endif
@@ -194,6 +197,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_cpumask
 #define topology_core_cpumask(cpu)		cpumask_of(cpu)
 #endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu)		cpumask_of(cpu)
+#endif
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)		cpumask_of(cpu)
 #endif
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [RFC PATCH v2 1/2] topology: Represent clusters of CPUs within a die.
@ 2020-12-01  2:59   ` Barry Song
  0 siblings, 0 replies; 48+ messages in thread
From: Barry Song @ 2020-12-01  2:59 UTC (permalink / raw)
  To: valentin.schneider, catalin.marinas, will, rjw, lenb, gregkh,
	Jonathan.Cameron, mingo, peterz, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi
  Cc: Barry Song, prime.zeng, linuxarm, xuwei5

From: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Nodes Structure representing
a higher level of topology.

For example, Kunpeng 920 has 6 clusters in each NUMA node, and each
cluster has 4 cpus. All clusters share L3 cache data, but each cluster
has a local L3 tag. In addition, the CPUs within a cluster share some
internal system bus.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |
|                                   |   |    L3     |          |         |
|  +------+    +------+             +---+    tag    |          |         |
|  |      |    |      |             |   |           |          |         |
|  +------+    +------+             |   +-----------+          |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                         ++         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |  +-----------+           |         |
|  +------+    +------+             |  |           |           |         |
|                                   |  |    L3     |           |         |
|  +------+    +------+             +--+    tag    |           |         |
|  |      |    |      |             |  |           |           |         |
|  +------+    +------+             |  +-----------+           |         |
|                                   |                          +---------+
+-----------------------------------+

That means the cost to transfer ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decisions on thread
migration.

This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
PoC of HWLOC support at [2].

Note this patch only handles the ACPI case.

Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).

Currently the ID provided is the offset of the Processor
Hierarchy Nodes Structure within PPTT.  Whilst this is unique
it is not terribly elegant so alternative suggestions welcome.

Note that arm64 / ACPI does not provide any means of identifying
a die level in the topology, but that may be unrelated to the cluster
level.

[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
    structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 -v2: no code change, just refine the commit log
 * ABI documentation to be handled separately, as a precursor patch is
   needed to add the existing topology ABI
 * Discussion of exact naming postponed for a future patch as no
   conclusion has been reached yet

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++---
 arch/arm64/kernel/topology.c              |  2 ++
 drivers/acpi/pptt.c                       | 60 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 ++++++++
 drivers/base/topology.c                   | 10 ++++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/topology.h                  |  6 ++++
 8 files changed, 124 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
 	identifier (rather than the kernel's).  The actual value is
 	architecture and platform dependent.
 
+cluster_id:
+
+	the cluster ID of cpuX.  Typically it is the hardware platform's
+	identifier (rather than the kernel's).  The actual value is
+	architecture and platform dependent.
+
 book_id:
 
 	the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
 	human-readable list of CPUs sharing the same physical_package_id.
 	(deprecated name: "core_siblings_list")
 
+cluster_cpus:
+
+	internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+	human-readable list of CPUs within the same cluster.
+
 die_cpus:
 
 	internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
 
 	#define topology_physical_package_id(cpu)
 	#define topology_die_id(cpu)
+	#define topology_cluster_id(cpu)
 	#define topology_core_id(cpu)
 	#define topology_book_id(cpu)
 	#define topology_drawer_id(cpu)
 	#define topology_sibling_cpumask(cpu)
 	#define topology_core_cpumask(cpu)
+	#define topology_cluster_cpumask(cpu)
 	#define topology_die_cpumask(cpu)
 	#define topology_book_cpumask(cpu)
 	#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
 
 1) topology_physical_package_id: -1
 2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
 
 For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
 default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 0801a0f..4c40240 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -101,6 +101,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id  = -1;
 			cpu_topology[cpu].core_id    = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
 
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..8646a93 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,66 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 }
 
 /**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+	cpu_node = acpi_find_processor_node(table, cpu);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
+/**
  * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
  * @cpu: Kernel logical CPU number
  *
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 75f72d6..e2ca8f3 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -497,6 +497,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -514,6 +519,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
 
@@ -532,6 +542,9 @@ static void clear_cpu_topology(int cpu)
 	cpumask_clear(&cpu_topo->llc_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
 
+	cpumask_clear(&cpu_topo->cluster_sibling);
+	cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
 	cpumask_clear(&cpu_topo->core_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
 	cpumask_clear(&cpu_topo->thread_sibling);
@@ -562,6 +575,7 @@ void remove_cpu_topology(unsigned int cpu)
 		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
 	for_each_cpu(sibling, topology_llc_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
 
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index ad8d33c..f72ac9a 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@ static DEVICE_ATTR_RO(physical_package_id);
 define_id_show_func(die_id);
 static DEVICE_ATTR_RO(die_id);
 
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
 define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);
 
@@ -61,6 +64,10 @@ define_siblings_show_func(core_siblings, core_cpumask);
 static DEVICE_ATTR_RO(core_siblings);
 static DEVICE_ATTR_RO(core_siblings_list);
 
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
 define_siblings_show_func(die_cpus, die_cpumask);
 static DEVICE_ATTR_RO(die_cpus);
 static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@ static DEVICE_ATTR_RO(drawer_siblings_list);
 static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
+	&dev_attr_cluster_id.attr,
 	&dev_attr_core_id.attr,
 	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@ static struct attribute *default_attrs[] = {
 	&dev_attr_core_cpus_list.attr,
 	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
+	&dev_attr_cluster_cpus.attr,
+	&dev_attr_cluster_cpus_list.attr,
 	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
 	&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 64ae25c..2a1fbbd 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1332,6 +1332,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
 #ifdef CONFIG_ACPI_PPTT
 int acpi_pptt_cpu_is_thread(unsigned int cpu);
 int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
 int find_acpi_cpu_topology_package(unsigned int cpu);
 int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
 int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1344,6 +1345,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
 {
 	return -EINVAL;
 }
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	return -EINVAL;
+}
 static inline int find_acpi_cpu_topology_package(unsigned int cpu)
 {
 	return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 69b1dab..dba06864 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -45,10 +45,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
 struct cpu_topology {
 	int thread_id;
 	int core_id;
+	int cluster_id;
 	int package_id;
 	int llc_id;
 	cpumask_t thread_sibling;
 	cpumask_t core_sibling;
+	cpumask_t cluster_sibling;
 	cpumask_t llc_sibling;
 };
 
@@ -56,13 +58,16 @@ struct cpu_topology {
 extern struct cpu_topology cpu_topology[NR_CPUS];
 
 #define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu)	(cpu_topology[cpu].cluster_id)
 #define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
 #define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
 #define topology_sibling_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)
 #define topology_llc_cpumask(cpu)	(&cpu_topology[cpu].llc_sibling)
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
 const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
 void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 608fa4a..5f66648 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -185,6 +185,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_id
 #define topology_die_id(cpu)			((void)(cpu), -1)
 #endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu)		((void)(cpu), -1)
+#endif
 #ifndef topology_core_id
 #define topology_core_id(cpu)			((void)(cpu), 0)
 #endif
@@ -194,6 +197,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_cpumask
 #define topology_core_cpumask(cpu)		cpumask_of(cpu)
 #endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu)		cpumask_of(cpu)
+#endif
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)		cpumask_of(cpu)
 #endif
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-01  2:59 ` Barry Song
@ 2020-12-01  2:59   ` Barry Song
  -1 siblings, 0 replies; 48+ messages in thread
From: Barry Song @ 2020-12-01  2:59 UTC (permalink / raw)
  To: valentin.schneider, catalin.marinas, will, rjw, lenb, gregkh,
	Jonathan.Cameron, mingo, peterz, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi
  Cc: linuxarm, xuwei5, prime.zeng, Barry Song

ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
cluster has 4 cpus. All clusters share L3 cache data, but each cluster
has a local L3 tag. In addition, the CPUs within a cluster share some
internal system bus. This means the cache coherence overhead inside one
cluster is much lower than the overhead across clusters.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |
|                                   |   |    L3     |          |         |
|  +------+    +------+             +---+    tag    |          |         |
|  |      |    |      |             |   |           |          |         |
|  +------+    +------+             |   +-----------+          |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                         ++         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |  +-----------+           |         |
|  +------+    +------+             |  |           |           |         |
|                                   |  |    L3     |           |         |
|  +------+    +------+             +--+    tag    |           |         |
|  |      |    |      |             |  |           |           |         |
|  +------+    +------+             |  +-----------+           |         |
|                                   |                          +---------+
+-----------------------------------+

This patch adds a sched_domain for clusters. On Kunpeng 920, without
this patch, domain0 of cpu0 would be MC, spanning cpu0-cpu23 with
min_interval=24, max_interval=48; with this patch, MC becomes domain1 and
a new domain0 "CL" covering cpu0-cpu3 is added with min_interval=4 and
max_interval=8.
This will affect load balancing. For example, without this patch, when
cpu0 becomes idle it will pull a task from cpu1-cpu15. With this patch,
cpu0 will try to pull a task from cpu1-cpu3 first, which has much lower
task-migration overhead.

In addition, when doing WAKE_AFFINE, this patch will try to find an idle
CPU in the target's cluster before scanning the LLC domain, so it
proactively picks a CPU that has better cache affinity with the target
first.

Not much benchmarking has been done yet, but here are rough hackbench
results. We run the command below with different -g parameters to
increase system load, varying g from 1 to 4; for each value we run the
benchmark ten times and record the data to get the average time:

First, we run hackbench in only one NUMA node (cpu0-cpu23):
$ numactl -N 0 hackbench -p -T -l 100000 -g $1

g=1 (seen cpu utilization around 50% for each core)
Running in threaded mode with 1 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
performance improvement w/ patch: -1.01%

g=2 (seen cpu utilization around 70% for each core)
Running in threaded mode with 2 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162 9.955=10.1006
w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
performance improvement w/ patch: 4.08%

g=3 (seen cpu utilization around 90% for each core)
Running in threaded mode with 3 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674 15.721=15.7727
w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946 14.895=14.9005
performance improvement w/ patch: 5.53%

g=4
Running in threaded mode with 4 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
performance improvement w/ patch: 1.22%

After that, we run the same hackbench in both NUMA nodes (cpu0-cpu47):
g=1
w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
performance improvement by patch: -2.2%

g=2
w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
performance improvement by patch: 9.3%

g=3
w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235 11.849=11.9557
w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
performance improvement by patch: 20.2%

g=4
w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859 18.515=17.7647
w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467 10.401=10.4927
performance improvement by patch: 40.9%

g=5
w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073 25.819=27.4382
w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954 13.907=13.9735
performance improvement by patch: 49.1%

It seems the patch can bring a huge improvement on hackbench, especially
when we bind hackbench to all of cpu0-cpu47, compared to the 5.53% gain
while running on a single NUMA node (cpu0-cpu23).

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 arch/arm64/Kconfig       |  7 +++++++
 arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
 include/linux/topology.h |  7 +++++++
 kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
 4 files changed, 66 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d23283..3583c26 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -938,6 +938,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (CPUs sharing
+	  an internal bus or LLC cache tags). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 355ee9e..5c8f026 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -32,6 +32,7 @@
 #include <linux/irq_work.h>
 #include <linux/kexec.h>
 #include <linux/kvm_host.h>
+#include <linux/sched/topology.h>
 
 #include <asm/alternative.h>
 #include <asm/atomic.h>
@@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
 	}
 }
 
+static struct sched_domain_topology_level arm64_topology[] = {
+#ifdef CONFIG_SCHED_SMT
+	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
+#endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
+#endif
+#ifdef CONFIG_SCHED_MC
+	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
+#endif
+	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
+	{ NULL, },
+};
+
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
 	const struct cpu_operations *ops;
@@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 
 	init_cpu_topology();
 
+	set_sched_topology(arm64_topology);
+
 	this_cpu = smp_processor_id();
 	store_cpu_topology(this_cpu);
 	numa_store_cpu_info(this_cpu);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 5f66648..2c823c0 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a05..ae8ec910 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct *p, int target)
 
 #endif /* CONFIG_SCHED_SMT */
 
+#ifdef CONFIG_SCHED_CLUSTER
+/*
+ * Scan the local CLUSTER mask for idle CPUs.
+ */
+static int select_idle_cluster(struct task_struct *p, int target)
+{
+	int cpu;
+
+	/* right now, no hardware has both cluster and SMT */
+	if (sched_smt_active())
+		return -1;
+
+	for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
+		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+			continue;
+		if (available_idle_cpu(cpu))
+			return cpu;
+	}
+
+	return -1;
+}
+
+#else /* CONFIG_SCHED_CLUSTER */
+
+static inline int select_idle_cluster(struct task_struct *p, int target)
+{
+	return -1;
+}
+
+#endif /* CONFIG_SCHED_CLUSTER */
+
 /*
  * Scan the LLC domain for idle CPUs; this is dynamically regulated by
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
@@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
+	i = select_idle_cluster(p, target);
+	if ((unsigned)i < nr_cpumask_bits)
+		return i;
+
 	i = select_idle_cpu(p, sd, target);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
@ 2020-12-01  2:59   ` Barry Song
  0 siblings, 0 replies; 48+ messages in thread
From: Barry Song @ 2020-12-01  2:59 UTC (permalink / raw)
  To: valentin.schneider, catalin.marinas, will, rjw, lenb, gregkh,
	Jonathan.Cameron, mingo, peterz, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi
  Cc: Barry Song, prime.zeng, linuxarm, xuwei5

ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share L3 cache data, but each cluster
has a local L3 tag. In addition, the CPUs within a cluster share some
internal system bus. This means cache coherence overhead inside one cluster
is much lower than the overhead across clusters.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |
|                                   |   |    L3     |          |         |
|  +------+    +------+             +---+    tag    |          |         |
|  |      |    |      |             |   |           |          |         |
|  +------+    +------+             |   +-----------+          |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                         ++         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |  +-----------+           |         |
|  +------+    +------+             |  |           |           |         |
|                                   |  |    L3     |           |         |
|  +------+    +------+             +--+    tag    |           |         |
|  |      |    |      |             |  |           |           |         |
|  +------+    +------+             |  +-----------+           |         |
|                                   |                          +---------+
+-----------------------------------+

This patch adds a sched_domain for clusters. On Kunpeng 920, without
this patch, domain0 of cpu0 is MC, spanning cpu0-cpu23 with
min_interval=24, max_interval=48; with this patch, MC becomes domain1,
and a new domain0 "CL" covering cpu0-cpu3 is added with min_interval=4
and max_interval=8.
This affects load balancing. For example, without this patch, when cpu0
becomes idle, it will pull a task from any of cpu1-cpu23. With this patch,
cpu0 tries to pull a task from cpu1-cpu3 first, which incurs much less
task-migration overhead.

In addition, during WAKE_AFFINE, this patch tries to find an idle CPU
in the target cluster before scanning the whole LLC domain, so it
proactively prefers a CPU with better cache affinity to the target CPU.

Not much benchmarking has been done yet, but here is a rough hackbench
result. We ran the command below with different -g parameters to increase
the system load, changing g from 1 to 4; for each value of g we ran the
benchmark ten times and averaged the recorded times:

First, we ran hackbench in only one NUMA node (cpu0-cpu23):
$ numactl -N 0 hackbench -p -T -l 100000 -g $1

g=1 (observed CPU utilization around 50% on each core)
Running in threaded mode with 1 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
performance improvement w/ patch: -1.11%

g=2 (observed CPU utilization around 70% on each core)
Running in threaded mode with 2 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162 9.955=10.1006
w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
performance improvement w/ patch: 4.08%

g=3 (observed CPU utilization around 90% on each core)
Running in threaded mode with 3 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674 15.721=15.7727
w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946 14.895=14.9005
performance improvement w/ patch: 5.53%

g=4
Running in threaded mode with 4 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
performance improvement w/ patch: 1.22%

After that, we ran the same hackbench across both NUMA nodes (cpu0-cpu47):
g=1
w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
performance improvement by patch: -2.2%

g=2
w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
performance improvement by patch: 9.3%

g=3
w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235 11.849=11.9557
w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
performance improvement by patch: 20.2%

g=4
w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859 18.515=17.7647
w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467 10.401=10.4927
performance improvement by patch: 40.9%

g=5
w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073 25.819=27.4382
w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954 13.907=13.9735
performance improvement by patch: 49.1%

The patch brings a large improvement on hackbench, especially when
hackbench is bound to both NUMA nodes (cpu0-cpu47): up to 49.1%,
compared to at most 5.53% when running on a single NUMA node (cpu0-cpu23).
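The averaged figures above can be re-derived from the raw runs. A minimal
Python sketch (hypothetical helper, not part of the patch; it uses the
two-node g=4 data quoted above):

```python
# Recompute the average runtimes and the improvement percentage from the
# raw hackbench samples quoted above (two NUMA nodes, g=4).
wo = [17.347, 17.299, 17.655, 18.775, 16.707,
      18.879, 17.255, 18.356, 16.859, 18.515]   # without the patch
w  = [10.416, 10.496, 10.601, 10.318, 10.459,
      10.617, 10.510, 10.642, 10.467, 10.401]   # with the patch

def avg(samples):
    return sum(samples) / len(samples)

# Lower runtime is better, so improvement = relative runtime reduction.
improvement = (avg(wo) - avg(w)) / avg(wo) * 100

print(f"avg w/o={avg(wo):.4f} avg w/={avg(w):.4f} improvement={improvement:.1f}%")
# -> avg w/o=17.7647 avg w/=10.4927 improvement=40.9%
```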

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 arch/arm64/Kconfig       |  7 +++++++
 arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
 include/linux/topology.h |  7 +++++++
 kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
 4 files changed, 66 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d23283..3583c26 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -938,6 +938,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (CPUs sharing
+	  an internal bus or LLC cache tags). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 355ee9e..5c8f026 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -32,6 +32,7 @@
 #include <linux/irq_work.h>
 #include <linux/kexec.h>
 #include <linux/kvm_host.h>
+#include <linux/sched/topology.h>
 
 #include <asm/alternative.h>
 #include <asm/atomic.h>
@@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
 	}
 }
 
+static struct sched_domain_topology_level arm64_topology[] = {
+#ifdef CONFIG_SCHED_SMT
+	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
+#endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
+#endif
+#ifdef CONFIG_SCHED_MC
+	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
+#endif
+	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
+	{ NULL, },
+};
+
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
 	const struct cpu_operations *ops;
@@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 
 	init_cpu_topology();
 
+	set_sched_topology(arm64_topology);
+
 	this_cpu = smp_processor_id();
 	store_cpu_topology(this_cpu);
 	numa_store_cpu_info(this_cpu);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 5f66648..2c823c0 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a05..ae8ec910 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct *p, int target)
 
 #endif /* CONFIG_SCHED_SMT */
 
+#ifdef CONFIG_SCHED_CLUSTER
+/*
+ * Scan the local CLUSTER mask for idle CPUs.
+ */
+static int select_idle_cluster(struct task_struct *p, int target)
+{
+	int cpu;
+
+	/* right now, no hardware has both cluster and SMT */
+	if (sched_smt_active())
+		return -1;
+
+	for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
+		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+			continue;
+		if (available_idle_cpu(cpu))
+			return cpu;
+	}
+
+	return -1;
+}
+
+#else /* CONFIG_SCHED_CLUSTER */
+
+static inline int select_idle_cluster(struct task_struct *p, int target)
+{
+	return -1;
+}
+
+#endif /* CONFIG_SCHED_CLUSTER */
+
 /*
  * Scan the LLC domain for idle CPUs; this is dynamically regulated by
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
@@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
+	i = select_idle_cluster(p, target);
+	if ((unsigned)i < nr_cpumask_bits)
+		return i;
+
 	i = select_idle_cpu(p, sd, target);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
-- 
2.7.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 0/2] scheduler: expose the topology of clusters and add cluster scheduler
  2020-12-01  2:59 ` Barry Song
@ 2020-12-01 10:46   ` Dietmar Eggemann
  -1 siblings, 0 replies; 48+ messages in thread
From: Dietmar Eggemann @ 2020-12-01 10:46 UTC (permalink / raw)
  To: Barry Song, valentin.schneider, catalin.marinas, will, rjw, lenb,
	gregkh, Jonathan.Cameron, mingo, peterz, juri.lelli,
	vincent.guittot, rostedt, bsegall, mgorman, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi
  Cc: linuxarm, xuwei5, prime.zeng

On 01/12/2020 03:59, Barry Song wrote:

[...]

> Although I believe there is still a lot to do, sending a RFC to get feedbacks
> of community experts might be helpful for the next step.
> 
> Barry Song (1):
>   scheduler: add scheduler level for clusters
> 
> Jonathan Cameron (1):
>   topology: Represent clusters of CPUs within a die.

Just to make sure. Since this is v2, the v1 is
https://lkml.kernel.org/r//20201016152702.1513592-1-Jonathan.Cameron@huawei.com

Might not be obvious to everyone since sender and subject have changed.

^ permalink raw reply	[flat|nested] 48+ messages in thread


* Re: [RFC PATCH v2 1/2] topology: Represent clusters of CPUs within a die.
  2020-12-01  2:59   ` Barry Song
@ 2020-12-01 16:03     ` Valentin Schneider
  -1 siblings, 0 replies; 48+ messages in thread
From: Valentin Schneider @ 2020-12-01 16:03 UTC (permalink / raw)
  To: Barry Song
  Cc: catalin.marinas, will, rjw, lenb, gregkh, Jonathan.Cameron,
	mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, mark.rutland, linux-arm-kernel,
	linux-kernel, linux-acpi, linuxarm, xuwei5, prime.zeng


On 01/12/20 02:59, Barry Song wrote:
> That means the cost to transfer ownership of a cacheline between CPUs
> within a cluster is lower than between CPUs in different clusters on
> the same die. Hence, it can make sense to tell the scheduler to use
> the cache affinity of the cluster to make better decision on thread
> migration.
>
> This patch simply exposes this information to userspace libraries
> like hwloc by providing cluster_cpus and related sysfs attributes.
> PoC of HWLOC support at [2].
>
> Note this patch only handles the ACPI case.
>

AIUI this requires PPTT to describe your system like so:

 {Processor nodes}             {Caches}

       [Node0] ----------------> [L3]
          ^
          |
      [Cluster0] ---------------> []
          ^
          |
        [CPU0] ------------> [L1] -> [L2]

which is a bit odd, because there is that middling level without any
private resources. I suppose right now this is the only way to describe
this kind of cache topology via PPTT, but is that widespread?


Now, looking at the Ampere eMAG's PPTT, this has a "similar" shape. The
topology is private L1, L2 shared by pairs of CPUs, shared L3 [1].

If I parse the PPTT thing right this is encoded as:

 {Processor nodes}            {Caches}

      [Cluster0] -------------> ([L3] not present in my PPTT for some reason)
          ^
          |
      [  Pair0  ] ------------> [L2]
        ^     ^
        |     |
        |  [CPU1] ------------> [L1]
      [CPU0] -----------------> [L1] 

So you could spin the same story there, where first scanning the pair and
then the cluster could help.

[1]: https://en.wikichip.org/wiki/ampere_computing/emag/8180

> Special consideration is needed for SMT processors, where it is
> necessary to move 2 levels up the hierarchy from the leaf nodes
> (thus skipping the processor core level).
>
> Currently the ID provided is the offset of the Processor
> Hierarchy Nodes Structure within PPTT.  Whilst this is unique
> it is not terribly elegant so alternative suggestions welcome.
>

Skimming through the spec, this sounds like something the ID structure
(Type 2) could be used for. However in v1 Jonathan and Sudeep talked about
UID's / DSDT, any news on that?

> Note that arm64 / ACPI does not provide any means of identifying
> a die level in the topology, but that may be unrelated to the cluster
> level.
>
> [1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
>     structure (Type 0)
> [2] https://github.com/hisilicon/hwloc/tree/linux-cluster
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>

^ permalink raw reply	[flat|nested] 48+ messages in thread


* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-01  2:59   ` Barry Song
@ 2020-12-01 16:04     ` Valentin Schneider
  -1 siblings, 0 replies; 48+ messages in thread
From: Valentin Schneider @ 2020-12-01 16:04 UTC (permalink / raw)
  To: Barry Song
  Cc: catalin.marinas, will, rjw, lenb, gregkh, Jonathan.Cameron,
	mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, mark.rutland, linux-arm-kernel,
	linux-kernel, linux-acpi, linuxarm, xuwei5, prime.zeng


On 01/12/20 02:59, Barry Song wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1a68a05..ae8ec910 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct *p, int target)
>  
>  #endif /* CONFIG_SCHED_SMT */
>  
> +#ifdef CONFIG_SCHED_CLUSTER
> +/*
> + * Scan the local CLUSTER mask for idle CPUs.
> + */
> +static int select_idle_cluster(struct task_struct *p, int target)
> +{
> +	int cpu;
> +
> +	/* right now, no hardware has both cluster and SMT */
> +	if (sched_smt_active())
> +		return -1;
> +
> +	for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {

Gating this behind this new config only leveraged by arm64 doesn't make it
very generic. Note that powerpc also has this newish "CACHE" level which
seems to overlap in function with your "CLUSTER" one (both are arch
specific, though).

I think what you are after here is an SD_SHARE_PKG_RESOURCES domain walk,
i.e. scan CPUs by increasing cache "distance". We already have it in some
form, as we scan SMT & LLC domains; AFAICT LLC always maps to MC, except
for said powerpc's CACHE thingie.

*If* we are to generally support more levels with SD_SHARE_PKG_RESOURCES,
we could say frob something into select_idle_cpu(). I'm thinking of
something like the incomplete, untested below: 

---
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ae7ceba8fd4f..70692888db00 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6120,7 +6120,7 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
 static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
-	struct sched_domain *this_sd;
+	struct sched_domain *this_sd, *child = NULL;
 	u64 avg_cost, avg_idle;
 	u64 time;
 	int this = smp_processor_id();
@@ -6150,14 +6150,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	time = cpu_clock(this);
 
-	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+	do {
+		/* XXX: sd should start as SMT's parent */
+		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+		if (child)
+			cpumask_andnot(cpus, cpus, sched_domain_span(child));
+
+		for_each_cpu_wrap(cpu, cpus, target) {
+			if (!--nr)
+				return -1;
+			if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+				break;
+		}
 
-	for_each_cpu_wrap(cpu, cpus, target) {
-		if (!--nr)
-			return -1;
-		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
-			break;
-	}
+		child = sd;
+		sd = sd->parent;
+	} while (sd && sd->flags & SD_SHARE_PKG_RESOURCES);
 
 	time = cpu_clock(this) - time;
 	update_avg(&this_sd->avg_scan_cost, time);

^ permalink raw reply related	[flat|nested] 48+ messages in thread


* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-01  2:59   ` Barry Song
@ 2020-12-02  8:27     ` Vincent Guittot
  -1 siblings, 0 replies; 48+ messages in thread
From: Vincent Guittot @ 2020-12-02  8:27 UTC (permalink / raw)
  To: Barry Song
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei5,
	prime.zeng

On Tue, 1 Dec 2020 at 04:04, Barry Song <song.bao.hua@hisilicon.com> wrote:
>
> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> cluster has 4 CPUs. All clusters share L3 cache data, but each cluster
> has a local L3 tag. In addition, the CPUs within a cluster share some
> internal system bus. This means cache coherence overhead inside one cluster
> is much lower than the overhead across clusters.
>
> +-----------------------------------+                          +---------+
> |  +------+    +------+            +---------------------------+         |
> |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> |  +------+    +------+             |    |           |         |         |
> |                                   +----+    L3     |         |         |
> |  +------+    +------+   cluster   |    |    tag    |         |         |
> |  | CPU2 |    | CPU3 |             |    |           |         |         |
> |  +------+    +------+             |    +-----------+         |         |
> |                                   |                          |         |
> +-----------------------------------+                          |         |
> +-----------------------------------+                          |         |
> |  +------+    +------+             +--------------------------+         |
> |  |      |    |      |             |    +-----------+         |         |
> |  +------+    +------+             |    |           |         |         |
> |                                   |    |    L3     |         |         |
> |  +------+    +------+             +----+    tag    |         |         |
> |  |      |    |      |             |    |           |         |         |
> |  +------+    +------+             |    +-----------+         |         |
> |                                   |                          |         |
> +-----------------------------------+                          |   L3    |
>                                                                |   data  |
> +-----------------------------------+                          |         |
> |  +------+    +------+             |    +-----------+         |         |
> |  |      |    |      |             |    |           |         |         |
> |  +------+    +------+             +----+    L3     |         |         |
> |                                   |    |    tag    |         |         |
> |  +------+    +------+             |    |           |         |         |
> |  |      |    |      |            ++    +-----------+         |         |
> |  +------+    +------+            |---------------------------+         |
> +-----------------------------------|                          |         |
> +-----------------------------------|                          |         |
> |  +------+    +------+            +---------------------------+         |
> |  |      |    |      |             |    +-----------+         |         |
> |  +------+    +------+             |    |           |         |         |
> |                                   +----+    L3     |         |         |
> |  +------+    +------+             |    |    tag    |         |         |
> |  |      |    |      |             |    |           |         |         |
> |  +------+    +------+             |    +-----------+         |         |
> |                                   |                          |         |
> +-----------------------------------+                          |         |
> +-----------------------------------+                          |         |
> |  +------+    +------+             +--------------------------+         |
> |  |      |    |      |             |   +-----------+          |         |
> |  +------+    +------+             |   |           |          |         |
> |                                   |   |    L3     |          |         |
> |  +------+    +------+             +---+    tag    |          |         |
> |  |      |    |      |             |   |           |          |         |
> |  +------+    +------+             |   +-----------+          |         |
> |                                   |                          |         |
> +-----------------------------------+                          |         |
> +-----------------------------------+                         ++         |
> |  +------+    +------+             +--------------------------+         |
> |  |      |    |      |             |  +-----------+           |         |
> |  +------+    +------+             |  |           |           |         |
> |                                   |  |    L3     |           |         |
> |  +------+    +------+             +--+    tag    |           |         |
> |  |      |    |      |             |  |           |           |         |
> |  +------+    +------+             |  +-----------+           |         |
> |                                   |                          +---------+
> +-----------------------------------+
>
> This patch adds the sched_domain for clusters. On Kunpeng 920, without
> this patch, domain0 of cpu0 would be MC for cpu0-cpu23 with
> min_interval=24, max_interval=48; with this patch, MC becomes domain1,
> and a new domain0 "CL" covering cpu0-cpu3 is added with min_interval=4 and
> max_interval=8.
> This will affect load balance. For example, without this patch, when cpu0
> becomes idle, it will pull a task from cpu1-cpu23. With this patch, cpu0
> will try to pull a task from cpu1-cpu3 first, which incurs much less
> task migration overhead.
>
> On the other hand, when doing WAKE_AFFINE, this patch will try to find
> a core in the target cluster before scanning the LLC domain.
> This means it will proactively prefer a core which has better cache
> affinity with the target core.
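[Editor's sketch: the cluster-before-LLC ordering described above can be illustrated with toy bitmask "cpumasks" in plain user-space C. This is only an illustration of the selection order the patch introduces, not the kernel code; the helper names are invented for the example.]

```c
#include <assert.h>
#include <stdint.h>

/* Toy cpumask: bit i set means cpu i is a member (or is idle). */
static int scan_mask(uint64_t mask, uint64_t idle)
{
	for (int cpu = 0; cpu < 64; cpu++)
		if ((mask >> cpu & 1) && (idle >> cpu & 1))
			return cpu;
	return -1;		/* no idle cpu in this mask */
}

/*
 * Cluster-first selection: try the 4-cpu cluster containing the
 * target first, then fall back to the whole 24-cpu LLC, mirroring
 * the ordering the patch adds to select_idle_sibling().
 */
static int pick_cpu(uint64_t cluster, uint64_t llc, uint64_t idle)
{
	int cpu = scan_mask(cluster, idle);
	return cpu >= 0 ? cpu : scan_mask(llc, idle);
}
```

With cpu0-cpu3 as the cluster and cpu0-cpu23 as the LLC, an idle in-cluster cpu is picked even when idle cpus exist elsewhere in the LLC; only a fully busy cluster falls back to the LLC scan.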

This is the opposite of what we usually try to do in the
fast wakeup path: minimize resource sharing, for example by finding an
idle core with all SMT siblings idle.

>
> Not much benchmarking has been done yet, but here is a rough hackbench
> result.
> We run the below command with different -g parameters to increase system
> load, changing g from 1 to 4; for each value of g, we run the benchmark
> ten times and record the data to get the average time:
>
> First, we run hackbench in only one NUMA node(cpu0-cpu23):
> $ numactl -N 0 hackbench -p -T -l 100000 -g $1

What is your ref tree? v5.10-rcX or tip/sched/core?

>
> g=1 (seen cpu utilization around 50% for each core)
> Running in threaded mode with 1 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> performance improvement w/ patch: -1.01%
>
> g=2 (seen cpu utilization around 70% for each core)
> Running in threaded mode with 2 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162 9.955=10.1006
> w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> performance improvement w/ patch: 4.08%
>
> g=3 (seen cpu utilization around 90% for each core)
> Running in threaded mode with 3 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674 15.721=15.7727
> w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946 14.895=14.9005
> performance improvement w/ patch: 5.53%
>
> g=4
> Running in threaded mode with 4 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
> w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
> performance improvement w/ patch: 1.22%
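[Editor's sketch: the per-configuration averages quoted above (e.g. 7.5853 for g=1 without the patch) are plain arithmetic means of the ten runs. As a sanity check on the methodology, the computation can be reproduced with a trivial helper; this is illustrative only and not part of the patch.]

```c
#include <assert.h>
#include <math.h>

/* Arithmetic mean of n hackbench wall-clock times, as in the tables. */
static double avg(const double *v, int n)
{
	double sum = 0.0;

	for (int i = 0; i < n; i++)
		sum += v[i];
	return sum / n;
}
```

Feeding in the g=1 "w/o" column (7.689 ... 7.674) reproduces the quoted 7.5853 average.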
>
> After that, we run the same hackbench in both NUMA nodes(cpu0-cpu47):
> g=1
> w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
> w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
> performance improvement by patch: -2.2%
>
> g=2
> w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
> w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
> performance improvement by patch: 9.3%
>
> g=3
> w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235 11.849=11.9557
> w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
> performance improvement by patch: 20.2%
>
> g=4
> w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859 18.515=17.7647
> w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467 10.401=10.4927
> performance improvement by patch: 40.9%
>
> g=5
> w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073 25.819=27.4382
> w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954 13.907=13.9735
> performance improvement by patch: 49.1%
>
> It seems the patch can bring a huge improvement on hackbench, especially
> when we bind hackbench to all of cpu0-cpu47, compared to the 5.53% seen
> when running on a single NUMA node (cpu0-cpu23).
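[Editor's sketch: the "performance improvement" percentages in the tables are derived from the run-time averages as (t_without - t_with) / t_without. A minimal helper, for illustration only:]

```c
#include <assert.h>
#include <math.h>

/*
 * Improvement percentage as used in the tables above; positive means
 * the patched kernel completed the hackbench run faster.
 */
static double improvement_pct(double t_without, double t_with)
{
	return (t_without - t_with) / t_without * 100.0;
}
```

For the g=4 dual-node case, (17.7647 - 10.4927) / 17.7647 gives roughly 40.9%, matching the quoted figure.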

Interesting that this patch mainly impacts the NUMA case.

>
> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> ---
>  arch/arm64/Kconfig       |  7 +++++++
>  arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
>  include/linux/topology.h |  7 +++++++
>  kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
>  4 files changed, 66 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 6d23283..3583c26 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -938,6 +938,13 @@ config SCHED_MC
>           making when dealing with multi-core CPU chips at a cost of slightly
>           increased overhead in some places. If unsure say N here.
>
> +config SCHED_CLUSTER
> +       bool "Cluster scheduler support"
> +       help
> +         Cluster scheduler support improves the CPU scheduler's decision
> +         making when dealing with machines that have clusters (sharing an
> +         internal bus or LLC cache tag). If unsure say N here.
> +
>  config SCHED_SMT
>         bool "SMT scheduler support"
>         help
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 355ee9e..5c8f026 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -32,6 +32,7 @@
>  #include <linux/irq_work.h>
>  #include <linux/kexec.h>
>  #include <linux/kvm_host.h>
> +#include <linux/sched/topology.h>
>
>  #include <asm/alternative.h>
>  #include <asm/atomic.h>
> @@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
>         }
>  }
>
> +static struct sched_domain_topology_level arm64_topology[] = {
> +#ifdef CONFIG_SCHED_SMT
> +       { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> +#endif
> +#ifdef CONFIG_SCHED_CLUSTER
> +       { cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
> +#endif
> +#ifdef CONFIG_SCHED_MC
> +       { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> +#endif
> +       { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> +       { NULL, },
> +};
> +
>  void __init smp_prepare_cpus(unsigned int max_cpus)
>  {
>         const struct cpu_operations *ops;
> @@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
>
>         init_cpu_topology();
>
> +       set_sched_topology(arm64_topology);
> +
>         this_cpu = smp_processor_id();
>         store_cpu_topology(this_cpu);
>         numa_store_cpu_info(this_cpu);
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index 5f66648..2c823c0 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
>  }
>  #endif
>
> +#ifdef CONFIG_SCHED_CLUSTER
> +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> +{
> +       return topology_cluster_cpumask(cpu);
> +}
> +#endif
> +
>  static inline const struct cpumask *cpu_cpu_mask(int cpu)
>  {
>         return cpumask_of_node(cpu_to_node(cpu));
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1a68a05..ae8ec910 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct *p, int target)
>
>  #endif /* CONFIG_SCHED_SMT */
>
> +#ifdef CONFIG_SCHED_CLUSTER
> +/*
> + * Scan the local CLUSTER mask for idle CPUs.
> + */
> +static int select_idle_cluster(struct task_struct *p, int target)
> +{
> +       int cpu;
> +
> +       /* right now, no hardware with both cluster and smt to run */
> +       if (sched_smt_active())

Don't use the SMT static key; use a dedicated one if needed.

> +               return -1;
> +
> +       for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
> +               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> +                       continue;
> +               if (available_idle_cpu(cpu))
> +                       return cpu;
> +       }
> +
> +       return -1;
> +}
> +
> +#else /* CONFIG_SCHED_CLUSTER */
> +
> +static inline int select_idle_cluster(struct task_struct *p, int target)
> +{
> +       return -1;
> +}
> +
> +#endif /* CONFIG_SCHED_CLUSTER */
> +
>  /*
>   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
>   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
> @@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>         if ((unsigned)i < nr_cpumask_bits)
>                 return i;
>
> +       i = select_idle_cluster(p, target);
> +       if ((unsigned)i < nr_cpumask_bits)
> +               return i;

This is yet another loop in the fast wakeup path.

I'm curious to know which part of this patch really gives the perf
improvement:
- Is it the new sched domain level with a shorter interval, which is then
used by load balance to better spread tasks within the cluster and
between clusters?
- Or is it this new loop in the wakeup path, which tries to keep threads
in the same cluster? That is the opposite of the rest of the scheduler,
which tries to spread.

Also, could sched_feat(SIS_PROP) significantly impact your topology
because it breaks before looking at all cores in the LLC, while this
new loop extends the number of tested cores?
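[Editor's sketch: for context on the SIS_PROP question — under SIS_PROP, select_idle_cpu() throttles how many CPUs it scans based on the CPU's average idle time relative to the domain's average scan cost. The helper below is a simplified user-space approximation of that throttle (modeled on kernel/sched/fair.c of this era); treat the exact scaling as an assumption.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Approximation of the SIS_PROP scan-depth throttle: scan more CPUs
 * when the waker has been idle for long relative to the average cost
 * of scanning one CPU, but never scan fewer than 4 CPUs.
 */
static uint64_t sis_prop_nr(uint64_t span_weight, uint64_t avg_idle,
			    uint64_t avg_scan_cost)
{
	uint64_t span_avg = span_weight * avg_idle;

	if (span_avg > 4 * avg_scan_cost)
		return span_avg / avg_scan_cost;
	return 4;
}
```

On a 24-CPU LLC, a short avg_idle clamps the scan to 4 CPUs, which is why an extra always-taken cluster loop can change how many cores effectively get tested.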

> +
>         i = select_idle_cpu(p, sd, target);
>         if ((unsigned)i < nr_cpumask_bits)
>                 return i;
> --
> 2.7.4
>


* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-02  8:27     ` Vincent Guittot
@ 2020-12-02  9:20       ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-02  9:20 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Wednesday, December 2, 2020 9:27 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> On Tue, 1 Dec 2020 at 04:04, Barry Song <song.bao.hua@hisilicon.com> wrote:
> >
> > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> > has local L3 tag. On the other hand, each cluster shares some
> > internal system bus. This means cache coherence overhead inside one cluster
> > is much less than the overhead across clusters.
> >
> > +-----------------------------------+                          +---------+
> > |  +------+    +------+            +---------------------------+         |
> > |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |                                   +----+    L3     |         |         |
> > |  +------+    +------+   cluster   |    |    tag    |         |         |
> > |  | CPU2 |    | CPU3 |             |    |           |         |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |         |
> > +-----------------------------------+                          |         |
> > |  +------+    +------+             +--------------------------+         |
> > |  |      |    |      |             |    +-----------+         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |                                   |    |    L3     |         |         |
> > |  +------+    +------+             +----+    tag    |         |         |
> > |  |      |    |      |             |    |           |         |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |   L3    |
> >                                                                |   data  |
> > +-----------------------------------+                          |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |  |      |    |      |             |    |           |         |         |
> > |  +------+    +------+             +----+    L3     |         |         |
> > |                                   |    |    tag    |         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |  |      |    |      |            ++    +-----------+         |         |
> > |  +------+    +------+            |---------------------------+         |
> > +-----------------------------------|                          |         |
> > +-----------------------------------|                          |         |
> > |  +------+    +------+            +---------------------------+         |
> > |  |      |    |      |             |    +-----------+         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |                                   +----+    L3     |         |         |
> > |  +------+    +------+             |    |    tag    |         |         |
> > |  |      |    |      |             |    |           |         |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |         |
> > +-----------------------------------+                          |         |
> > |  +------+    +------+             +--------------------------+         |
> > |  |      |    |      |             |   +-----------+          |         |
> > |  +------+    +------+             |   |           |          |         |
> > |                                   |   |    L3     |          |         |
> > |  +------+    +------+             +---+    tag    |          |         |
> > |  |      |    |      |             |   |           |          |         |
> > |  +------+    +------+             |   +-----------+          |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |         |
> > +-----------------------------------+                         ++         |
> > |  +------+    +------+             +--------------------------+         |
> > |  |      |    |      |             |  +-----------+           |         |
> > |  +------+    +------+             |  |           |           |         |
> > |                                   |  |    L3     |           |         |
> > |  +------+    +------+             +--+    tag    |           |         |
> > |  |      |    |      |             |  |           |           |         |
> > |  +------+    +------+             |  +-----------+           |         |
> > |                                   |                          +---------+
> > +-----------------------------------+
> >
> > This patch adds a sched_domain for clusters. On Kunpeng 920, without
> > this patch, domain0 of cpu0 would be MC covering cpu0-cpu23 with
> > min_interval=24, max_interval=48; with this patch, MC becomes domain1,
> > and a new domain0 "CL" covering cpu0-cpu3 is added with min_interval=4
> > and max_interval=8.
> > This will affect load balance. For example, without this patch, when cpu0
> > becomes idle, it will pull a task from cpu1-cpu23. With this patch, cpu0
> > will try to pull a task from cpu1-cpu3 first. This greatly reduces the
> > overhead of task migration.
> >
> > On the other hand, when doing WAKE_AFFINE, this patch tries to find
> > a core in the target cluster before scanning the LLC domain.
> > This means it proactively prefers a core which has better affinity
> > with the target core.
> 
> Which is the opposite of what we usually try to do in the fast
> wakeup path: minimizing resource sharing, e.g. by finding an
> idle core with all SMT siblings idle

In the wake_affine case, I guess we actually want some kind of
resource sharing, such as the LLC, to bring the waker and wakee
closer to each other. find_idlest_cpu() is really the opposite.

So the real question is: is the LLC always the right scope for
picking an idle sibling?

In this case, the 6 clusters are in the same LLC, but the hardware
behaves differently inside a single cluster than across clusters.


> 
> >
> > Not much benchmarking has been done yet, but here is a rough hackbench
> > result. We run the command below with different -g parameters to increase
> > system load, changing g from 1 to 4; for each value of g we run the
> > benchmark ten times and record the data to get the average time:
> >
> > First, we run hackbench in only one NUMA node (cpu0-cpu23):
> > $ numactl -N 0 hackbench -p -T -l 100000 -g $1
> 
> What is your ref tree ? v5.10-rcX or tip/sched/core ?

Actually I was using the 5.9 release. That must seem weird,
but the reason is that the disk driver hangs on my
hardware with 5.10-rcX.

> 
> >
> > g=1 (seen cpu utilization around 50% for each core)
> > Running in threaded mode with 1 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > performance improvement w/ patch: -1.01%
> >
> > g=2 (seen cpu utilization around 70% for each core)
> > Running in threaded mode with 2 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162 9.955=10.1006
> > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > performance improvement w/ patch: 4.08%
> >
> > g=3 (seen cpu utilization around 90% for each core)
> > Running in threaded mode with 3 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674 15.721=15.7727
> > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946 14.895=14.9005
> > performance improvement w/ patch: 5.53%
> >
> > g=4
> > Running in threaded mode with 4 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
> > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
> > performance improvement w/ patch: 1.22%
> >
> > After that, we run the same hackbench in both NUMA nodes (cpu0-cpu47):
> > g=1
> > w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
> > w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
> > performance improvement by patch: -2.2%
> >
> > g=2
> > w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
> > w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
> > performance improvement by patch: 9.3%
> >
> > g=3
> > w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235 11.849=11.9557
> > w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
> > performance improvement by patch: 20.2%
> >
> > g=4
> > w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859 18.515=17.7647
> > w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467 10.401=10.4927
> > performance improvement by patch: 40.9%
> >
> > g=5
> > w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073 25.819=27.4382
> > w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954 13.907=13.9735
> > performance improvement by patch: 49.1%
> >
> > It seems the patch can bring a huge improvement on hackbench, especially
> > when we bind hackbench to all of cpu0-cpu47, compared to 5.53% when
> > running on a single NUMA node (cpu0-cpu23).
> 
> Interesting that this patch mainly impacts the numa case
> 
> >
> > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > ---
> >  arch/arm64/Kconfig       |  7 +++++++
> >  arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
> >  include/linux/topology.h |  7 +++++++
> >  kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
> >  4 files changed, 66 insertions(+)
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index 6d23283..3583c26 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -938,6 +938,13 @@ config SCHED_MC
> >           making when dealing with multi-core CPU chips at a cost of slightly
> >           increased overhead in some places. If unsure say N here.
> >
> > +config SCHED_CLUSTER
> > +       bool "Cluster scheduler support"
> > +       help
> > +         Cluster scheduler support improves the CPU scheduler's decision
> > +         making when dealing with machines that have clusters(sharing internal
> > +         bus or sharing LLC cache tag). If unsure say N here.
> > +
> >  config SCHED_SMT
> >         bool "SMT scheduler support"
> >         help
> > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > index 355ee9e..5c8f026 100644
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -32,6 +32,7 @@
> >  #include <linux/irq_work.h>
> >  #include <linux/kexec.h>
> >  #include <linux/kvm_host.h>
> > +#include <linux/sched/topology.h>
> >
> >  #include <asm/alternative.h>
> >  #include <asm/atomic.h>
> > @@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
> >         }
> >  }
> >
> > +static struct sched_domain_topology_level arm64_topology[] = {
> > +#ifdef CONFIG_SCHED_SMT
> > +        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> > +#endif
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       { cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
> > +#endif
> > +#ifdef CONFIG_SCHED_MC
> > +        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> > +#endif
> > +       { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> > +        { NULL, },
> > +};
> > +
> >  void __init smp_prepare_cpus(unsigned int max_cpus)
> >  {
> >         const struct cpu_operations *ops;
> > @@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
> >
> >         init_cpu_topology();
> >
> > +       set_sched_topology(arm64_topology);
> > +
> >         this_cpu = smp_processor_id();
> >         store_cpu_topology(this_cpu);
> >         numa_store_cpu_info(this_cpu);
> > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > index 5f66648..2c823c0 100644
> > --- a/include/linux/topology.h
> > +++ b/include/linux/topology.h
> > @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
> >  }
> >  #endif
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > +{
> > +       return topology_cluster_cpumask(cpu);
> > +}
> > +#endif
> > +
> >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> >  {
> >         return cpumask_of_node(cpu_to_node(cpu));
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1a68a05..ae8ec910 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct *p, int target)
> >
> >  #endif /* CONFIG_SCHED_SMT */
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +/*
> > + * Scan the local CLUSTER mask for idle CPUs.
> > + */
> > +static int select_idle_cluster(struct task_struct *p, int target)
> > +{
> > +       int cpu;
> > +
> > +       /* right now, no hardware with both cluster and smt to run */
> > +       if (sched_smt_active())
> 
> don't use smt static key but a dedicated one if needed

Sure. 

> 
> > +               return -1;
> > +
> > +       for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
> > +               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> > +                       continue;
> > +               if (available_idle_cpu(cpu))
> > +                       return cpu;
> > +       }
> > +
> > +       return -1;
> > +}
> > +
> > +#else /* CONFIG_SCHED_CLUSTER */
> > +
> > +static inline int select_idle_cluster(struct task_struct *p, int target)
> > +{
> > +       return -1;
> > +}
> > +
> > +#endif /* CONFIG_SCHED_CLUSTER */
> > +
> >  /*
> >   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
> > @@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> >         if ((unsigned)i < nr_cpumask_bits)
> >                 return i;
> >
> > +       i = select_idle_cluster(p, target);
> > +       if ((unsigned)i < nr_cpumask_bits)
> > +               return i;
> 
> This is yet another loop in the fast wake up path.
> 
> I'm curious to know which part of this patch really gives the perf improvement:
> - Is it the new sched domain level with a shorter interval that is then
> used by load balance to better spread tasks within the cluster and
> between clusters?
> - Or is it this new loop in the wakeup path, which tries to keep threads
> in the same cluster? That is the opposite of the rest of the scheduler,
> which tries to spread.

If I don't scan the cluster first for wake_affine, I see almost no
hackbench change from the new sched_domain alone.
For example:
g=4 in hackbench on cpu0-cpu47 (two NUMA nodes):
w/o patch: 17.7647 (average time over 10 hackbench runs)
w/ the full patch: 10.4927
w/ the patch but select_idle_cluster() dropped: 15.0931

So I don't think the hackbench improvement mainly comes from the shorter
load-balance interval of the new cluster domain.

What really matters is select_idle_cluster(), according to my tests.
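As an aside, the averages quoted in this thread are plain arithmetic means
of the ten runs. For example, the 7.5853 figure for g=1 (single node,
w/o patch) can be reproduced with a one-liner (awk assumed available):

```shell
# Average the 10 recorded hackbench times for g=1, w/o patch
echo "7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674" |
awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.4f\n", s / NF }'
# prints 7.5853
```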

> 
> Also, could sched_feat(SIS_PROP) significantly impact your topology,
> because it bails out before looking at all cores in the LLC?
> And does this new loop extend the number of tested cores?

In this case, the cluster must belong to the LLC; the cluster domain is a child of the LLC domain.

Maybe the code Valentin suggested in his reply is a good way to
keep the code aligned with the existing select_idle_cpu():

static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
-	struct sched_domain *this_sd;
+	struct sched_domain *this_sd, *child = NULL;
 	u64 avg_cost, avg_idle;
 	u64 time;
 	int this = smp_processor_id();
@@ -6150,14 +6150,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	time = cpu_clock(this);
 
-	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+	do {
+		/* XXX: sd should start as SMT's parent */
+		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+		if (child)
+			cpumask_andnot(cpus, cpus, sched_domain_span(child));
+
+		for_each_cpu_wrap(cpu, cpus, target) {
+			if (!--nr)
+				return -1;
+			if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+				break;
+		}
 
-	for_each_cpu_wrap(cpu, cpus, target) {
-		if (!--nr)
-			return -1;
-		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
-			break;
-	}
+		child = sd;
+		sd = sd->parent;
+	} while (sd && sd->flags & SD_SHARE_PKG_RESOURCES);

> 
> > +
> >         i = select_idle_cpu(p, sd, target);
> >         if ((unsigned)i < nr_cpumask_bits)
> >                 return i;
> > --
> > 2.7.4
> >

Thanks
Barry


^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
@ 2020-12-02  9:20       ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-02  9:20 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Juri Lelli, Mark Rutland, Zengtao (B),
	Peter Zijlstra, Catalin Marinas, Jonathan Cameron,
	Rafael J. Wysocki, linux-kernel, Steven Rostedt,
	Dietmar Eggemann, Ben Segall, ACPI Devel Maling List,
	Ingo Molnar, Linuxarm, Mel Gorman, xuwei (O),
	gregkh, Will Deacon, Valentin Schneider, LAK, Cc: Len Brown



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Wednesday, December 2, 2020 9:27 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> On Tue, 1 Dec 2020 at 04:04, Barry Song <song.bao.hua@hisilicon.com> wrote:
> >
> > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> > has local L3 tag. On the other hand, each clusters will share some
> > internal system bus. This means cache coherence overhead inside one cluster
> > is much less than the overhead across clusters.
> >
> > +-----------------------------------+                          +---------+
> > |  +------+    +------+            +---------------------------+         |
> > |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |                                   +----+    L3     |         |         |
> > |  +------+    +------+   cluster   |    |    tag    |         |         |
> > |  | CPU2 |    | CPU3 |             |    |           |         |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |         |
> > +-----------------------------------+                          |         |
> > |  +------+    +------+             +--------------------------+         |
> > |  |      |    |      |             |    +-----------+         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |                                   |    |    L3     |         |         |
> > |  +------+    +------+             +----+    tag    |         |         |
> > |  |      |    |      |             |    |           |         |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |   L3    |
> >                                                                |   data  |
> > +-----------------------------------+                          |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |  |      |    |      |             |    |           |         |         |
> > |  +------+    +------+             +----+    L3     |         |         |
> > |                                   |    |    tag    |         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |  |      |    |      |            ++    +-----------+         |         |
> > |  +------+    +------+            |---------------------------+         |
> > +-----------------------------------|                          |         |
> > +-----------------------------------|                          |         |
> > |  +------+    +------+            +---------------------------+         |
> > |  |      |    |      |             |    +-----------+         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |                                   +----+    L3     |         |         |
> > |  +------+    +------+             |    |    tag    |         |         |
> > |  |      |    |      |             |    |           |         |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |         |
> > +-----------------------------------+                          |         |
> > |  +------+    +------+             +--------------------------+         |
> > |  |      |    |      |             |   +-----------+          |         |
> > |  +------+    +------+             |   |           |          |         |
> > |                                   |   |    L3     |          |         |
> > |  +------+    +------+             +---+    tag    |          |         |
> > |  |      |    |      |             |   |           |          |         |
> > |  +------+    +------+             |   +-----------+          |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |         |
> > +-----------------------------------+                         ++         |
> > |  +------+    +------+             +--------------------------+         |
> > |  |      |    |      |             |  +-----------+           |         |
> > |  +------+    +------+             |  |           |           |         |
> > |                                   |  |    L3     |           |         |
> > |  +------+    +------+             +--+    tag    |           |         |
> > |  |      |    |      |             |  |           |           |         |
> > |  +------+    +------+             |  +-----------+           |         |
> > |                                   |                          +---------+
> > +-----------------------------------+
> >
> > This patch adds the sched_domain for clusters. On kunpeng 920, without
> > this patch, domain0 of cpu0 would be MC for cpu0-cpu23 with
> > min_interval=24, max_interval=48; with this patch, MC becomes domain1,
> > a new domain0 "CL" including cpu0-cpu3 is added with min_interval=4 and
> > max_interval=8.
> > This will affect load balance. For example, without this patch, while cpu0
> > becomes idle, it will pull a task from cpu1-cpu15. With this patch, cpu0
> > will try to pull a task from cpu1-cpu3 first. This will have much less
> > overhead of task migration.
> >
> > On the other hand, while doing WAKE_AFFINE, this patch will try to find
> > a core in the target cluster before scanning the llc domain.
> > This means it will proactively use a core which has better affinity with
> > target core at first.
> 
> Which is at the opposite of what we are usually trying to do in the
> fast wakeup path: trying to minimize resource sharing by finding an
> idle core with all smt idle as an example

In wake_affine case, I guess we are actually want some kind of
resource sharing such as LLC to get waker and wakee get closer
to each other. find_idlest_cpu() is really opposite.

So the real question is that LLC is always the right choice of
idle sibling? 

In this case, 6 clusters are in same LLC, but hardware has different
behavior for inside single cluster and across multiple clusters.


> 
> >
> > Not much benchmark has been done yet. but here is a rough hackbench
> > result.
> > we run the below command with different -g parameter to increase system load
> > by changing g from 1 to 4, for each one of 1-4, we run the benchmark ten times
> > and record the data to get the average time:
> >
> > First, we run hackbench in only one NUMA node(cpu0-cpu23):
> > $ numactl -N 0 hackbench -p -T -l 100000 -g $1
> 
> What is your ref tree ? v5.10-rcX or tip/sched/core ?

Actually I was using 5.9 release.  That must be weird. 
But the reason is that disk driver is getting hang
in my hardware in 5.10-rcx.

> 
> >
> > g=1 (seen cpu utilization around 50% for each core)
> > Running in threaded mode with 1 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > performance improvement w/ patch: -1.01%
> >
> > g=2 (seen cpu utilization around 70% for each core)
> > Running in threaded mode with 2 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> 9.955=10.1006
> > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > performance improvement w/ patch: 4.08%
> >
> > g=3 (seen cpu utilization around 90% for each core)
> > Running in threaded mode with 3 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> 15.721=15.7727
> > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> 14.895=14.9005
> > performance improvement w/ patch: 5.53%
> >
> > g=4
> > Running in threaded mode with 4 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> 21.090=20.8187
> > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> 20.452=20.5647
> > performance improvement w/ patch: 1.22%
> >
> > After that, we run the same hackbench in both NUMA nodes(cpu0-cpu47):
> > g=1
> > w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
> > w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
> > performance improvement by patch: -2.2%
> >
> > g=2
> > w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
> > w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
> > performance improvement by patch: 9.3%
> >
> > g=3
> > w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235
> 11.849=11.9557
> > w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
> > performance improvement by patch: 20.2%
> >
> > g=4
> > w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859
> 18.515=17.7647
> > w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467
> 10.401=10.4927
> > performance improvement by patch: 40.9%
> >
> > g=5
> > w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073
> 25.819=27.4382
> > w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954
> 13.907=13.9735
> > performance improvement by patch: 49.1%
> >
> > It seems the patch can bring a huge increase on hackbench especially when
> > we bind hackbench to all of cpu0-cpu47, comparing to 5.53% while running
> > on single NUMA node(cpu0-cpu23)
> 
> Interesting that this patch mainly impacts the numa case
> 
> >
> > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > ---
> >  arch/arm64/Kconfig       |  7 +++++++
> >  arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
> >  include/linux/topology.h |  7 +++++++
> >  kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
> >  4 files changed, 66 insertions(+)
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index 6d23283..3583c26 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -938,6 +938,13 @@ config SCHED_MC
> >           making when dealing with multi-core CPU chips at a cost of slightly
> >           increased overhead in some places. If unsure say N here.
> >
> > +config SCHED_CLUSTER
> > +       bool "Cluster scheduler support"
> > +       help
> > +         Cluster scheduler support improves the CPU scheduler's decision
> > +         making when dealing with machines that have clusters(sharing internal
> > +         bus or sharing LLC cache tag). If unsure say N here.
> > +
> >  config SCHED_SMT
> >         bool "SMT scheduler support"
> >         help
> > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > index 355ee9e..5c8f026 100644
> > --- a/arch/arm64/kernel/smp.c
> > +++ b/arch/arm64/kernel/smp.c
> > @@ -32,6 +32,7 @@
> >  #include <linux/irq_work.h>
> >  #include <linux/kexec.h>
> >  #include <linux/kvm_host.h>
> > +#include <linux/sched/topology.h>
> >
> >  #include <asm/alternative.h>
> >  #include <asm/atomic.h>
> > @@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
> >         }
> >  }
> >
> > +static struct sched_domain_topology_level arm64_topology[] = {
> > +#ifdef CONFIG_SCHED_SMT
> > +        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> > +#endif
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       { cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
> > +#endif
> > +#ifdef CONFIG_SCHED_MC
> > +        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> > +#endif
> > +       { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> > +        { NULL, },
> > +};
> > +
> >  void __init smp_prepare_cpus(unsigned int max_cpus)
> >  {
> >         const struct cpu_operations *ops;
> > @@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
> >
> >         init_cpu_topology();
> >
> > +       set_sched_topology(arm64_topology);
> > +
> >         this_cpu = smp_processor_id();
> >         store_cpu_topology(this_cpu);
> >         numa_store_cpu_info(this_cpu);
> > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > index 5f66648..2c823c0 100644
> > --- a/include/linux/topology.h
> > +++ b/include/linux/topology.h
> > @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int
> cpu)
> >  }
> >  #endif
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > +{
> > +       return topology_cluster_cpumask(cpu);
> > +}
> > +#endif
> > +
> >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> >  {
> >         return cpumask_of_node(cpu_to_node(cpu));
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1a68a05..ae8ec910 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct
> *p, int target)
> >
> >  #endif /* CONFIG_SCHED_SMT */
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +/*
> > + * Scan the local CLUSTER mask for idle CPUs.
> > + */
> > +static int select_idle_cluster(struct task_struct *p, int target)
> > +{
> > +       int cpu;
> > +
> > +       /* right now, no hardware with both cluster and smt to run */
> > +       if (sched_smt_active())
> 
> don't use smt static key but a dedicated one if needed

Sure. 

> 
> > +               return -1;
> > +
> > +       for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
> > +               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> > +                       continue;
> > +               if (available_idle_cpu(cpu))
> > +                       return cpu;
> > +       }
> > +
> > +       return -1;
> > +}
> > +
> > +#else /* CONFIG_SCHED_CLUSTER */
> > +
> > +static inline int select_idle_cluster(struct task_struct *p, int target)
> > +{
> > +       return -1;
> > +}
> > +
> > +#endif /* CONFIG_SCHED_CLUSTER */
> > +
> >  /*
> >   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against
> the
> > @@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct *p,
> int prev, int target)
> >         if ((unsigned)i < nr_cpumask_bits)
> >                 return i;
> >
> > +       i = select_idle_cluster(p, target);
> > +       if ((unsigned)i < nr_cpumask_bits)
> > +               return i;
> 
> This is yet another loop in the fast wake up path.
> 
> I'm curious to know which part of this patch really gives the perf improvement ?
> -Is it the new sched domain level with a shorter interval that is then
> used by Load balance to better spread task in the cluster and between
> clusters ?
> -Or this new loop in the wake up path which tries to keep threads in
> the same cluster ? which is at the opposite of the rest of the
> scheduler which tries to spread

If I don't scan the cluster first for wake_affine, I see almost no
hackbench change from the new sched_domain alone.
For example:
g=4 in hackbench on cpu0-cpu47 (two NUMA nodes)
w/o patch: 17.7647 (average time over 10 hackbench runs)
w/ the full patch: 10.4927
w/ patch but drop select_idle_cluster(): 15.0931

So I don't think the hackbench improvement mainly comes from the shorter
load-balance interval of the new cluster domain.

What really matters is select_idle_cluster(), according to my tests.

> 
> Also could the sched_feat(SIS_PROP) impacts significantly your
> topology because it  breaks before looking for all cores in the LLC ?
> And this new loop extends the number of tested core ?

In this case, cluster must belong to LLC. Cluster is the child of LLC.

Maybe the code Valentin suggested in his reply is a good way to
keep the code aligned with the existing select_idle_cpu():

static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
-	struct sched_domain *this_sd;
+	struct sched_domain *this_sd, *child = NULL;
 	u64 avg_cost, avg_idle;
 	u64 time;
 	int this = smp_processor_id();
@@ -6150,14 +6150,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	time = cpu_clock(this);
 
-	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+	do {
+		/* XXX: sd should start as SMT's parent */
+		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+		if (child)
+			cpumask_andnot(cpus, cpus, sched_domain_span(child));
+
+		for_each_cpu_wrap(cpu, cpus, target) {
+			if (!--nr)
+				return -1;
+			if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
+				break;
+		}
 
-	for_each_cpu_wrap(cpu, cpus, target) {
-		if (!--nr)
-			return -1;
-		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
-			break;
-	}
+		child = sd;
+		sd = sd->parent;
+	} while (sd && sd->flags & SD_SHARE_PKG_RESOURCES);

> 
> > +
> >         i = select_idle_cpu(p, sd, target);
> >         if ((unsigned)i < nr_cpumask_bits)
> >                 return i;
> > --
> > 2.7.4
> >

Thanks
Barry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 1/2] topology: Represent clusters of CPUs within a die.
  2020-12-01 16:03     ` Valentin Schneider
@ 2020-12-02  9:55       ` Sudeep Holla
  -1 siblings, 0 replies; 48+ messages in thread
From: Sudeep Holla @ 2020-12-02  9:55 UTC (permalink / raw)
  To: Barry Song
  Cc: Valentin Schneider, catalin.marinas, will, rjw, lenb, gregkh,
	Jonathan.Cameron, mingo, peterz, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi, linuxarm, xuwei5,
	prime.zeng, Sudeep Holla

On Tue, Dec 01, 2020 at 04:03:46PM +0000, Valentin Schneider wrote:
> 
> On 01/12/20 02:59, Barry Song wrote:
> > Currently the ID provided is the offset of the Processor
> > Hierarchy Nodes Structure within PPTT.  Whilst this is unique
> > it is not terribly elegant so alternative suggestions welcome.
> >

I had already mentioned that you need to fix the firmware/PPTT on your
platform. If you fill in only the mandatory fields, then yes, this is
optional and we resort to using the offset as a unique number in the kernel.

>
> Skimming through the spec, this sounds like something the ID structure
> (Type 2) could be used for. However in v1 Jonathan and Sudeep talked about
> UID's / DSDT, any news on that?
>

FYI, type 2 is for SoC identification, which is being deprecated (still
need to check whether that has progressed and made it into the official
release yet) in favour of Arm SMCCC v1.2 SOC_ID. Anyway, it is
irrelevant in this context. They need to use UIDs and mark the
corresponding flag as valid for the OSPM/kernel to use it.

> > Note that arm64 / ACPI does not provide any means of identifying
> > a die level in the topology but that may be unrelated to the cluster
> > level.
> >

A spec extension may be needed if there is no way to identify it.

-- 
Regards,
Sudeep



* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-02  9:20       ` Song Bao Hua (Barry Song)
@ 2020-12-02 10:16         ` Vincent Guittot
  -1 siblings, 0 replies; 48+ messages in thread
From: Vincent Guittot @ 2020-12-02 10:16 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)

On Wed, 2 Dec 2020 at 10:20, Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > Sent: Wednesday, December 2, 2020 9:27 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> >
> > On Tue, 1 Dec 2020 at 04:04, Barry Song <song.bao.hua@hisilicon.com> wrote:
> > >
> > > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > > cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> > > has local L3 tag. On the other hand, each cluster will share some
> > > internal system bus. This means cache coherence overhead inside one cluster
> > > is much less than the overhead across clusters.
> > >
> > > +-----------------------------------+                          +---------+
> > > |  +------+    +------+            +---------------------------+         |
> > > |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> > > |  +------+    +------+             |    |           |         |         |
> > > |                                   +----+    L3     |         |         |
> > > |  +------+    +------+   cluster   |    |    tag    |         |         |
> > > |  | CPU2 |    | CPU3 |             |    |           |         |         |
> > > |  +------+    +------+             |    +-----------+         |         |
> > > |                                   |                          |         |
> > > +-----------------------------------+                          |         |
> > > +-----------------------------------+                          |         |
> > > |  +------+    +------+             +--------------------------+         |
> > > |  |      |    |      |             |    +-----------+         |         |
> > > |  +------+    +------+             |    |           |         |         |
> > > |                                   |    |    L3     |         |         |
> > > |  +------+    +------+             +----+    tag    |         |         |
> > > |  |      |    |      |             |    |           |         |         |
> > > |  +------+    +------+             |    +-----------+         |         |
> > > |                                   |                          |         |
> > > +-----------------------------------+                          |   L3    |
> > >                                                                |   data  |
> > > +-----------------------------------+                          |         |
> > > |  +------+    +------+             |    +-----------+         |         |
> > > |  |      |    |      |             |    |           |         |         |
> > > |  +------+    +------+             +----+    L3     |         |         |
> > > |                                   |    |    tag    |         |         |
> > > |  +------+    +------+             |    |           |         |         |
> > > |  |      |    |      |            ++    +-----------+         |         |
> > > |  +------+    +------+            |---------------------------+         |
> > > +-----------------------------------|                          |         |
> > > +-----------------------------------|                          |         |
> > > |  +------+    +------+            +---------------------------+         |
> > > |  |      |    |      |             |    +-----------+         |         |
> > > |  +------+    +------+             |    |           |         |         |
> > > |                                   +----+    L3     |         |         |
> > > |  +------+    +------+             |    |    tag    |         |         |
> > > |  |      |    |      |             |    |           |         |         |
> > > |  +------+    +------+             |    +-----------+         |         |
> > > |                                   |                          |         |
> > > +-----------------------------------+                          |         |
> > > +-----------------------------------+                          |         |
> > > |  +------+    +------+             +--------------------------+         |
> > > |  |      |    |      |             |   +-----------+          |         |
> > > |  +------+    +------+             |   |           |          |         |
> > > |                                   |   |    L3     |          |         |
> > > |  +------+    +------+             +---+    tag    |          |         |
> > > |  |      |    |      |             |   |           |          |         |
> > > |  +------+    +------+             |   +-----------+          |         |
> > > |                                   |                          |         |
> > > +-----------------------------------+                          |         |
> > > +-----------------------------------+                         ++         |
> > > |  +------+    +------+             +--------------------------+         |
> > > |  |      |    |      |             |  +-----------+           |         |
> > > |  +------+    +------+             |  |           |           |         |
> > > |                                   |  |    L3     |           |         |
> > > |  +------+    +------+             +--+    tag    |           |         |
> > > |  |      |    |      |             |  |           |           |         |
> > > |  +------+    +------+             |  +-----------+           |         |
> > > |                                   |                          +---------+
> > > +-----------------------------------+
> > >
> > > This patch adds the sched_domain for clusters. On kunpeng 920, without
> > > this patch, domain0 of cpu0 would be MC for cpu0-cpu23 with
> > > min_interval=24, max_interval=48; with this patch, MC becomes domain1,
> > > a new domain0 "CL" including cpu0-cpu3 is added with min_interval=4 and
> > > max_interval=8.
> > > This will affect load balance. For example, without this patch, while cpu0
> > > becomes idle, it will pull a task from cpu1-cpu15. With this patch, cpu0
> > > will try to pull a task from cpu1-cpu3 first. This will have much less
> > > overhead of task migration.
> > >
> > > On the other hand, while doing WAKE_AFFINE, this patch will try to find
> > > a core in the target cluster before scanning the llc domain.
> > > This means it will proactively use a core which has better affinity with
> > > target core at first.
> >
> > Which is the opposite of what we usually try to do in the
> > fast wakeup path: minimizing resource sharing by finding, for
> > example, an idle core with all SMT siblings idle
>
> In the wake_affine case, I guess we actually want some kind of
> resource sharing such as LLC to get waker and wakee closer

In wake_affine, we don't want to move outside the LLC, but then within
the LLC we try to minimize resource sharing, e.g. looking for a core
whose SMT siblings are all idle.

> to each other. find_idlest_cpu() is really the opposite.
>
> So the real question is that LLC is always the right choice of
> idle sibling?

That's the eternal question: spread or gather

>
> In this case, 6 clusters are in same LLC, but hardware has different
> behavior for inside single cluster and across multiple clusters.
>
>
> >
> > >
> > > Not much benchmark has been done yet. but here is a rough hackbench
> > > result.
> > > we run the below command with different -g parameter to increase system load
> > > by changing g from 1 to 4, for each one of 1-4, we run the benchmark ten times
> > > and record the data to get the average time:
> > >
> > > First, we run hackbench in only one NUMA node(cpu0-cpu23):
> > > $ numactl -N 0 hackbench -p -T -l 100000 -g $1
> >
> > What is your ref tree ? v5.10-rcX or tip/sched/core ?
>
> Actually I was using the 5.9 release. That must seem weird,
> but the reason is that the disk driver hangs
> on my hardware with 5.10-rcX.

In fact there are several changes in v5.10 and tip/sched/core that
could help your topology

>
> >
> > >
> > > g=1 (seen cpu utilization around 50% for each core)
> > > Running in threaded mode with 1 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > performance improvement w/ patch: -1.01%
> > >
> > > g=2 (seen cpu utilization around 70% for each core)
> > > Running in threaded mode with 2 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > 9.955=10.1006
> > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > performance improvement w/ patch: 4.08%
> > >
> > > g=3 (seen cpu utilization around 90% for each core)
> > > Running in threaded mode with 3 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > 15.721=15.7727
> > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > 14.895=14.9005
> > > performance improvement w/ patch: 5.53%
> > >
> > > g=4
> > > Running in threaded mode with 4 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > 21.090=20.8187
> > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > 20.452=20.5647
> > > performance improvement w/ patch: 1.22%
> > >
> > > After that, we run the same hackbench in both NUMA nodes(cpu0-cpu47):
> > > g=1
> > > w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
> > > w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
> > > performance improvement by patch: -2.2%
> > >
> > > g=2
> > > w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
> > > w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
> > > performance improvement by patch: 9.3%
> > >
> > > g=3
> > > w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235
> > 11.849=11.9557
> > > w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
> > > performance improvement by patch: 20.2%
> > >
> > > g=4
> > > w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859
> > 18.515=17.7647
> > > w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467
> > 10.401=10.4927
> > > performance improvement by patch: 40.9%
> > >
> > > g=5
> > > w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073
> > 25.819=27.4382
> > > w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954
> > 13.907=13.9735
> > > performance improvement by patch: 49.1%
> > >
> > > It seems the patch can bring a huge increase on hackbench especially when
> > > we bind hackbench to all of cpu0-cpu47, comparing to 5.53% while running
> > > on single NUMA node(cpu0-cpu23)
> >
> > Interesting that this patch mainly impacts the numa case
> >
> > >
> > > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > > ---
> > >  arch/arm64/Kconfig       |  7 +++++++
> > >  arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
> > >  include/linux/topology.h |  7 +++++++
> > >  kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
> > >  4 files changed, 66 insertions(+)
> > >
> > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > index 6d23283..3583c26 100644
> > > --- a/arch/arm64/Kconfig
> > > +++ b/arch/arm64/Kconfig
> > > @@ -938,6 +938,13 @@ config SCHED_MC
> > >           making when dealing with multi-core CPU chips at a cost of slightly
> > >           increased overhead in some places. If unsure say N here.
> > >
> > > +config SCHED_CLUSTER
> > > +       bool "Cluster scheduler support"
> > > +       help
> > > +         Cluster scheduler support improves the CPU scheduler's decision
> > > +         making when dealing with machines that have clusters(sharing internal
> > > +         bus or sharing LLC cache tag). If unsure say N here.
> > > +
> > >  config SCHED_SMT
> > >         bool "SMT scheduler support"
> > >         help
> > > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > > index 355ee9e..5c8f026 100644
> > > --- a/arch/arm64/kernel/smp.c
> > > +++ b/arch/arm64/kernel/smp.c
> > > @@ -32,6 +32,7 @@
> > >  #include <linux/irq_work.h>
> > >  #include <linux/kexec.h>
> > >  #include <linux/kvm_host.h>
> > > +#include <linux/sched/topology.h>
> > >
> > >  #include <asm/alternative.h>
> > >  #include <asm/atomic.h>
> > > @@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
> > >         }
> > >  }
> > >
> > > +static struct sched_domain_topology_level arm64_topology[] = {
> > > +#ifdef CONFIG_SCHED_SMT
> > > +        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> > > +#endif
> > > +#ifdef CONFIG_SCHED_CLUSTER
> > > +       { cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
> > > +#endif
> > > +#ifdef CONFIG_SCHED_MC
> > > +        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> > > +#endif
> > > +       { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> > > +        { NULL, },
> > > +};
> > > +
> > >  void __init smp_prepare_cpus(unsigned int max_cpus)
> > >  {
> > >         const struct cpu_operations *ops;
> > > @@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
> > >
> > >         init_cpu_topology();
> > >
> > > +       set_sched_topology(arm64_topology);
> > > +
> > >         this_cpu = smp_processor_id();
> > >         store_cpu_topology(this_cpu);
> > >         numa_store_cpu_info(this_cpu);
> > > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > > index 5f66648..2c823c0 100644
> > > --- a/include/linux/topology.h
> > > +++ b/include/linux/topology.h
> > > @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int
> > cpu)
> > >  }
> > >  #endif
> > >
> > > +#ifdef CONFIG_SCHED_CLUSTER
> > > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > > +{
> > > +       return topology_cluster_cpumask(cpu);
> > > +}
> > > +#endif
> > > +
> > >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> > >  {
> > >         return cpumask_of_node(cpu_to_node(cpu));
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 1a68a05..ae8ec910 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct
> > *p, int target)
> > >
> > >  #endif /* CONFIG_SCHED_SMT */
> > >
> > > +#ifdef CONFIG_SCHED_CLUSTER
> > > +/*
> > > + * Scan the local CLUSTER mask for idle CPUs.
> > > + */
> > > +static int select_idle_cluster(struct task_struct *p, int target)
> > > +{
> > > +       int cpu;
> > > +
> > > +       /* right now, no hardware with both cluster and smt to run */
> > > +       if (sched_smt_active())
> >
> > don't use smt static key but a dedicated one if needed
>
> Sure.
>
> >
> > > +               return -1;
> > > +
> > > +       for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
> > > +               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> > > +                       continue;
> > > +               if (available_idle_cpu(cpu))
> > > +                       return cpu;
> > > +       }
> > > +
> > > +       return -1;
> > > +}
> > > +
> > > +#else /* CONFIG_SCHED_CLUSTER */
> > > +
> > > +static inline int select_idle_cluster(struct task_struct *p, int target)
> > > +{
> > > +       return -1;
> > > +}
> > > +
> > > +#endif /* CONFIG_SCHED_CLUSTER */
> > > +
> > >  /*
> > >   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> > >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against
> > the
> > > @@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct *p,
> > int prev, int target)
> > >         if ((unsigned)i < nr_cpumask_bits)
> > >                 return i;
> > >
> > > +       i = select_idle_cluster(p, target);
> > > +       if ((unsigned)i < nr_cpumask_bits)
> > > +               return i;
> >
> > This is yet another loop in the fast wake up path.
> >
> > I'm curious to know which part of this patch really gives the perf improvement ?
> > -Is it the new sched domain level with a shorter interval that is then
> > used by Load balance to better spread task in the cluster and between
> > clusters ?
> > -Or this new loop in the wake up path which tries to keep threads in
> > the same cluster ? which is at the opposite of the rest of the
> > scheduler which tries to spread
>
> If I don't scan the cluster first for wake_affine, I see almost no
> hackbench change from the new sched_domain alone.
> For example:
> g=4 in hackbench on cpu0-cpu47 (two NUMA nodes)
> w/o patch: 17.7647 (average time over 10 hackbench runs)
> w/ the full patch: 10.4927
> w/ patch but drop select_idle_cluster(): 15.0931

And for the case with one numa node ?

I'd like to understand why this patch impacts the two-NUMA-node case so
much but not the single-NUMA-node case.

>
> So I don't think the hackbench improvement mainly comes from the shorter
> load-balance interval of the new cluster domain.
>
> What really matters is select_idle_cluster(), according to my tests.
>
> >
> > Also could the sched_feat(SIS_PROP) impacts significantly your
> > topology because it  breaks before looking for all cores in the LLC ?
> > And this new loop extends the number of tested core ?
>
> In this case, cluster must belong to LLC. Cluster is the child of LLC.

Yes. My point is: in select_idle_cpu, we don't always loop over all
CPUs, especially when you have a large number of CPUs in the LLC.
Instead, the loop can break after testing only 4 CPUs in some cases of
short idle time (like hackbench). I don't know how the CPUs are
numbered, but I can easily imagine that select_idle_cluster doesn't
loop over the same CPUs as the few that are then tested in
select_idle_cpu when it doesn't test all CPUs of the LLC.

>
> Maybe the code Valentin suggested in his reply is a good way to
> keep the code aligned with the existing select_idle_cpu():
>
> static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> -       struct sched_domain *this_sd;
> +       struct sched_domain *this_sd, *child = NULL;
>         u64 avg_cost, avg_idle;
>         u64 time;
>         int this = smp_processor_id();
> @@ -6150,14 +6150,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>
>         time = cpu_clock(this);
>
> -       cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +       do {
> +               /* XXX: sd should start as SMT's parent */
> +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +               if (child)
> +                       cpumask_andnot(cpus, cpus, sched_domain_span(child));
> +
> +               for_each_cpu_wrap(cpu, cpus, target) {
> +                       if (!--nr)
> +                               return -1;
> +                       if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> +                               break;
> +               }
>
> -       for_each_cpu_wrap(cpu, cpus, target) {
> -               if (!--nr)
> -                       return -1;
> -               if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> -                       break;
> -       }
> +               child = sd;
> +               sd = sd->parent;
> +       } while (sd && sd->flags & SD_SHARE_PKG_RESOURCES);
>
> >
> > > +
> > >         i = select_idle_cpu(p, sd, target);
> > >         if ((unsigned)i < nr_cpumask_bits)
> > >                 return i;
> > > --
> > > 2.7.4
> > >
>
> Thanks
> Barry
>


> > > |                                   +----+    L3     |         |         |
> > > |  +------+    +------+             |    |    tag    |         |         |
> > > |  |      |    |      |             |    |           |         |         |
> > > |  +------+    +------+             |    +-----------+         |         |
> > > |                                   |                          |         |
> > > +-----------------------------------+                          |         |
> > > +-----------------------------------+                          |         |
> > > |  +------+    +------+             +--------------------------+         |
> > > |  |      |    |      |             |   +-----------+          |         |
> > > |  +------+    +------+             |   |           |          |         |
> > > |                                   |   |    L3     |          |         |
> > > |  +------+    +------+             +---+    tag    |          |         |
> > > |  |      |    |      |             |   |           |          |         |
> > > |  +------+    +------+             |   +-----------+          |         |
> > > |                                   |                          |         |
> > > +-----------------------------------+                          |         |
> > > +-----------------------------------+                         ++         |
> > > |  +------+    +------+             +--------------------------+         |
> > > |  |      |    |      |             |  +-----------+           |         |
> > > |  +------+    +------+             |  |           |           |         |
> > > |                                   |  |    L3     |           |         |
> > > |  +------+    +------+             +--+    tag    |           |         |
> > > |  |      |    |      |             |  |           |           |         |
> > > |  +------+    +------+             |  +-----------+           |         |
> > > |                                   |                          +---------+
> > > +-----------------------------------+
> > >
> > > This patch adds the sched_domain for clusters. On kunpeng 920, without
> > > this patch, domain0 of cpu0 would be MC for cpu0-cpu23 with
> > > min_interval=24, max_interval=48; with this patch, MC becomes domain1,
> > > a new domain0 "CL" including cpu0-cpu3 is added with min_interval=4 and
> > > max_interval=8.
> > > This will affect load balance. For example, without this patch, while cpu0
> > > becomes idle, it will pull a task from cpu1-cpu15. With this patch, cpu0
> > > will try to pull a task from cpu1-cpu3 first. This will have much less
> > > overhead of task migration.
> > >
> > > On the other hand, while doing WAKE_AFFINE, this patch will try to find
> > > a core in the target cluster before scanning the llc domain.
> > > This means it will proactively use a core which has better affinity with
> > > target core at first.
> >
> > Which is at the opposite of what we are usually trying to do in the
> > fast wakeup path: trying to minimize resource sharing by finding an
> > idle core with all smt idle as an example
>
> In the wake_affine case, I guess we actually want some kind of
> resource sharing, such as the LLC, to get the waker and wakee closer

In wake_affine, we don't want to move outside the LLC, but within the
LLC we try to minimize resource sharing, for example by looking for a
core whose SMT siblings are all idle.
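That "minimize sharing inside the LLC" behaviour can be illustrated with a toy model (a simplified sketch, not the kernel's select_idle_core(); the CPU count, SMT layout and idle array are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of "minimize resource sharing inside the LLC": prefer a
 * core whose SMT siblings are ALL idle over a core that merely has one
 * idle thread.  Illustrative only; not the kernel's select_idle_core().
 */
#define NR_CPUS   8
#define SMT_WIDTH 2              /* cpus {0,1}, {2,3}, ... are siblings */

static bool cpu_idle[NR_CPUS];   /* mock per-CPU idle state */

/* Return the first CPU of a fully idle core, or -1 if none exists. */
static int find_fully_idle_core(void)
{
	int cpu, t;

	for (cpu = 0; cpu < NR_CPUS; cpu += SMT_WIDTH) {
		bool all_idle = true;

		for (t = 0; t < SMT_WIDTH; t++)
			if (!cpu_idle[cpu + t])
				all_idle = false;
		if (all_idle)
			return cpu;
	}
	return -1;
}
```

A core with one busy sibling is skipped even though it has an idle thread, which is exactly the "spread" bias of the fast path.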

> to each other. find_idlest_cpu() is really the opposite.
>
> So the real question is: is the LLC always the right scope for
> picking an idle sibling?

That's the eternal question: spread or gather

>
> In this case, 6 clusters are in the same LLC, but the hardware behaves
> differently within a single cluster and across clusters.
>
>
> >
> > >
> > > Not much benchmarking has been done yet, but here is a rough hackbench
> > > result. We run the command below with different -g parameters to
> > > increase system load, changing g from 1 to 4. For each value, we run
> > > the benchmark ten times and record the data to get the average time:
> > >
> > > First, we run hackbench in only one NUMA node(cpu0-cpu23):
> > > $ numactl -N 0 hackbench -p -T -l 100000 -g $1
> >
> > What is your ref tree ? v5.10-rcX or tip/sched/core ?
>
> Actually I was using the 5.9 release. That must seem weird,
> but the reason is that the disk driver hangs on
> my hardware with 5.10-rcX.

In fact there are several changes in v5.10 and tip/sched/core that
could help your topology.

>
> >
> > >
> > > g=1 (seen cpu utilization around 50% for each core)
> > > Running in threaded mode with 1 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > performance improvement w/ patch: -1.01%
> > >
> > > g=2 (seen cpu utilization around 70% for each core)
> > > Running in threaded mode with 2 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162 9.955=10.1006
> > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > performance improvement w/ patch: 4.08%
> > >
> > > g=3 (seen cpu utilization around 90% for each core)
> > > Running in threaded mode with 3 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674 15.721=15.7727
> > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946 14.895=14.9005
> > > performance improvement w/ patch: 5.53%
> > >
> > > g=4
> > > Running in threaded mode with 4 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
> > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
> > > performance improvement w/ patch: 1.22%
> > >
> > > After that, we run the same hackbench in both NUMA nodes(cpu0-cpu47):
> > > g=1
> > > w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
> > > w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
> > > performance improvement by patch: -2.2%
> > >
> > > g=2
> > > w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
> > > w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
> > > performance improvement by patch: 9.3%
> > >
> > > g=3
> > > w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235 11.849=11.9557
> > > w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
> > > performance improvement by patch: 20.2%
> > >
> > > g=4
> > > w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859 18.515=17.7647
> > > w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467 10.401=10.4927
> > > performance improvement by patch: 40.9%
> > >
> > > g=5
> > > w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073 25.819=27.4382
> > > w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954 13.907=13.9735
> > > performance improvement by patch: 49.1%
> > >
> > > It seems the patch can bring a huge improvement on hackbench, especially
> > > when we bind hackbench to all of cpu0-cpu47, compared to 5.53% while
> > > running on a single NUMA node (cpu0-cpu23).
> >
> > Interesting that this patch mainly impacts the NUMA case
> >
> > >
> > > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > > ---
> > >  arch/arm64/Kconfig       |  7 +++++++
> > >  arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
> > >  include/linux/topology.h |  7 +++++++
> > >  kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
> > >  4 files changed, 66 insertions(+)
> > >
> > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > index 6d23283..3583c26 100644
> > > --- a/arch/arm64/Kconfig
> > > +++ b/arch/arm64/Kconfig
> > > @@ -938,6 +938,13 @@ config SCHED_MC
> > >           making when dealing with multi-core CPU chips at a cost of slightly
> > >           increased overhead in some places. If unsure say N here.
> > >
> > > +config SCHED_CLUSTER
> > > +       bool "Cluster scheduler support"
> > > +       help
> > > +         Cluster scheduler support improves the CPU scheduler's decision
> > > +         making when dealing with machines that have clusters(sharing internal
> > > +         bus or sharing LLC cache tag). If unsure say N here.
> > > +
> > >  config SCHED_SMT
> > >         bool "SMT scheduler support"
> > >         help
> > > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > > index 355ee9e..5c8f026 100644
> > > --- a/arch/arm64/kernel/smp.c
> > > +++ b/arch/arm64/kernel/smp.c
> > > @@ -32,6 +32,7 @@
> > >  #include <linux/irq_work.h>
> > >  #include <linux/kexec.h>
> > >  #include <linux/kvm_host.h>
> > > +#include <linux/sched/topology.h>
> > >
> > >  #include <asm/alternative.h>
> > >  #include <asm/atomic.h>
> > > @@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
> > >         }
> > >  }
> > >
> > > +static struct sched_domain_topology_level arm64_topology[] = {
> > > +#ifdef CONFIG_SCHED_SMT
> > > +        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> > > +#endif
> > > +#ifdef CONFIG_SCHED_CLUSTER
> > > +       { cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
> > > +#endif
> > > +#ifdef CONFIG_SCHED_MC
> > > +        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> > > +#endif
> > > +       { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> > > +        { NULL, },
> > > +};
> > > +
> > >  void __init smp_prepare_cpus(unsigned int max_cpus)
> > >  {
> > >         const struct cpu_operations *ops;
> > > @@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
> > >
> > >         init_cpu_topology();
> > >
> > > +       set_sched_topology(arm64_topology);
> > > +
> > >         this_cpu = smp_processor_id();
> > >         store_cpu_topology(this_cpu);
> > >         numa_store_cpu_info(this_cpu);
> > > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > > index 5f66648..2c823c0 100644
> > > --- a/include/linux/topology.h
> > > +++ b/include/linux/topology.h
> > > @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
> > >  }
> > >  #endif
> > >
> > > +#ifdef CONFIG_SCHED_CLUSTER
> > > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > > +{
> > > +       return topology_cluster_cpumask(cpu);
> > > +}
> > > +#endif
> > > +
> > >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> > >  {
> > >         return cpumask_of_node(cpu_to_node(cpu));
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 1a68a05..ae8ec910 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct *p, int target)
> > >
> > >  #endif /* CONFIG_SCHED_SMT */
> > >
> > > +#ifdef CONFIG_SCHED_CLUSTER
> > > +/*
> > > + * Scan the local CLUSTER mask for idle CPUs.
> > > + */
> > > +static int select_idle_cluster(struct task_struct *p, int target)
> > > +{
> > > +       int cpu;
> > > +
> > > +       /* right now, no hardware with both cluster and smt to run */
> > > +       if (sched_smt_active())
> >
> > don't use smt static key but a dedicated one if needed
>
> Sure.
>
> >
> > > +               return -1;
> > > +
> > > +       for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
> > > +               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> > > +                       continue;
> > > +               if (available_idle_cpu(cpu))
> > > +                       return cpu;
> > > +       }
> > > +
> > > +       return -1;
> > > +}
> > > +
> > > +#else /* CONFIG_SCHED_CLUSTER */
> > > +
> > > +static inline int select_idle_cluster(struct task_struct *p, int target)
> > > +{
> > > +       return -1;
> > > +}
> > > +
> > > +#endif /* CONFIG_SCHED_CLUSTER */
> > > +
> > >  /*
> > >   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> > >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
> > > @@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> > >         if ((unsigned)i < nr_cpumask_bits)
> > >                 return i;
> > >
> > > +       i = select_idle_cluster(p, target);
> > > +       if ((unsigned)i < nr_cpumask_bits)
> > > +               return i;
> >
> > This is yet another loop in the fast wake up path.
> >
> > I'm curious to know which part of this patch really gives the perf improvement ?
> > -Is it the new sched domain level with a shorter interval that is then
> > used by Load balance to better spread task in the cluster and between
> > clusters ?
> > -Or this new loop in the wake up path which tries to keep threads in
> > the same cluster ? which is at the opposite of the rest of the
> > scheduler which tries to spread
>
> If I don't scan the cluster first for wake_affine, I see almost no large
> hackbench change from the new sched_domain alone.
> For example:
> g=4 in hackbench on cpu0-cpu47 (two NUMA nodes)
> w/o patch: 17.7647 (average time over 10 hackbench runs)
> w/ the full patch: 10.4927
> w/ patch but drop select_idle_cluster(): 15.0931

And for the case with one NUMA node?

I'd like to understand why this patch has a large impact on the
two-node case but not on the single NUMA node case.

>
> So I don't think the hackbench improvement mainly comes from the shorter
> load balance interval of the new cluster domain.
>
> What really matters is select_idle_cluster(), according to my tests.
>
> >
> > Also could the sched_feat(SIS_PROP) impacts significantly your
> > topology because it  breaks before looking for all cores in the LLC ?
> > And this new loop extends the number of tested core ?
>
> In this case, the cluster must belong to the LLC; the cluster domain is a child of the LLC domain.

Yes. My point is: in select_idle_cpu(), we don't always loop over all
CPUs, especially when you have a large number of CPUs in the LLC.
Instead, the loop can break after testing only 4 CPUs in some cases of
short idle time (like hackbench). I don't know how the CPUs are
numbered, but I can easily imagine that select_idle_cluster() doesn't
loop over the same CPUs as the few that are then tested in
select_idle_cpu() when it doesn't test all CPUs of the LLC.
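A toy model makes this concrete (a sketch with invented CPU counts and a plain bool array, not the kernel's cpumask/SIS_PROP code): with a small scan budget, the LLC-wide scan can give up before reaching an idle CPU that the bounded cluster scan would have found.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the two scans: 24 CPUs share one LLC, grouped into
 * clusters of 4.  Invented layout, illustrative only.
 */
#define LLC_CPUS     24
#define CLUSTER_SIZE 4

static bool idle[LLC_CPUS];      /* mock per-CPU idle state */

/* Budget-limited LLC-wide scan wrapping from target (models SIS_PROP). */
static int scan_llc(int target, int nr)
{
	int i;

	for (i = 0; i < LLC_CPUS && nr > 0; i++, nr--) {
		int cpu = (target + i) % LLC_CPUS;

		if (idle[cpu])
			return cpu;
	}
	return -1;
}

/* Unbounded scan restricted to target's 4-CPU cluster. */
static int scan_cluster(int target)
{
	int base = target / CLUSTER_SIZE * CLUSTER_SIZE;
	int i;

	for (i = 0; i < CLUSTER_SIZE; i++) {
		int cpu = base + (target + i) % CLUSTER_SIZE;

		if (idle[cpu])
			return cpu;
	}
	return -1;
}
```

With a budget of 2 and only cpu3 idle, scan_llc(0, 2) tests cpu0 and cpu1 and gives up, while scan_cluster(0) still finds cpu3: the two paths can end up testing different CPUs, which is the point above.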

>
> Maybe the code Valentin suggested in his reply is a good way to
> keep the code aligned with the existing select_idle_cpu():
>
> static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> -       struct sched_domain *this_sd;
> +       struct sched_domain *this_sd, *child = NULL;
>         u64 avg_cost, avg_idle;
>         u64 time;
>         int this = smp_processor_id();
> @@ -6150,14 +6150,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>
>         time = cpu_clock(this);
>
> -       cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +       do {
> +               /* XXX: sd should start as SMT's parent */
> +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +               if (child)
> +                       cpumask_andnot(cpus, cpus, sched_domain_span(child));
> +
> +               for_each_cpu_wrap(cpu, cpus, target) {
> +                       if (!--nr)
> +                               return -1;
> +                       if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> +                               break;
> +               }
>
> -       for_each_cpu_wrap(cpu, cpus, target) {
> -               if (!--nr)
> -                       return -1;
> -               if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> -                       break;
> -       }
> +               child = sd;
> +               sd = sd->parent;
> +       } while (sd && sd->flags & SD_SHARE_PKG_RESOURCES);
>
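For reference, the progressive-widening loop in the diff above can be sketched outside the kernel like this (a toy model using plain 64-bit masks for domain spans; the real code walks sd->parent and uses the cpumask helpers):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the widening scan: look for an idle CPU in the innermost
 * domain (the cluster) first, then widen to each parent domain while
 * masking out CPUs already scanned.  Domain spans are modeled as
 * 64-bit masks; this is not the kernel's sched_domain API.
 */
static int scan_widening(const uint64_t *levels, int nlevels,
			 uint64_t idle_mask)
{
	uint64_t scanned = 0;
	int l, cpu;

	for (l = 0; l < nlevels; l++) {
		/* like cpumask_andnot(): skip CPUs of the child domain */
		uint64_t cpus = levels[l] & ~scanned;

		for (cpu = 0; cpu < 64; cpu++)
			if (((cpus >> cpu) & 1) && ((idle_mask >> cpu) & 1))
				return cpu;
		scanned |= levels[l];
	}
	return -1;
}
```

An idle CPU inside the cluster wins; otherwise the scan widens to the rest of the LLC without re-testing the cluster's CPUs.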
> >
> > > +
> > >         i = select_idle_cpu(p, sd, target);
> > >         if ((unsigned)i < nr_cpumask_bits)
> > >                 return i;
> > > --
> > > 2.7.4
> > >
>
> Thanks
> Barry
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-02 10:16         ` Vincent Guittot
@ 2020-12-02 10:45           ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-02 10:45 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Wednesday, December 2, 2020 11:17 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> On Wed, 2 Dec 2020 at 10:20, Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > Sent: Wednesday, December 2, 2020 9:27 PM
> > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>;
> > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> <dietmar.eggemann@arm.com>;
> > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > >
> > > On Tue, 1 Dec 2020 at 04:04, Barry Song <song.bao.hua@hisilicon.com> wrote:
> > > >
> > > > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > > > cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> > > > has local L3 tag. On the other hand, each cluster will share some
> > > > internal system bus. This means cache coherence overhead inside one cluster
> > > > is much less than the overhead across clusters.
> > > >
> > > > +-----------------------------------+                          +---------+
> > > > |  +------+    +------+            +---------------------------+         |
> > > > |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> > > > |  +------+    +------+             |    |           |         |         |
> > > > |                                   +----+    L3     |         |         |
> > > > |  +------+    +------+   cluster   |    |    tag    |         |         |
> > > > |  | CPU2 |    | CPU3 |             |    |           |         |         |
> > > > |  +------+    +------+             |    +-----------+         |         |
> > > > |                                   |                          |         |
> > > > +-----------------------------------+                          |         |
> > > > +-----------------------------------+                          |         |
> > > > |  +------+    +------+             +--------------------------+         |
> > > > |  |      |    |      |             |    +-----------+         |         |
> > > > |  +------+    +------+             |    |           |         |         |
> > > > |                                   |    |    L3     |         |         |
> > > > |  +------+    +------+             +----+    tag    |         |         |
> > > > |  |      |    |      |             |    |           |         |         |
> > > > |  +------+    +------+             |    +-----------+         |         |
> > > > |                                   |                          |         |
> > > > +-----------------------------------+                          |   L3    |
> > > >                                                                |   data  |
> > > > +-----------------------------------+                          |         |
> > > > |  +------+    +------+             |    +-----------+         |         |
> > > > |  |      |    |      |             |    |           |         |         |
> > > > |  +------+    +------+             +----+    L3     |         |         |
> > > > |                                   |    |    tag    |         |         |
> > > > |  +------+    +------+             |    |           |         |         |
> > > > |  |      |    |      |            ++    +-----------+         |         |
> > > > |  +------+    +------+            |---------------------------+         |
> > > > +-----------------------------------|                          |         |
> > > > +-----------------------------------|                          |         |
> > > > |  +------+    +------+            +---------------------------+         |
> > > > |  |      |    |      |             |    +-----------+         |         |
> > > > |  +------+    +------+             |    |           |         |         |
> > > > |                                   +----+    L3     |         |         |
> > > > |  +------+    +------+             |    |    tag    |         |         |
> > > > |  |      |    |      |             |    |           |         |         |
> > > > |  +------+    +------+             |    +-----------+         |         |
> > > > |                                   |                          |         |
> > > > +-----------------------------------+                          |         |
> > > > +-----------------------------------+                          |         |
> > > > |  +------+    +------+             +--------------------------+         |
> > > > |  |      |    |      |             |   +-----------+          |         |
> > > > |  +------+    +------+             |   |           |          |         |
> > > > |                                   |   |    L3     |          |         |
> > > > |  +------+    +------+             +---+    tag    |          |         |
> > > > |  |      |    |      |             |   |           |          |         |
> > > > |  +------+    +------+             |   +-----------+          |         |
> > > > |                                   |                          |         |
> > > > +-----------------------------------+                          |         |
> > > > +-----------------------------------+                         ++         |
> > > > |  +------+    +------+             +--------------------------+         |
> > > > |  |      |    |      |             |  +-----------+           |         |
> > > > |  +------+    +------+             |  |           |           |         |
> > > > |                                   |  |    L3     |           |         |
> > > > |  +------+    +------+             +--+    tag    |           |         |
> > > > |  |      |    |      |             |  |           |           |         |
> > > > |  +------+    +------+             |  +-----------+           |         |
> > > > |                                   |                          +---------+
> > > > +-----------------------------------+
> > > >
> > > > This patch adds the sched_domain for clusters. On kunpeng 920, without
> > > > this patch, domain0 of cpu0 would be MC for cpu0-cpu23 with
> > > > min_interval=24, max_interval=48; with this patch, MC becomes domain1,
> > > > a new domain0 "CL" including cpu0-cpu3 is added with min_interval=4 and
> > > > max_interval=8.
> > > > This will affect load balance. For example, without this patch, while cpu0
> > > > becomes idle, it will pull a task from cpu1-cpu15. With this patch, cpu0
> > > > will try to pull a task from cpu1-cpu3 first. This will have much less
> > > > overhead of task migration.
> > > >
> > > > On the other hand, while doing WAKE_AFFINE, this patch will try to find
> > > > a core in the target cluster before scanning the llc domain.
> > > > This means it will proactively use a core which has better affinity with
> > > > target core at first.
> > >
> > > Which is at the opposite of what we are usually trying to do in the
> > > fast wakeup path: trying to minimize resource sharing by finding an
> > > idle core with all smt idle as an example
> >
> > In the wake_affine case, I guess we actually want some kind of
> > resource sharing, such as the LLC, to get the waker and wakee closer
> 
> In wake_affine, we don't want to move outside the LLC, but within the
> LLC we try to minimize resource sharing, for example by looking for a
> core whose SMT siblings are all idle.
> 
> > to each other. find_idlest_cpu() is really the opposite.
> >
> > So the real question is: is the LLC always the right scope for
> > picking an idle sibling?
> 
> That's the eternal question: spread or gather

Indeed.

> 
> >
> > In this case, 6 clusters are in the same LLC, but the hardware behaves
> > differently within a single cluster and across clusters.
> >
> >
> > >
> > > >
> > > > Not much benchmarking has been done yet, but here is a rough hackbench
> > > > result. We run the command below with different -g parameters to
> > > > increase system load, changing g from 1 to 4. For each value, we run
> > > > the benchmark ten times and record the data to get the average time:
> > > >
> > > > First, we run hackbench in only one NUMA node(cpu0-cpu23):
> > > > $ numactl -N 0 hackbench -p -T -l 100000 -g $1
> > >
> > > What is your ref tree ? v5.10-rcX or tip/sched/core ?
> >
> > Actually I was using the 5.9 release. That must seem weird,
> > but the reason is that the disk driver hangs on
> > my hardware with 5.10-rcX.
> 
> In fact there are several changes in v5.10 and tip/sched/core that
> could help your topology.

Will figure out some way to try.

> 
> >
> > >
> > > >
> > > > g=1 (seen cpu utilization around 50% for each core)
> > > > Running in threaded mode with 1 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > > performance improvement w/ patch: -1.01%
> > > >
> > > > g=2 (seen cpu utilization around 70% for each core)
> > > > Running in threaded mode with 2 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162 9.955=10.1006
> > > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > > performance improvement w/ patch: 4.08%
> > > >
> > > > g=3 (seen cpu utilization around 90% for each core)
> > > > Running in threaded mode with 3 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674 15.721=15.7727
> > > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946 14.895=14.9005
> > > > performance improvement w/ patch: 5.53%
> > > >
> > > > g=4
> > > > Running in threaded mode with 4 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
> > > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
> > > > performance improvement w/ patch: 1.22%
> > > >
> > > > After that, we run the same hackbench in both NUMA nodes(cpu0-cpu47):
> > > > g=1
> > > > w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
> > > > w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
> > > > performance improvement by patch: -2.2%
> > > >
> > > > g=2
> > > > w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
> > > > w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
> > > > performance improvement by patch: 9.3%
> > > >
> > > > g=3
> > > > w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235 11.849=11.9557
> > > > w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
> > > > performance improvement by patch: 20.2%
> > > >
> > > > g=4
> > > > w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859
> > > 18.515=17.7647
> > > > w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467
> > > 10.401=10.4927
> > > > performance improvement by patch: 40.9%
> > > >
> > > > g=5
> > > > w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073
> > > 25.819=27.4382
> > > > w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954
> > > 13.907=13.9735
> > > > performance improvement by patch: 49.1%
> > > >
> > > > It seems the patch can bring a huge improvement on hackbench, especially
> > > > when we bind hackbench to all of cpu0-cpu47, compared to the 5.53% seen
> > > > while running on a single NUMA node (cpu0-cpu23)
> > >
> > > Interesting that this patch mainly impacts the numa case
> > >
> > > >
> > > > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > > > ---
> > > >  arch/arm64/Kconfig       |  7 +++++++
> > > >  arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
> > > >  include/linux/topology.h |  7 +++++++
> > > >  kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
> > > >  4 files changed, 66 insertions(+)
> > > >
> > > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > > index 6d23283..3583c26 100644
> > > > --- a/arch/arm64/Kconfig
> > > > +++ b/arch/arm64/Kconfig
> > > > @@ -938,6 +938,13 @@ config SCHED_MC
> > > >           making when dealing with multi-core CPU chips at a cost of slightly
> > > >           increased overhead in some places. If unsure say N here.
> > > >
> > > > +config SCHED_CLUSTER
> > > > +       bool "Cluster scheduler support"
> > > > +       help
> > > > +         Cluster scheduler support improves the CPU scheduler's decision
> > > > +         making when dealing with machines that have clusters(sharing
> internal
> > > > +         bus or sharing LLC cache tag). If unsure say N here.
> > > > +
> > > >  config SCHED_SMT
> > > >         bool "SMT scheduler support"
> > > >         help
> > > > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > > > index 355ee9e..5c8f026 100644
> > > > --- a/arch/arm64/kernel/smp.c
> > > > +++ b/arch/arm64/kernel/smp.c
> > > > @@ -32,6 +32,7 @@
> > > >  #include <linux/irq_work.h>
> > > >  #include <linux/kexec.h>
> > > >  #include <linux/kvm_host.h>
> > > > +#include <linux/sched/topology.h>
> > > >
> > > >  #include <asm/alternative.h>
> > > >  #include <asm/atomic.h>
> > > > @@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
> > > >         }
> > > >  }
> > > >
> > > > +static struct sched_domain_topology_level arm64_topology[] = {
> > > > +#ifdef CONFIG_SCHED_SMT
> > > > +        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> > > > +#endif
> > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > +       { cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
> > > > +#endif
> > > > +#ifdef CONFIG_SCHED_MC
> > > > +        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> > > > +#endif
> > > > +       { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> > > > +        { NULL, },
> > > > +};
> > > > +
> > > >  void __init smp_prepare_cpus(unsigned int max_cpus)
> > > >  {
> > > >         const struct cpu_operations *ops;
> > > > @@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
> > > >
> > > >         init_cpu_topology();
> > > >
> > > > +       set_sched_topology(arm64_topology);
> > > > +
> > > >         this_cpu = smp_processor_id();
> > > >         store_cpu_topology(this_cpu);
> > > >         numa_store_cpu_info(this_cpu);
> > > > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > > > index 5f66648..2c823c0 100644
> > > > --- a/include/linux/topology.h
> > > > +++ b/include/linux/topology.h
> > > > @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int
> > > cpu)
> > > >  }
> > > >  #endif
> > > >
> > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > > > +{
> > > > +       return topology_cluster_cpumask(cpu);
> > > > +}
> > > > +#endif
> > > > +
> > > >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> > > >  {
> > > >         return cpumask_of_node(cpu_to_node(cpu));
> > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > index 1a68a05..ae8ec910 100644
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct
> > > *p, int target)
> > > >
> > > >  #endif /* CONFIG_SCHED_SMT */
> > > >
> > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > +/*
> > > > + * Scan the local CLUSTER mask for idle CPUs.
> > > > + */
> > > > +static int select_idle_cluster(struct task_struct *p, int target)
> > > > +{
> > > > +       int cpu;
> > > > +
> > > > +       /* right now, no hardware with both cluster and smt to run */
> > > > +       if (sched_smt_active())
> > >
> > > don't use smt static key but a dedicated one if needed
> >
> > Sure.
> >
> > >
> > > > +               return -1;
> > > > +
> > > > +       for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
> > > > +               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> > > > +                       continue;
> > > > +               if (available_idle_cpu(cpu))
> > > > +                       return cpu;
> > > > +       }
> > > > +
> > > > +       return -1;
> > > > +}
> > > > +
> > > > +#else /* CONFIG_SCHED_CLUSTER */
> > > > +
> > > > +static inline int select_idle_cluster(struct task_struct *p, int target)
> > > > +{
> > > > +       return -1;
> > > > +}
> > > > +
> > > > +#endif /* CONFIG_SCHED_CLUSTER */
> > > > +
> > > >  /*
> > > >   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> > > >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against
> > > the
> > > > @@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct
> *p,
> > > int prev, int target)
> > > >         if ((unsigned)i < nr_cpumask_bits)
> > > >                 return i;
> > > >
> > > > +       i = select_idle_cluster(p, target);
> > > > +       if ((unsigned)i < nr_cpumask_bits)
> > > > +               return i;
> > >
> > > This is yet another loop in the fast wake up path.
> > >
> > > I'm curious to know which part of this patch really gives the perf improvement ?
> > > -Is it the new sched domain level with a shorter interval that is then
> > > used by Load balance to better spread task in the cluster and between
> > > clusters ?
> > > -Or this new loop in the wake up path which tries to keep threads in
> > > the same cluster ? which is at the opposite of the rest of the
> > > scheduler which tries to spread
> >
> > If I don't scan the cluster first for wake_affine, I see almost no large
> > hackbench change from the new sched_domain.
> > For example:
> > g=4 in hackbench on cpu0-cpu47(two numa)
> > w/o patch: 17.7647 (average time in 10 times of hackbench)
> > w/ the full patch: 10.4927
> > w/ patch but drop select_idle_cluster(): 15.0931
> 
> And for the case with one numa node ?

That one is rather frustrating, as it gets worse:

g=1
Running in threaded mode with 1 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
w/ but dropped select_idle_cluster:
     7.816 7.589 7.319 7.556 7.443 7.459 7.636 7.427 7.425 7.395=7.5065

g=2
Running in threaded mode with 2 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
9.955=10.1006
w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
w/ but dropped select_idle_cluster:
     10.222 10.078 10.063 10.317 9.963 10.060 10.089 9.934 10.152 10.077=10.0955

g=3
Running in threaded mode with 3 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
15.721=15.7727
w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
14.895=14.9005
w/ but dropped select_idle_cluster(getting worse than w/o):
     16.892 16.962 17.248 17.392 17.336 17.705 17.113 17.633 17.477
17.378=17.3136

g=4
Running in threaded mode with 4 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
21.090=20.8187
w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
20.452=20.5647
w/ but dropped select_idle_cluster(getting worse than w/o):
     24.075 24.122 24.243 24.000 24.223 23.791 23.246 24.904 23.990
24.431=24.1025

> 
> I'd like to understand why this patch impacts the NUMA case so much but
> not the single NUMA node case.
> 
> >
> > So I don't think the hackbench improvement mainly comes from the shorter
> > load balance interval of the new cluster domain.
> >
> > What does really matter is select_idle_cluster() according to my tests.
> >
> > >
> > > Also could the sched_feat(SIS_PROP) impacts significantly your
> > > topology because it  breaks before looking for all cores in the LLC ?
> > > And this new loop extends the number of tested core ?
> >
> > In this case, cluster must belong to LLC. Cluster is the child of LLC.
> 
> Yes. My point is: in select_idle_cpu, we don't always loop over all
> CPUs, especially when you have a large number of CPUs in the LLC.
> Instead, the loop can break after testing only 4 CPUs in some cases of
> short idle time (like hackbench). I don't know how the CPUs are
> numbered, but I can easily imagine that select_idle_cluster doesn't
> loop over the same CPUs as the few that are then tested in
> select_idle_cpu when it doesn't test all CPUs of the LLC.
> 
> >
> > Maybe the code Valentin suggested in his reply is a good way to
> > keep the code align with the existing select_idle_cpu():
> >
> > static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd,
> int target)
> >  {
> >         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> > -       struct sched_domain *this_sd;
> > +       struct sched_domain *this_sd, *child = NULL;
> >         u64 avg_cost, avg_idle;
> >         u64 time;
> >         int this = smp_processor_id();
> > @@ -6150,14 +6150,22 @@ static int select_idle_cpu(struct task_struct *p,
> struct sched_domain *sd, int t
> >
> >         time = cpu_clock(this);
> >
> > -       cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +       do {
> > +               /* XXX: sd should start as SMT's parent */
> > +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +               if (child)
> > +                       cpumask_andnot(cpus, cpus, sched_domain_span(child));
> > +
> > +               for_each_cpu_wrap(cpu, cpus, target) {
> > +                       if (!--nr)
> > +                               return -1;
> > +                       if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> > +                               break;
> > +               }
> >
> > -       for_each_cpu_wrap(cpu, cpus, target) {
> > -               if (!--nr)
> > -                       return -1;
> > -               if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> > -                       break;
> > -       }
> > +               child = sd;
> > +               sd = sd->parent;
> > +       } while (sd && sd->flags & SD_SHARE_PKG_RESOURCES);
> >
> > >
> > > > +
> > > >         i = select_idle_cpu(p, sd, target);
> > > >         if ((unsigned)i < nr_cpumask_bits)
> > > >                 return i;
> > > > --
> > > > 2.7.4
> > > >
> >

Thanks
Barry


^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
@ 2020-12-02 10:45           ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-02 10:45 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Juri Lelli, Mark Rutland, Zengtao (B),
	Peter Zijlstra, Catalin Marinas, Jonathan Cameron,
	Rafael J. Wysocki, linux-kernel, Steven Rostedt,
	Dietmar Eggemann, Ben Segall, ACPI Devel Maling List,
	Ingo Molnar, Linuxarm, Mel Gorman, xuwei (O),
	gregkh, Will Deacon, Valentin Schneider, LAK, Cc: Len Brown



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Wednesday, December 2, 2020 11:17 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> On Wed, 2 Dec 2020 at 10:20, Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > Sent: Wednesday, December 2, 2020 9:27 PM
> > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>;
> > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> <dietmar.eggemann@arm.com>;
> > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > >
> > > On Tue, 1 Dec 2020 at 04:04, Barry Song <song.bao.hua@hisilicon.com> wrote:
> > > >
> > > > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > > > cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> > > > has local L3 tag. On the other hand, each cluster will share some
> > > > internal system bus. This means cache coherence overhead inside one cluster
> > > > is much less than the overhead across clusters.
> > > >
> > > > +-----------------------------------+                          +---------+
> > > > |  +------+    +------+            +---------------------------+         |
> > > > |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> > > > |  +------+    +------+             |    |           |         |         |
> > > > |                                   +----+    L3     |         |         |
> > > > |  +------+    +------+   cluster   |    |    tag    |         |         |
> > > > |  | CPU2 |    | CPU3 |             |    |           |         |         |
> > > > |  +------+    +------+             |    +-----------+         |         |
> > > > |                                   |                          |         |
> > > > +-----------------------------------+                          |         |
> > > > +-----------------------------------+                          |         |
> > > > |  +------+    +------+             +--------------------------+         |
> > > > |  |      |    |      |             |    +-----------+         |         |
> > > > |  +------+    +------+             |    |           |         |         |
> > > > |                                   |    |    L3     |         |         |
> > > > |  +------+    +------+             +----+    tag    |         |         |
> > > > |  |      |    |      |             |    |           |         |         |
> > > > |  +------+    +------+             |    +-----------+         |         |
> > > > |                                   |                          |         |
> > > > +-----------------------------------+                          |   L3    |
> > > >                                                                |   data  |
> > > > +-----------------------------------+                          |         |
> > > > |  +------+    +------+             |    +-----------+         |         |
> > > > |  |      |    |      |             |    |           |         |         |
> > > > |  +------+    +------+             +----+    L3     |         |         |
> > > > |                                   |    |    tag    |         |         |
> > > > |  +------+    +------+             |    |           |         |         |
> > > > |  |      |    |      |            ++    +-----------+         |         |
> > > > |  +------+    +------+            |---------------------------+         |
> > > > +-----------------------------------|                          |         |
> > > > +-----------------------------------|                          |         |
> > > > |  +------+    +------+            +---------------------------+         |
> > > > |  |      |    |      |             |    +-----------+         |         |
> > > > |  +------+    +------+             |    |           |         |         |
> > > > |                                   +----+    L3     |         |         |
> > > > |  +------+    +------+             |    |    tag    |         |         |
> > > > |  |      |    |      |             |    |           |         |         |
> > > > |  +------+    +------+             |    +-----------+         |         |
> > > > |                                   |                          |         |
> > > > +-----------------------------------+                          |         |
> > > > +-----------------------------------+                          |         |
> > > > |  +------+    +------+             +--------------------------+         |
> > > > |  |      |    |      |             |   +-----------+          |         |
> > > > |  +------+    +------+             |   |           |          |         |
> > > > |                                   |   |    L3     |          |         |
> > > > |  +------+    +------+             +---+    tag    |          |         |
> > > > |  |      |    |      |             |   |           |          |         |
> > > > |  +------+    +------+             |   +-----------+          |         |
> > > > |                                   |                          |         |
> > > > +-----------------------------------+                          |         |
> > > > +-----------------------------------+                         ++         |
> > > > |  +------+    +------+             +--------------------------+         |
> > > > |  |      |    |      |             |  +-----------+           |         |
> > > > |  +------+    +------+             |  |           |           |         |
> > > > |                                   |  |    L3     |           |         |
> > > > |  +------+    +------+             +--+    tag    |           |         |
> > > > |  |      |    |      |             |  |           |           |         |
> > > > |  +------+    +------+             |  +-----------+           |         |
> > > > |                                   |                          +---------+
> > > > +-----------------------------------+
> > > >
> > > > This patch adds the sched_domain for clusters. On kunpeng 920, without
> > > > this patch, domain0 of cpu0 would be MC for cpu0-cpu23 with
> > > > min_interval=24, max_interval=48; with this patch, MC becomes domain1,
> > > > a new domain0 "CL" including cpu0-cpu3 is added with min_interval=4 and
> > > > max_interval=8.
> > > > This will affect load balancing. For example, without this patch, when
> > > > cpu0 becomes idle, it will pull a task from cpu1-cpu15. With this patch,
> > > > cpu0 will try to pull a task from cpu1-cpu3 first, which carries much
> > > > lower task migration overhead.
> > > >
> > > > On the other hand, while doing WAKE_AFFINE, this patch will try to find
> > > > a core in the target cluster before scanning the llc domain.
> > > > This means it will proactively use a core which has better affinity with
> > > > target core at first.
> > >
> > > Which is at the opposite of what we are usually trying to do in the
> > > fast wakeup path: trying to minimize resource sharing by finding an
> > > idle core with all smt idle as an example
> >
> > In the wake_affine case, I guess we actually want some kind of
> > resource sharing such as LLC so that the waker and wakee get closer
> 
> In wake_affine, we don't want to move outside the LLC, but then within
> the LLC we try to minimize resource sharing, e.g. looking for a core
> fully idle for SMT
> 
> > to each other. find_idlest_cpu() is really the opposite.
> >
> > So the real question is: is the LLC always the right scope for
> > picking an idle sibling?
> 
> That's the eternal question: spread or gather

Indeed.

> 
> >
> > In this case, the 6 clusters are in the same LLC, but the hardware
> > behaves differently inside a single cluster than across clusters.
> >
> >
> > >
> > > >
> > > > Not much benchmark has been done yet. but here is a rough hackbench
> > > > result.
> > > > we run the below command with different -g parameter to increase system
> load
> > > > by changing g from 1 to 4, for each one of 1-4, we run the benchmark ten
> times
> > > > and record the data to get the average time:
> > > >
> > > > First, we run hackbench in only one NUMA node(cpu0-cpu23):
> > > > $ numactl -N 0 hackbench -p -T -l 100000 -g $1
> > >
> > > What is your ref tree ? v5.10-rcX or tip/sched/core ?
> >
> > Actually I was using the 5.9 release. That must seem weird, but the
> > reason is that the disk driver hangs on my hardware in 5.10-rcX.
> 
> In fact there are several changes in v5.10 and tip/sched/core that
> could help your topology

Will figure out some way to try.

> 
> >
> > >
> > > >
> > > > g=1 (seen cpu utilization around 50% for each core)
> > > > Running in threaded mode with 1 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > > performance improvement w/ patch: -1.01%
> > > >
> > > > g=2 (seen cpu utilization around 70% for each core)
> > > > Running in threaded mode with 2 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > > 9.955=10.1006
> > > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > > performance improvement w/ patch: 4.08%
> > > >
> > > > g=3 (seen cpu utilization around 90% for each core)
> > > > Running in threaded mode with 3 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > > 15.721=15.7727
> > > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > > 14.895=14.9005
> > > > performance improvement w/ patch: 5.53%
> > > >
> > > > g=4
> > > > Running in threaded mode with 4 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > > 21.090=20.8187
> > > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > > 20.452=20.5647
> > > > performance improvement w/ patch: 1.22%
> > > >
> > > > After that, we run the same hackbench in both NUMA nodes(cpu0-cpu47):
> > > > g=1
> > > > w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
> > > > w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
> > > > performance improvement by patch: -2.2%
> > > >
> > > > g=2
> > > > w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
> > > > w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
> > > > performance improvement by patch: 9.3%
> > > >
> > > > g=3
> > > > w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235
> > > 11.849=11.9557
> > > > w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
> > > > performance improvement by patch: 20.2%
> > > >
> > > > g=4
> > > > w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859
> > > 18.515=17.7647
> > > > w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467
> > > 10.401=10.4927
> > > > performance improvement by patch: 40.9%
> > > >
> > > > g=5
> > > > w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073
> > > 25.819=27.4382
> > > > w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954
> > > 13.907=13.9735
> > > > performance improvement by patch: 49.1%
> > > >
> > > > It seems the patch can bring a huge improvement on hackbench, especially
> > > > when we bind hackbench to all of cpu0-cpu47, compared to the 5.53% seen
> > > > while running on a single NUMA node (cpu0-cpu23)
> > >
> > > Interesting that this patch mainly impacts the numa case
> > >
> > > >
> > > > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > > > ---
> > > >  arch/arm64/Kconfig       |  7 +++++++
> > > >  arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
> > > >  include/linux/topology.h |  7 +++++++
> > > >  kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
> > > >  4 files changed, 66 insertions(+)
> > > >
> > > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > > index 6d23283..3583c26 100644
> > > > --- a/arch/arm64/Kconfig
> > > > +++ b/arch/arm64/Kconfig
> > > > @@ -938,6 +938,13 @@ config SCHED_MC
> > > >           making when dealing with multi-core CPU chips at a cost of slightly
> > > >           increased overhead in some places. If unsure say N here.
> > > >
> > > > +config SCHED_CLUSTER
> > > > +       bool "Cluster scheduler support"
> > > > +       help
> > > > +         Cluster scheduler support improves the CPU scheduler's decision
> > > > +         making when dealing with machines that have clusters(sharing
> internal
> > > > +         bus or sharing LLC cache tag). If unsure say N here.
> > > > +
> > > >  config SCHED_SMT
> > > >         bool "SMT scheduler support"
> > > >         help
> > > > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > > > index 355ee9e..5c8f026 100644
> > > > --- a/arch/arm64/kernel/smp.c
> > > > +++ b/arch/arm64/kernel/smp.c
> > > > @@ -32,6 +32,7 @@
> > > >  #include <linux/irq_work.h>
> > > >  #include <linux/kexec.h>
> > > >  #include <linux/kvm_host.h>
> > > > +#include <linux/sched/topology.h>
> > > >
> > > >  #include <asm/alternative.h>
> > > >  #include <asm/atomic.h>
> > > > @@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
> > > >         }
> > > >  }
> > > >
> > > > +static struct sched_domain_topology_level arm64_topology[] = {
> > > > +#ifdef CONFIG_SCHED_SMT
> > > > +        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> > > > +#endif
> > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > +       { cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
> > > > +#endif
> > > > +#ifdef CONFIG_SCHED_MC
> > > > +        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> > > > +#endif
> > > > +       { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> > > > +        { NULL, },
> > > > +};
> > > > +
> > > >  void __init smp_prepare_cpus(unsigned int max_cpus)
> > > >  {
> > > >         const struct cpu_operations *ops;
> > > > @@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
> > > >
> > > >         init_cpu_topology();
> > > >
> > > > +       set_sched_topology(arm64_topology);
> > > > +
> > > >         this_cpu = smp_processor_id();
> > > >         store_cpu_topology(this_cpu);
> > > >         numa_store_cpu_info(this_cpu);
> > > > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > > > index 5f66648..2c823c0 100644
> > > > --- a/include/linux/topology.h
> > > > +++ b/include/linux/topology.h
> > > > @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int
> > > cpu)
> > > >  }
> > > >  #endif
> > > >
> > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > > > +{
> > > > +       return topology_cluster_cpumask(cpu);
> > > > +}
> > > > +#endif
> > > > +
> > > >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> > > >  {
> > > >         return cpumask_of_node(cpu_to_node(cpu));
> > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > index 1a68a05..ae8ec910 100644
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct
> > > *p, int target)
> > > >
> > > >  #endif /* CONFIG_SCHED_SMT */
> > > >
> > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > +/*
> > > > + * Scan the local CLUSTER mask for idle CPUs.
> > > > + */
> > > > +static int select_idle_cluster(struct task_struct *p, int target)
> > > > +{
> > > > +       int cpu;
> > > > +
> > > > +       /* right now, no hardware with both cluster and smt to run */
> > > > +       if (sched_smt_active())
> > >
> > > don't use smt static key but a dedicated one if needed
> >
> > Sure.
> >
> > >
> > > > +               return -1;
> > > > +
> > > > +       for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
> > > > +               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> > > > +                       continue;
> > > > +               if (available_idle_cpu(cpu))
> > > > +                       return cpu;
> > > > +       }
> > > > +
> > > > +       return -1;
> > > > +}
> > > > +
> > > > +#else /* CONFIG_SCHED_CLUSTER */
> > > > +
> > > > +static inline int select_idle_cluster(struct task_struct *p, int target)
> > > > +{
> > > > +       return -1;
> > > > +}
> > > > +
> > > > +#endif /* CONFIG_SCHED_CLUSTER */
> > > > +
> > > >  /*
> > > >   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> > > >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against
> > > the
> > > > @@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct
> *p,
> > > int prev, int target)
> > > >         if ((unsigned)i < nr_cpumask_bits)
> > > >                 return i;
> > > >
> > > > +       i = select_idle_cluster(p, target);
> > > > +       if ((unsigned)i < nr_cpumask_bits)
> > > > +               return i;
> > >
> > > This is yet another loop in the fast wake up path.
> > >
> > > I'm curious to know which part of this patch really gives the perf improvement ?
> > > -Is it the new sched domain level with a shorter interval that is then
> > > used by Load balance to better spread task in the cluster and between
> > > clusters ?
> > > -Or this new loop in the wake up path which tries to keep threads in
> > > the same cluster ? which is at the opposite of the rest of the
> > > scheduler which tries to spread
> >
> > If I don't scan the cluster first for wake_affine, I see almost no large
> > hackbench change from the new sched_domain.
> > For example:
> > g=4 in hackbench on cpu0-cpu47 (two NUMA nodes)
> > w/o patch: 17.7647 (average time over 10 hackbench runs)
> > w/ the full patch: 10.4927
> > w/ patch but drop select_idle_cluster(): 15.0931
> 
> And for the case with one numa node ?

That is rather frustrating, as it gets worse:

g=1
Running in threaded mode with 1 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
w/ but dropped select_idle_cluster:
     7.816 7.589 7.319 7.556 7.443 7.459 7.636 7.427 7.425 7.395=7.5065

g=2
Running in threaded mode with 2 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
9.955=10.1006
w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
w/ but dropped select_idle_cluster:
     10.222 10.078 10.063 10.317 9.963 10.060 10.089 9.934 10.152 10.077=10.0955

g=3
Running in threaded mode with 3 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
15.721=15.7727
w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
14.895=14.9005
w/ but dropped select_idle_cluster (getting worse than w/o):
     16.892 16.962 17.248 17.392 17.336 17.705 17.113 17.633 17.477
17.378=17.3136

g=4
Running in threaded mode with 4 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
21.090=20.8187
w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
20.452=20.5647
w/ but dropped select_idle_cluster (getting worse than w/o):
     24.075 24.122 24.243 24.000 24.223 23.791 23.246 24.904 23.990
24.431=24.1025
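By the way, the "=xx.xxxx" suffix on each line above is simply the mean of the
ten per-run times. A tiny shell helper (my own sketch, not part of the patch
or of hackbench) can recompute it:

```shell
# Recompute the mean of ten hackbench run times. This helper only
# illustrates how the "=avg" suffixes above were derived; it is not
# part of the patch or of hackbench itself.
avg() { printf '%s\n' "$@" | awk '{ s += $1 } END { printf "%.4f\n", s / NR }'; }

avg 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674   # prints 7.5853
```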

> 
> I'd like to understand why this patch impacts much the numa case but
> not the  one numa node case.
> 
> >
> > So I don't think the hackbench improvement mainly comes from the
> > shorter load-balance interval of the new cluster domain.
> >
> > What really matters is select_idle_cluster(), according to my tests.
> >
> > >
> > > Also, could sched_feat(SIS_PROP) significantly impact your topology,
> > > because it breaks before looking at all cores in the LLC?
> > > And does this new loop extend the number of tested cores?
> >
> > In this case, a cluster always belongs to the LLC; the cluster domain
> > is the child of the LLC domain.
> 
> Yes. My point is: in select_idle_cpu, we don't always loop over all
> CPUs, especially when you have a large number of CPUs in the LLC.
> Instead, the loop can break after testing only 4 CPUs in some cases of
> short idle time (like hackbench). I don't know how the CPUs are
> numbered, but I can easily imagine that select_idle_cluster doesn't
> loop over the same CPUs as the few that are then tested in
> select_idle_cpu when it doesn't test all CPUs of the LLC.
> 
> >
> > Maybe the code Valentin suggested in his reply is a good way to
> > keep the code aligned with the existing select_idle_cpu():
> >
> > static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
> >  {
> >         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> > -       struct sched_domain *this_sd;
> > +       struct sched_domain *this_sd, *child = NULL;
> >         u64 avg_cost, avg_idle;
> >         u64 time;
> >         int this = smp_processor_id();
> > @@ -6150,14 +6150,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> >
> >         time = cpu_clock(this);
> >
> > -       cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +       do {
> > +               /* XXX: sd should start as SMT's parent */
> > +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +               if (child)
> > +                       cpumask_andnot(cpus, cpus, sched_domain_span(child));
> > +
> > +               for_each_cpu_wrap(cpu, cpus, target) {
> > +                       if (!--nr)
> > +                               return -1;
> > +                       if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> > +                               break;
> > +               }
> >
> > -       for_each_cpu_wrap(cpu, cpus, target) {
> > -               if (!--nr)
> > -                       return -1;
> > -               if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> > -                       break;
> > -       }
> > +               child = sd;
> > +               sd = sd->parent;
> > +       } while (sd && sd->flags & SD_SHARE_PKG_RESOURCES);
> >
> > >
> > > > +
> > > >         i = select_idle_cpu(p, sd, target);
> > > >         if ((unsigned)i < nr_cpumask_bits)
> > > >                 return i;
> > > > --
> > > > 2.7.4
> > > >
> >

Thanks
Barry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-02 10:16         ` Vincent Guittot
@ 2020-12-02 10:48           ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-02 10:48 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)



> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Wednesday, December 2, 2020 11:41 PM
> To: 'Vincent Guittot' <vincent.guittot@linaro.org>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> 
> 
> > -----Original Message-----
> > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > Sent: Wednesday, December 2, 2020 11:17 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> >
> > On Wed, 2 Dec 2020 at 10:20, Song Bao Hua (Barry Song)
> > <song.bao.hua@hisilicon.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > > Sent: Wednesday, December 2, 2020 9:27 PM
> > > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J.
> Wysocki
> > > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > > gregkh@linuxfoundation.org; Jonathan Cameron
> > <jonathan.cameron@huawei.com>;
> > > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>;
> Juri
> > > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> > <dietmar.eggemann@arm.com>;
> > > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>;
> Mel
> > > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > > >
> > > > On Tue, 1 Dec 2020 at 04:04, Barry Song <song.bao.hua@hisilicon.com> wrote:
> > > > >
> > > > > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and
> > > > > each cluster has 4 cpus. All clusters share L3 cache data, but each
> > > > > cluster has local L3 tag. On the other hand, each cluster will share
> > > > > some internal system bus. This means cache coherence overhead inside
> > > > > one cluster is much less than the overhead across clusters.
> > > > >
> > > > > +-----------------------------------+                          +---------+
> > > > > |  +------+    +------+            +---------------------------+         |
> > > > > |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> > > > > |  +------+    +------+             |    |           |         |         |
> > > > > |                                   +----+    L3     |         |         |
> > > > > |  +------+    +------+   cluster   |    |    tag    |         |         |
> > > > > |  | CPU2 |    | CPU3 |             |    |           |         |         |
> > > > > |  +------+    +------+             |    +-----------+         |         |
> > > > > |                                   |                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > |  +------+    +------+             +--------------------------+         |
> > > > > |  |      |    |      |             |    +-----------+         |         |
> > > > > |  +------+    +------+             |    |           |         |         |
> > > > > |                                   |    |    L3     |         |         |
> > > > > |  +------+    +------+             +----+    tag    |         |         |
> > > > > |  |      |    |      |             |    |           |         |         |
> > > > > |  +------+    +------+             |    +-----------+         |         |
> > > > > |                                   |                          |         |
> > > > > +-----------------------------------+                          |   L3    |
> > > > >                                                                |   data  |
> > > > > +-----------------------------------+                          |         |
> > > > > |  +------+    +------+             |    +-----------+         |         |
> > > > > |  |      |    |      |             |    |           |         |         |
> > > > > |  +------+    +------+             +----+    L3     |         |         |
> > > > > |                                   |    |    tag    |         |         |
> > > > > |  +------+    +------+             |    |           |         |         |
> > > > > |  |      |    |      |            ++    +-----------+         |         |
> > > > > |  +------+    +------+            |---------------------------+         |
> > > > > +-----------------------------------|                          |         |
> > > > > +-----------------------------------|                          |         |
> > > > > |  +------+    +------+            +---------------------------+         |
> > > > > |  |      |    |      |             |    +-----------+         |         |
> > > > > |  +------+    +------+             |    |           |         |         |
> > > > > |                                   +----+    L3     |         |         |
> > > > > |  +------+    +------+             |    |    tag    |         |         |
> > > > > |  |      |    |      |             |    |           |         |         |
> > > > > |  +------+    +------+             |    +-----------+         |         |
> > > > > |                                   |                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > |  +------+    +------+             +--------------------------+         |
> > > > > |  |      |    |      |             |   +-----------+          |         |
> > > > > |  +------+    +------+             |   |           |          |         |
> > > > > |                                   |   |    L3     |          |         |
> > > > > |  +------+    +------+             +---+    tag    |          |         |
> > > > > |  |      |    |      |             |   |           |          |         |
> > > > > |  +------+    +------+             |   +-----------+          |         |
> > > > > |                                   |                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > +-----------------------------------+                         ++         |
> > > > > |  +------+    +------+             +--------------------------+         |
> > > > > |  |      |    |      |             |  +-----------+           |         |
> > > > > |  +------+    +------+             |  |           |           |         |
> > > > > |                                   |  |    L3     |           |         |
> > > > > |  +------+    +------+             +--+    tag    |           |         |
> > > > > |  |      |    |      |             |  |           |           |         |
> > > > > |  +------+    +------+             |  +-----------+           |         |
> > > > > |                                   |                          +---------+
> > > > > +-----------------------------------+
> > > > >
> > > > > This patch adds the sched_domain for clusters. On kunpeng 920, without
> > > > > this patch, domain0 of cpu0 would be MC for cpu0-cpu23 with
> > > > > min_interval=24, max_interval=48; with this patch, MC becomes domain1,
> > > > > a new domain0 "CL" including cpu0-cpu3 is added with min_interval=4
> > > > > and max_interval=8.
> > > > > This will affect load balance. For example, without this patch, when
> > > > > cpu0 becomes idle, it will pull a task from cpu1-cpu15. With this
> > > > > patch, cpu0 will try to pull a task from cpu1-cpu3 first. This will
> > > > > have much less task-migration overhead.
> > > > >
> > > > > On the other hand, when doing WAKE_AFFINE, this patch will try to find
> > > > > a core in the target cluster before scanning the LLC domain.
> > > > > This means it will proactively use a core which has better affinity
> > > > > with the target core first.
> > > >
> > > > That is the opposite of what we usually try to do in the fast
> > > > wakeup path: minimize resource sharing, for example by finding an
> > > > idle core with all SMT siblings idle
> > >
> > > In the wake_affine case, I guess we actually want some kind of
> > > resource sharing, such as the LLC, to get the waker and wakee closer
> >
> > In wake_affine, we don't want to move outside the LLC, but then within
> > the LLC we try to minimize resource sharing, e.g. by looking for a core
> > that is fully idle for SMT
> >
> > > to each other. find_idlest_cpu() is really the opposite.
> > >
> > > So the real question is: is the LLC always the right scope for
> > > choosing an idle sibling?
> >
> > That's the eternal question: spread or gather
> 
> Indeed.
> 
> >
> > >
> > > In this case, 6 clusters are in the same LLC, but the hardware behaves
> > > differently within a single cluster and across multiple clusters.
> > >
> > >
> > > >
> > > > >
> > > > > Not much benchmarking has been done yet, but here is a rough hackbench
> > > > > result. We run the command below with different -g parameters to
> > > > > increase system load, changing g from 1 to 4; for each value we run
> > > > > the benchmark ten times and average the recorded times:
> > > > >
> > > > > First, we run hackbench in only one NUMA node(cpu0-cpu23):
> > > > > $ numactl -N 0 hackbench -p -T -l 100000 -g $1
> > > >
> > > > What is your ref tree? v5.10-rcX or tip/sched/core?
> > >
> > > Actually I was using the 5.9 release. That may seem weird,
> > > but the reason is that the disk driver hangs
> > > on my hardware with 5.10-rcX.
> >
> > In fact there are several changes in v5.10 and tip/sched/core that
> > could help your topology
> 
> Will figure out some way to try.
> 
> >
> > >
> > > >
> > > > >
> > > > > g=1 (seen cpu utilization around 50% for each core)
> > > > > Running in threaded mode with 1 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > > > performance improvement w/ patch: -1.01%
> > > > >
> > > > > g=2 (seen cpu utilization around 70% for each core)
> > > > > Running in threaded mode with 2 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > > > 9.955=10.1006
> > > > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > > > performance improvement w/ patch: 4.08%
> > > > >
> > > > > g=3 (seen cpu utilization around 90% for each core)
> > > > > Running in threaded mode with 3 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > > > 15.721=15.7727
> > > > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > > > 14.895=14.9005
> > > > > performance improvement w/ patch: 5.53%
> > > > >
> > > > > g=4
> > > > > Running in threaded mode with 4 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > > > 21.090=20.8187
> > > > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > > > 20.452=20.5647
> > > > > performance improvement w/ patch: 1.22%
> > > > >
> > > > > After that, we run the same hackbench in both NUMA nodes(cpu0-cpu47):
> > > > > g=1
> > > > > w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
> > > > > w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
> > > > > performance improvement by patch: -2.2%
> > > > >
> > > > > g=2
> > > > > w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
> > > > > w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
> > > > > performance improvement by patch: 9.3%
> > > > >
> > > > > g=3
> > > > > w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235
> > > > 11.849=11.9557
> > > > > w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
> > > > > performance improvement by patch: 20.2%
> > > > >
> > > > > g=4
> > > > > w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859
> > > > 18.515=17.7647
> > > > > w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467
> > > > 10.401=10.4927
> > > > > performance improvement by patch: 40.9%
> > > > >
> > > > > g=5
> > > > > w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073
> > > > 25.819=27.4382
> > > > > w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954
> > > > 13.907=13.9735
> > > > > performance improvement by patch: 49.1%
> > > > >
> > > > > It seems the patch can bring a huge improvement on hackbench,
> > > > > especially when we bind hackbench to all of cpu0-cpu47, compared to
> > > > > 5.53% when running on a single NUMA node (cpu0-cpu23)
> > > >
> > > > Interesting that this patch mainly impacts the numa case
> > > >
> > > > >
> > > > > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > > > > ---
> > > > >  arch/arm64/Kconfig       |  7 +++++++
> > > > >  arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
> > > > >  include/linux/topology.h |  7 +++++++
> > > > >  kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
> > > > >  4 files changed, 66 insertions(+)
> > > > >
> > > > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > > > index 6d23283..3583c26 100644
> > > > > --- a/arch/arm64/Kconfig
> > > > > +++ b/arch/arm64/Kconfig
> > > > > @@ -938,6 +938,13 @@ config SCHED_MC
> > > > >           making when dealing with multi-core CPU chips at a cost of slightly
> > > > >           increased overhead in some places. If unsure say N here.
> > > > >
> > > > > +config SCHED_CLUSTER
> > > > > +       bool "Cluster scheduler support"
> > > > > +       help
> > > > > +         Cluster scheduler support improves the CPU scheduler's decision
> > > > > +         making when dealing with machines that have clusters(sharing
> > internal
> > > > > +         bus or sharing LLC cache tag). If unsure say N here.
> > > > > +
> > > > >  config SCHED_SMT
> > > > >         bool "SMT scheduler support"
> > > > >         help
> > > > > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > > > > index 355ee9e..5c8f026 100644
> > > > > --- a/arch/arm64/kernel/smp.c
> > > > > +++ b/arch/arm64/kernel/smp.c
> > > > > @@ -32,6 +32,7 @@
> > > > >  #include <linux/irq_work.h>
> > > > >  #include <linux/kexec.h>
> > > > >  #include <linux/kvm_host.h>
> > > > > +#include <linux/sched/topology.h>
> > > > >
> > > > >  #include <asm/alternative.h>
> > > > >  #include <asm/atomic.h>
> > > > > @@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
> > > > >         }
> > > > >  }
> > > > >
> > > > > +static struct sched_domain_topology_level arm64_topology[] = {
> > > > > +#ifdef CONFIG_SCHED_SMT
> > > > > +        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> > > > > +#endif
> > > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > > +       { cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
> > > > > +#endif
> > > > > +#ifdef CONFIG_SCHED_MC
> > > > > +        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> > > > > +#endif
> > > > > +       { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> > > > > +        { NULL, },
> > > > > +};
> > > > > +
> > > > >  void __init smp_prepare_cpus(unsigned int max_cpus)
> > > > >  {
> > > > >         const struct cpu_operations *ops;
> > > > > @@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
> > > > >
> > > > >         init_cpu_topology();
> > > > >
> > > > > +       set_sched_topology(arm64_topology);
> > > > > +
> > > > >         this_cpu = smp_processor_id();
> > > > >         store_cpu_topology(this_cpu);
> > > > >         numa_store_cpu_info(this_cpu);
> > > > > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > > > > index 5f66648..2c823c0 100644
> > > > > --- a/include/linux/topology.h
> > > > > +++ b/include/linux/topology.h
> > > > > @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
> > > > >  }
> > > > >  #endif
> > > > >
> > > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > > > > +{
> > > > > +       return topology_cluster_cpumask(cpu);
> > > > > +}
> > > > > +#endif
> > > > > +
> > > > >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> > > > >  {
> > > > >         return cpumask_of_node(cpu_to_node(cpu));
> > > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > > index 1a68a05..ae8ec910 100644
> > > > > --- a/kernel/sched/fair.c
> > > > > +++ b/kernel/sched/fair.c
> > > > > @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct task_struct *p, int target)
> > > > >
> > > > >  #endif /* CONFIG_SCHED_SMT */
> > > > >
> > > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > > +/*
> > > > > + * Scan the local CLUSTER mask for idle CPUs.
> > > > > + */
> > > > > +static int select_idle_cluster(struct task_struct *p, int target)
> > > > > +{
> > > > > +       int cpu;
> > > > > +
> > > > > +       /* right now, no hardware with both cluster and smt to run */
> > > > > +       if (sched_smt_active())
> > > >
> > > > don't use smt static key but a dedicated one if needed
> > >
> > > Sure.
> > >
> > > >
> > > > > +               return -1;
> > > > > +
> > > > > +       for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
> > > > > +               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> > > > > +                       continue;
> > > > > +               if (available_idle_cpu(cpu))
> > > > > +                       return cpu;
> > > > > +       }
> > > > > +
> > > > > +       return -1;
> > > > > +}
> > > > > +
> > > > > +#else /* CONFIG_SCHED_CLUSTER */
> > > > > +
> > > > > +static inline int select_idle_cluster(struct task_struct *p, int target)
> > > > > +{
> > > > > +       return -1;
> > > > > +}
> > > > > +
> > > > > +#endif /* CONFIG_SCHED_CLUSTER */
> > > > > +
> > > > >  /*
> > > > >   * Scan the LLC domain for idle CPUs; this is dynamically regulated
> by
> > > > >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
> > > > > @@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> > > > >         if ((unsigned)i < nr_cpumask_bits)
> > > > >                 return i;
> > > > >
> > > > > +       i = select_idle_cluster(p, target);
> > > > > +       if ((unsigned)i < nr_cpumask_bits)
> > > > > +               return i;
> > > >
> > > > This is yet another loop in the fast wakeup path.
> > > >
> > > > I'm curious to know which part of this patch really gives the perf
> > > > improvement?
> > > > - Is it the new sched domain level with a shorter interval that is then
> > > >   used by load balance to better spread tasks within the cluster and
> > > >   between clusters?
> > > > - Or is it this new loop in the wakeup path, which tries to keep
> > > >   threads in the same cluster? That is the opposite of the rest of the
> > > >   scheduler, which tries to spread tasks.
> > >
> > > If I don't scan the cluster first for wake_affine, I almost don't see
> > > a large hackbench change from the new sched_domain.
> > > For example:
> > > g=4 in hackbench on cpu0-cpu47 (two NUMA nodes)
> > > w/o patch: 17.7647 (average time over 10 hackbench runs)
> > > w/ the full patch: 10.4927
> > > w/ patch but drop select_idle_cluster(): 15.0931
> >
> > And for the case with one numa node ?
> 
> That is rather frustrating, as it gets worse:
> 
> g=1
> Running in threaded mode with 1 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> w/ but dropped select_idle_cluster:
>      7.816 7.589 7.319 7.556 7.443 7.459 7.636 7.427 7.425 7.395=7.5065
> 
> g=2
> Running in threaded mode with 2 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> 9.955=10.1006
> w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> w/ but dropped select_idle_cluster:
>      10.222 10.078 10.063 10.317 9.963 10.060 10.089 9.934 10.152 10.077=10.0955
> 
> g=3
> Running in threaded mode with 3 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> 15.721=15.7727
> w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> 14.895=14.9005
> w/ but dropped select_idle_cluster (getting worse than w/o):
>      16.892 16.962 17.248 17.392 17.336 17.705 17.113 17.633 17.477
> 17.378=17.3136
> 
> g=4
> Running in threaded mode with 4 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> 21.090=20.8187
> w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> 20.452=20.5647
> w/ but dropped select_idle_cluster (getting worse than w/o):
>      24.075 24.122 24.243 24.000 24.223 23.791 23.246 24.904 23.990
> 24.431=24.1025

Sorry, please ignore this. I had added some printk calls here while testing
one NUMA node. I will send the updated data in another email.

Thanks
Barry



* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
@ 2020-12-02 10:48           ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-02 10:48 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Juri Lelli, Mark Rutland, Zengtao (B),
	Peter Zijlstra, Catalin Marinas, Jonathan Cameron,
	Rafael J. Wysocki, linux-kernel, Steven Rostedt,
	Dietmar Eggemann, Ben Segall, ACPI Devel Maling List,
	Ingo Molnar, Linuxarm, Mel Gorman, xuwei (O),
	gregkh, Will Deacon, Valentin Schneider, LAK, Cc: Len Brown



> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Wednesday, December 2, 2020 11:41 PM
> To: 'Vincent Guittot' <vincent.guittot@linaro.org>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> 
> 
> > -----Original Message-----
> > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > Sent: Wednesday, December 2, 2020 11:17 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> >
> > On Wed, 2 Dec 2020 at 10:20, Song Bao Hua (Barry Song)
> > <song.bao.hua@hisilicon.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > > Sent: Wednesday, December 2, 2020 9:27 PM
> > > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J.
> Wysocki
> > > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > > gregkh@linuxfoundation.org; Jonathan Cameron
> > <jonathan.cameron@huawei.com>;
> > > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>;
> Juri
> > > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> > <dietmar.eggemann@arm.com>;
> > > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>;
> Mel
> > > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > > >
> > > > On Tue, 1 Dec 2020 at 04:04, Barry Song <song.bao.hua@hisilicon.com> wrote:
> > > > >
> > > > > The ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node,
> > > > > and each cluster has 4 CPUs. All clusters share L3 cache data, but
> > > > > each cluster has a local L3 tag. In addition, the clusters share some
> > > > > internal system bus. This means cache-coherence overhead inside one
> > > > > cluster is much less than the overhead across clusters.
> > > > >
> > > > > +-----------------------------------+                          +---------+
> > > > > |  +------+    +------+            +---------------------------+         |
> > > > > |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> > > > > |  +------+    +------+             |    |           |         |         |
> > > > > |                                   +----+    L3     |         |         |
> > > > > |  +------+    +------+   cluster   |    |    tag    |         |         |
> > > > > |  | CPU2 |    | CPU3 |             |    |           |         |         |
> > > > > |  +------+    +------+             |    +-----------+         |         |
> > > > > |                                   |                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > |  +------+    +------+             +--------------------------+         |
> > > > > |  |      |    |      |             |    +-----------+         |         |
> > > > > |  +------+    +------+             |    |           |         |         |
> > > > > |                                   |    |    L3     |         |         |
> > > > > |  +------+    +------+             +----+    tag    |         |         |
> > > > > |  |      |    |      |             |    |           |         |         |
> > > > > |  +------+    +------+             |    +-----------+         |         |
> > > > > |                                   |                          |         |
> > > > > +-----------------------------------+                          |   L3    |
> > > > >                                                                |   data  |
> > > > > +-----------------------------------+                          |         |
> > > > > |  +------+    +------+             |    +-----------+         |         |
> > > > > |  |      |    |      |             |    |           |         |         |
> > > > > |  +------+    +------+             +----+    L3     |         |         |
> > > > > |                                   |    |    tag    |         |         |
> > > > > |  +------+    +------+             |    |           |         |         |
> > > > > |  |      |    |      |            ++    +-----------+         |         |
> > > > > |  +------+    +------+            |---------------------------+         |
> > > > > +-----------------------------------|                          |         |
> > > > > +-----------------------------------|                          |         |
> > > > > |  +------+    +------+            +---------------------------+         |
> > > > > |  |      |    |      |             |    +-----------+         |         |
> > > > > |  +------+    +------+             |    |           |         |         |
> > > > > |                                   +----+    L3     |         |         |
> > > > > |  +------+    +------+             |    |    tag    |         |         |
> > > > > |  |      |    |      |             |    |           |         |         |
> > > > > |  +------+    +------+             |    +-----------+         |         |
> > > > > |                                   |                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > |  +------+    +------+             +--------------------------+         |
> > > > > |  |      |    |      |             |   +-----------+          |         |
> > > > > |  +------+    +------+             |   |           |          |         |
> > > > > |                                   |   |    L3     |          |         |
> > > > > |  +------+    +------+             +---+    tag    |          |         |
> > > > > |  |      |    |      |             |   |           |          |         |
> > > > > |  +------+    +------+             |   +-----------+          |         |
> > > > > |                                   |                          |         |
> > > > > +-----------------------------------+                          |         |
> > > > > +-----------------------------------+                         ++         |
> > > > > |  +------+    +------+             +--------------------------+         |
> > > > > |  |      |    |      |             |  +-----------+           |         |
> > > > > |  +------+    +------+             |  |           |           |         |
> > > > > |                                   |  |    L3     |           |         |
> > > > > |  +------+    +------+             +--+    tag    |           |         |
> > > > > |  |      |    |      |             |  |           |           |         |
> > > > > |  +------+    +------+             |  +-----------+           |         |
> > > > > |                                   |                          +---------+
> > > > > +-----------------------------------+
> > > > >
> > > > > This patch adds a sched_domain for clusters. On Kunpeng 920, without
> > > > > this patch, domain0 of cpu0 is MC, spanning cpu0-cpu23 with
> > > > > min_interval=24 and max_interval=48; with this patch, MC becomes
> > > > > domain1, and a new domain0 "CL" covering cpu0-cpu3 is added with
> > > > > min_interval=4 and max_interval=8.
> > > > > This affects load balancing. For example, without this patch, when
> > > > > cpu0 becomes idle it will pull a task from cpu1-cpu15; with this
> > > > > patch, cpu0 will try to pull a task from cpu1-cpu3 first, with much
> > > > > lower task-migration overhead.
> > > > >
> > > > > In addition, when doing WAKE_AFFINE, this patch tries to find a
> > > > > core in the target cluster before scanning the LLC domain, so it
> > > > > proactively prefers a core with better cache affinity to the
> > > > > target core.
> > > >
> > > > Which is the opposite of what we usually try to do in the fast
> > > > wakeup path: minimizing resource sharing, for example by finding an
> > > > idle core whose SMT siblings are all idle.
> > >
> > > In the wake_affine case, I guess we actually want some kind of
> > > resource sharing, such as LLC, to get the waker and wakee closer
> >
> > In wake_affine, we don't want to move outside the LLC, but within the
> > LLC we try to minimize resource sharing, e.g. looking for a core that
> > is fully idle including its SMT siblings.
> >
> > > to each other. find_idlest_cpu() is really the opposite.
> > >
> > > So the real question is: is the LLC always the right scope for
> > > choosing an idle sibling?
> >
> > That's the eternal question: spread or gather
> 
> Indeed.
> 
> >
> > >
> > > In this case, the 6 clusters are in the same LLC, but the hardware
> > > behaves differently within a single cluster than across clusters.
> > >
> > >
> > > >
> > > > >
> > > > > Not much benchmarking has been done yet, but here is a rough
> > > > > hackbench result. We ran the command below with different -g
> > > > > parameters to increase system load, varying g from 1 to 4; for each
> > > > > value we ran the benchmark ten times and averaged the results:
> > > > >
> > > > > First, we run hackbench in only one NUMA node(cpu0-cpu23):
> > > > > $ numactl -N 0 hackbench -p -T -l 100000 -g $1
> > > >
> > > > What is your ref tree ? v5.10-rcX or tip/sched/core ?
> > >
> > > Actually I was using the 5.9 release. That must seem odd, but the
> > > reason is that the disk driver hangs on my hardware with 5.10-rcX.
> >
> > In fact there are several changes in v5.10 and tip/sched/core that
> > could help your topology.
> 
> I will figure out some way to try them.
> 
> >
> > >
> > > >
> > > > >
> > > > > g=1 (seen cpu utilization around 50% for each core)
> > > > > Running in threaded mode with 1 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > > > performance improvement w/ patch: -1.01%
> > > > >
> > > > > g=2 (seen cpu utilization around 70% for each core)
> > > > > Running in threaded mode with 2 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > > > 9.955=10.1006
> > > > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > > > performance improvement w/ patch: 4.08%
> > > > >
> > > > > g=3 (seen cpu utilization around 90% for each core)
> > > > > Running in threaded mode with 3 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > > > 15.721=15.7727
> > > > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > > > 14.895=14.9005
> > > > > performance improvement w/ patch: 5.53%
> > > > >
> > > > > g=4
> > > > > Running in threaded mode with 4 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > > > 21.090=20.8187
> > > > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > > > 20.452=20.5647
> > > > > performance improvement w/ patch: 1.22%
> > > > >
> > > > > After that, we run the same hackbench in both NUMA nodes(cpu0-cpu47):
> > > > > g=1
> > > > > w/o: 7.351 7.416 7.486 7.358 7.516 7.403 7.413 7.411 7.421 7.454=7.4229
> > > > > w/ : 7.609 7.596 7.647 7.571 7.687 7.571 7.520 7.513 7.530 7.681=7.5925
> > > > > performance improvement by patch: -2.2%
> > > > >
> > > > > g=2
> > > > > w/o: 9.046 9.190 9.053 8.950 9.101 8.930 9.143 8.928 8.905 9.034=9.028
> > > > > w/ : 8.247 8.057 8.258 8.310 8.083 8.201 8.044 8.158 8.382 8.173=8.1913
> > > > > performance improvement by patch: 9.3%
> > > > >
> > > > > g=3
> > > > > w/o: 11.664 11.767 11.277 11.619 12.557 12.760 11.664 12.165 12.235
> > > > 11.849=11.9557
> > > > > w/ : 9.387 9.461 9.650 9.613 9.591 9.454 9.496 9.716 9.327 9.722=9.5417
> > > > > performance improvement by patch: 20.2%
> > > > >
> > > > > g=4
> > > > > w/o: 17.347 17.299 17.655 18.775 16.707 18.879 17.255 18.356 16.859
> > > > 18.515=17.7647
> > > > > w/ : 10.416 10.496 10.601 10.318 10.459 10.617 10.510 10.642 10.467
> > > > 10.401=10.4927
> > > > > performance improvement by patch: 40.9%
> > > > >
> > > > > g=5
> > > > > w/o: 27.805 26.633 24.138 28.086 24.405 27.922 30.043 28.458 31.073
> > > > 25.819=27.4382
> > > > > w/ : 13.817 13.976 14.166 13.688 14.132 14.095 14.003 13.997 13.954
> > > > 13.907=13.9735
> > > > > performance improvement by patch: 49.1%
> > > > >
> > > > > It seems the patch can bring a huge improvement on hackbench,
> > > > > especially when we bind hackbench to all of cpu0-cpu47, compared to
> > > > > the 5.53% seen while running on a single NUMA node (cpu0-cpu23).
> > > >
> > > > Interesting that this patch mainly impacts the numa case
> > > >
> > > > >
> > > > > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > > > > ---
> > > > >  arch/arm64/Kconfig       |  7 +++++++
> > > > >  arch/arm64/kernel/smp.c  | 17 +++++++++++++++++
> > > > >  include/linux/topology.h |  7 +++++++
> > > > >  kernel/sched/fair.c      | 35 +++++++++++++++++++++++++++++++++++
> > > > >  4 files changed, 66 insertions(+)
> > > > >
> > > > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > > > index 6d23283..3583c26 100644
> > > > > --- a/arch/arm64/Kconfig
> > > > > +++ b/arch/arm64/Kconfig
> > > > > @@ -938,6 +938,13 @@ config SCHED_MC
> > > > >           making when dealing with multi-core CPU chips at a cost of slightly
> > > > >           increased overhead in some places. If unsure say N here.
> > > > >
> > > > > +config SCHED_CLUSTER
> > > > > +       bool "Cluster scheduler support"
> > > > > +       help
> > > > > +         Cluster scheduler support improves the CPU scheduler's decision
> > > > > +         making when dealing with machines that have clusters(sharing
> > internal
> > > > > +         bus or sharing LLC cache tag). If unsure say N here.
> > > > > +
> > > > >  config SCHED_SMT
> > > > >         bool "SMT scheduler support"
> > > > >         help
> > > > > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> > > > > index 355ee9e..5c8f026 100644
> > > > > --- a/arch/arm64/kernel/smp.c
> > > > > +++ b/arch/arm64/kernel/smp.c
> > > > > @@ -32,6 +32,7 @@
> > > > >  #include <linux/irq_work.h>
> > > > >  #include <linux/kexec.h>
> > > > >  #include <linux/kvm_host.h>
> > > > > +#include <linux/sched/topology.h>
> > > > >
> > > > >  #include <asm/alternative.h>
> > > > >  #include <asm/atomic.h>
> > > > > @@ -726,6 +727,20 @@ void __init smp_init_cpus(void)
> > > > >         }
> > > > >  }
> > > > >
> > > > > +static struct sched_domain_topology_level arm64_topology[] = {
> > > > > +#ifdef CONFIG_SCHED_SMT
> > > > > +        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> > > > > +#endif
> > > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > > +       { cpu_clustergroup_mask, cpu_core_flags, SD_INIT_NAME(CL) },
> > > > > +#endif
> > > > > +#ifdef CONFIG_SCHED_MC
> > > > > +        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> > > > > +#endif
> > > > > +       { cpu_cpu_mask, SD_INIT_NAME(DIE) },
> > > > > +        { NULL, },
> > > > > +};
> > > > > +
> > > > >  void __init smp_prepare_cpus(unsigned int max_cpus)
> > > > >  {
> > > > >         const struct cpu_operations *ops;
> > > > > @@ -735,6 +750,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
> > > > >
> > > > >         init_cpu_topology();
> > > > >
> > > > > +       set_sched_topology(arm64_topology);
> > > > > +
> > > > >         this_cpu = smp_processor_id();
> > > > >         store_cpu_topology(this_cpu);
> > > > >         numa_store_cpu_info(this_cpu);
> > > > > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > > > > index 5f66648..2c823c0 100644
> > > > > --- a/include/linux/topology.h
> > > > > +++ b/include/linux/topology.h
> > > > > @@ -211,6 +211,13 @@ static inline const struct cpumask *cpu_smt_mask(int
> > > > cpu)
> > > > >  }
> > > > >  #endif
> > > > >
> > > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > > > > +{
> > > > > +       return topology_cluster_cpumask(cpu);
> > > > > +}
> > > > > +#endif
> > > > > +
> > > > >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> > > > >  {
> > > > >         return cpumask_of_node(cpu_to_node(cpu));
> > > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > > index 1a68a05..ae8ec910 100644
> > > > > --- a/kernel/sched/fair.c
> > > > > +++ b/kernel/sched/fair.c
> > > > > @@ -6106,6 +6106,37 @@ static inline int select_idle_smt(struct
> task_struct
> > > > *p, int target)
> > > > >
> > > > >  #endif /* CONFIG_SCHED_SMT */
> > > > >
> > > > > +#ifdef CONFIG_SCHED_CLUSTER
> > > > > +/*
> > > > > + * Scan the local CLUSTER mask for idle CPUs.
> > > > > + */
> > > > > +static int select_idle_cluster(struct task_struct *p, int target)
> > > > > +{
> > > > > +       int cpu;
> > > > > +
> > > > > +       /* right now, no hardware with both cluster and smt to run */
> > > > > +       if (sched_smt_active())
> > > >
> > > > don't use smt static key but a dedicated one if needed
> > >
> > > Sure.
> > >
> > > >
> > > > > +               return -1;
> > > > > +
> > > > > +       for_each_cpu_wrap(cpu, cpu_cluster_mask(target), target) {
> > > > > +               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> > > > > +                       continue;
> > > > > +               if (available_idle_cpu(cpu))
> > > > > +                       return cpu;
> > > > > +       }
> > > > > +
> > > > > +       return -1;
> > > > > +}
> > > > > +
> > > > > +#else /* CONFIG_SCHED_CLUSTER */
> > > > > +
> > > > > +static inline int select_idle_cluster(struct task_struct *p, int target)
> > > > > +{
> > > > > +       return -1;
> > > > > +}
> > > > > +
> > > > > +#endif /* CONFIG_SCHED_CLUSTER */
> > > > > +
> > > > >  /*
> > > > >   * Scan the LLC domain for idle CPUs; this is dynamically regulated
> by
> > > > >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against
> > > > the
> > > > > @@ -6270,6 +6301,10 @@ static int select_idle_sibling(struct task_struct
> > *p,
> > > > int prev, int target)
> > > > >         if ((unsigned)i < nr_cpumask_bits)
> > > > >                 return i;
> > > > >
> > > > > +       i = select_idle_cluster(p, target);
> > > > > +       if ((unsigned)i < nr_cpumask_bits)
> > > > > +               return i;
> > > >
> > > > This is yet another loop in the fast wakeup path.
> > > >
> > > > I'm curious to know which part of this patch really gives the perf
> > > > improvement:
> > > > - Is it the new sched_domain level with a shorter interval, which load
> > > >   balancing then uses to better spread tasks within and between
> > > >   clusters?
> > > > - Or is it this new loop in the wakeup path, which tries to keep
> > > >   threads in the same cluster? That is the opposite of the rest of the
> > > >   scheduler, which tries to spread.
> > >
> > > If I don't scan the cluster first for wake_affine, I see almost no
> > > large hackbench change from the new sched_domain alone.
> > > For example:
> > > g=4 in hackbench on cpu0-cpu47 (two NUMA nodes)
> > > w/o patch: 17.7647 (average time over 10 hackbench runs)
> > > w/ the full patch: 10.4927
> > > w/ patch but drop select_idle_cluster(): 15.0931
> >
> > And for the case with one numa node ?
> 
> That would be very frustrating as it is getting worse:
> 
> g=1
> Running in threaded mode with 1 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> w/ but dropped select_idle_cluster:
>      7.816 7.589 7.319 7.556 7.443 7.459 7.636 7.427 7.425 7.395=7.5065
> 
> g=2
> Running in threaded mode with 2 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> 9.955=10.1006
> w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> w/ but dropped select_idle_cluster:
>      10.222 10.078 10.063 10.317 9.963 10.060 10.089 9.934 10.152 10.077=10.0955
> 
> g=3
> Running in threaded mode with 3 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> 15.721=15.7727
> w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> 14.895=14.9005
> w/ but dropped select_idle_cluster(getting worse than w/o):
>      16.892 16.962 17.248 17.392 17.336 17.705 17.113 17.633 17.477
> 17.378=17.3136
> 
> g=4
> Running in threaded mode with 4 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> 21.090=20.8187
> w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> 20.452=20.5647
> w/ but dropped select_idle_cluster(getting worse than w/o):
>      24.075 24.122 24.243 24.000 24.223 23.791 23.246 24.904 23.990
> 24.431=24.1025

Sorry, please ignore these numbers. I had left some printk() calls in
while testing the single-NUMA case. I will send updated data in another
email.

Thanks
Barry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-02 10:16         ` Vincent Guittot
@ 2020-12-02 20:58           ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-02 20:58 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)

> 
> Sorry. Please ignore this. I added some printk here while testing
> one numa. Will update you the data in another email.

Re-tested in one NUMA node (cpu0-cpu23):

g=1
Running in threaded mode with 1 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
w/ but dropped select_idle_cluster:
     7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611

g=2
Running in threaded mode with 2 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
9.955=10.1006
w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
w/ but dropped select_idle_cluster:
     9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047

g=3
Running in threaded mode with 3 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
15.721=15.7727
w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
14.895=14.9005
w/ but dropped select_idle_cluster:
     15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228 15.325=15.3591

g=4
Running in threaded mode with 4 groups using 40 file descriptors
Each sender will pass 100000 messages of 100 bytes
w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
w/ but dropped select_idle_cluster:
     19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226 20.562=20.2306
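For reference, each figure above is the arithmetic mean of the ten runs, and the improvement percentages quoted earlier in the thread follow (w/o - w/) / w/o. A quick sanity check of the arithmetic only (numbers copied from the g=3 rows; nothing re-run here):

```python
# Sanity-check the reported g=3 averages (times in seconds, single NUMA
# node, 10 hackbench runs each). Numbers copied from the rows above.
w_o = [15.885, 15.254, 15.932, 15.647, 16.120, 15.878, 15.857, 15.759,
       15.674, 15.721]
w = [14.974, 14.657, 13.969, 14.985, 14.728, 15.665, 15.191, 14.995,
     14.946, 14.895]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(w_o), 4))  # 15.7727, the "w/o" average
print(round(mean(w), 4))    # 14.9005, the "w/" average

# Improvement convention used in the thread: (w/o - w/) / w/o * 100
print(round((mean(w_o) - mean(w)) / mean(w_o) * 100, 2))  # 5.53
```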

Thanks
Barry


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-02 20:58           ` Song Bao Hua (Barry Song)
@ 2020-12-03  9:03             ` Vincent Guittot
  -1 siblings, 0 replies; 48+ messages in thread
From: Vincent Guittot @ 2020-12-03  9:03 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)

On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
> >
> > Sorry. Please ignore this. I added some printk here while testing
> > one numa. Will update you the data in another email.
>
> Re-tested in one NUMA node(cpu0-cpu23):
>
> g=1
> Running in threaded mode with 1 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> w/ but dropped select_idle_cluster:
>      7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611
>
> g=2
> Running in threaded mode with 2 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> 9.955=10.1006
> w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> w/ but dropped select_idle_cluster:
>      9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047
>
> g=3
> Running in threaded mode with 3 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> 15.721=15.7727
> w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> 14.895=14.9005
> w/ but dropped select_idle_cluster:
>      15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228 15.325=15.3591
>
> g=4
> Running in threaded mode with 4 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
> w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
> w/ but dropped select_idle_cluster:
>      19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226 20.562=20.2306
>

I assume you ran this on v5.9, as in the previous tests.
The results don't show any real benefit from select_idle_cluster()
inside a node, whereas that is where we would expect most of the
benefit. We need to understand why the impact shows up in the NUMA
tests only.

> Thanks
> Barry
>
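To make the two effects being separated above concrete, here is a toy user-space model of the wakeup ordering the patch proposes (cluster scan before the LLC-wide scan). The mask helpers and the 4-CPUs-per-cluster / 24-CPU-LLC layout are illustrative stand-ins for the kernel's topology masks, not the real API:

```python
# Toy model of select_idle_sibling() with the proposed cluster-first step.
# Layout assumed for illustration: clusters of 4 CPUs inside a 24-CPU LLC.

def cpu_cluster_mask(cpu, cluster_size=4):
    base = cpu - cpu % cluster_size
    return range(base, base + cluster_size)

def cpu_llc_mask(cpu, llc_size=24):
    base = cpu - cpu % llc_size
    return range(base, base + llc_size)

def select_idle_sibling(target, idle, allowed):
    # Proposed step: scan the target's cluster first, since coherence
    # traffic inside the cluster is cheaper.
    for cpu in cpu_cluster_mask(target):
        if cpu in idle and cpu in allowed:
            return cpu
    # Existing behaviour: fall back to scanning the whole LLC domain.
    for cpu in cpu_llc_mask(target):
        if cpu in idle and cpu in allowed:
            return cpu
    return -1

allowed = set(range(24))
print(select_idle_sibling(0, {3, 10}, allowed))  # 3: in-cluster idle CPU wins
print(select_idle_sibling(0, {10}, allowed))     # 10: falls back to the LLC
```

Whether that first loop pays off, versus the better load-balance spread from the shorter-interval domain alone, is exactly what the single-node numbers above are probing.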

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
@ 2020-12-03  9:03             ` Vincent Guittot
  0 siblings, 0 replies; 48+ messages in thread
From: Vincent Guittot @ 2020-12-03  9:03 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: Juri Lelli, Mark Rutland, Zengtao (B),
	Peter Zijlstra, Catalin Marinas, Jonathan Cameron,
	Rafael J. Wysocki, linux-kernel, Steven Rostedt,
	Dietmar Eggemann, Ben Segall, ACPI Devel Maling List,
	Ingo Molnar, Linuxarm, Mel Gorman, xuwei (O),
	gregkh, Will Deacon, Valentin Schneider, LAK, Cc: Len Brown

On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
> >
> > Sorry. Please ignore this. I added some printk here while testing
> > one numa. Will update you the data in another email.
>
> Re-tested in one NUMA node(cpu0-cpu23):
>
> g=1
> Running in threaded mode with 1 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> w/ but dropped select_idle_cluster:
>      7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611
>
> g=2
> Running in threaded mode with 2 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> 9.955=10.1006
> w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> w/ but dropped select_idle_cluster:
>      9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047
>
> g=3
> Running in threaded mode with 3 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> 15.721=15.7727
> w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> 14.895=14.9005
> w/ but dropped select_idle_cluster:
>      15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228 15.325=15.3591
>
> g=4
> Running in threaded mode with 4 groups using 40 file descriptors
> Each sender will pass 100000 messages of 100 bytes
> w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090 21.090=20.8187
> w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381 20.452=20.5647
> w/ but dropped select_idle_cluster:
>      19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226 20.562=20.2306
>

I assume that you have run this on v5.9, as in the previous tests.
The results don't show any real benefit of select_idle_cluster()
inside a node, whereas this is where we could expect most of the
benefit. We have to understand why we see such an impact on NUMA
tests only.

> Thanks
> Barry
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-03  9:03             ` Vincent Guittot
@ 2020-12-03  9:11               ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-03  9:11 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Thursday, December 3, 2020 10:04 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
> >
> > >
> > > Sorry. Please ignore this. I added some printk here while testing
> > > one numa. Will update you the data in another email.
> >
> > Re-tested in one NUMA node(cpu0-cpu23):
> >
> > g=1
> > Running in threaded mode with 1 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > w/ but dropped select_idle_cluster:
> >      7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611
> >
> > g=2
> > Running in threaded mode with 2 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > 9.955=10.1006
> > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > w/ but dropped select_idle_cluster:
> >      9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047
> >
> > g=3
> > Running in threaded mode with 3 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > 15.721=15.7727
> > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > 14.895=14.9005
> > w/ but dropped select_idle_cluster:
> >      15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228
> 15.325=15.3591
> >
> > g=4
> > Running in threaded mode with 4 groups using 40 file descriptors
> > Each sender will pass 100000 messages of 100 bytes
> > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> 21.090=20.8187
> > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> 20.452=20.5647
> > w/ but dropped select_idle_cluster:
> >      19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226
> 20.562=20.2306
> >
> 
> I assume that you have run this on v5.9 as previous tests.

Yep

> The results don't show any real benefit of select_idle_cluster()
> inside a node whereas this is where we could expect most of the
> benefit. We have to understand why we have such an impact on numa
> tests only.

There is a 4-5.5% improvement for g=2 and g=3.

Regarding the huge improvement in the NUMA case, at first I suspected
we had a wrong LLC domain. For example, if cpu0's LLC domain spanned
cpu0-cpu47, select_idle_cpu() would be scanning the wrong range when
it should only scan cpu0-cpu23.

But after printing the LLC domains' spans, I found they are completely
correct:
cpu0's LLC span: cpu0-cpu23
cpu24's LLC span: cpu24-cpu47

Maybe I need more trace data to figure out whether select_idle_cpu()
is behaving correctly. For example, maybe I can check whether it is
always returning -1, or returns -1 very often?

Or do you have any idea?
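
[Editor's note: one hedged way to quantify the return-value question
offline, assuming a temporary debug print such as
"select_idle_cpu: ret=%d" were added at the end of select_idle_cpu();
the log format here is purely hypothetical, not an existing tracepoint:]

```python
# Hedged sketch: count select_idle_cpu() hits vs -1 misses from a
# debug log. Assumes a hypothetical printk format
# "select_idle_cpu: ret=%d" added for the experiment.
import re
from collections import Counter

def count_returns(log_lines):
    counts = Counter()
    for line in log_lines:
        m = re.search(r"select_idle_cpu: ret=(-?\d+)", line)
        if m:
            counts["miss" if int(m.group(1)) == -1 else "hit"] += 1
    return counts

sample = [
    "[  12.3] select_idle_cpu: ret=-1",
    "[  12.4] select_idle_cpu: ret=5",
    "[  12.5] select_idle_cpu: ret=-1",
]
print(count_returns(sample))  # Counter({'miss': 2, 'hit': 1})
```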


> 
> > Thanks
> > Barry

Thanks
Barry


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-01 16:04     ` Valentin Schneider
@ 2020-12-03  9:28       ` Peter Zijlstra
  -1 siblings, 0 replies; 48+ messages in thread
From: Peter Zijlstra @ 2020-12-03  9:28 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: Barry Song, catalin.marinas, will, rjw, lenb, gregkh,
	Jonathan.Cameron, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi, linuxarm, xuwei5,
	prime.zeng

On Tue, Dec 01, 2020 at 04:04:04PM +0000, Valentin Schneider wrote:
> 
> Gating this behind this new config only leveraged by arm64 doesn't make it
> very generic. Note that powerpc also has this newish "CACHE" level which
> seems to overlap in function with your "CLUSTER" one (both are arch
> specific, though).
> 
> I think what you are after here is an SD_SHARE_PKG_RESOURCES domain walk,
> i.e. scan CPUs by increasing cache "distance". We already have it in some
> form, as we scan SMT & LLC domains; AFAICT LLC always maps to MC, except
> for said powerpc's CACHE thingie.

There are some Intel chips with a smaller L2, but I don't think we
ever bothered.

There's also the extended topology stuff from Intel: SMT, Core, Module,
Tile, Die, of which we've only partially used Die, I think.

Whatever we do, it might make sense for us all not to use different
names.

Also, I think Mel said he was cooking something for
select_idle_balance().

Also, I've previously posted patches that fold all the iterations into
one, so it might make sense to revisit some of that if Mel hasn't
already.

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-03  9:11               ` Song Bao Hua (Barry Song)
@ 2020-12-03  9:39                 ` Vincent Guittot
  -1 siblings, 0 replies; 48+ messages in thread
From: Vincent Guittot @ 2020-12-03  9:39 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)

On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > Sent: Thursday, December 3, 2020 10:04 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> >
> > On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
> > <song.bao.hua@hisilicon.com> wrote:
> > >
> > > >
> > > > Sorry. Please ignore this. I added some printk here while testing
> > > > one numa. Will update you the data in another email.
> > >
> > > Re-tested in one NUMA node(cpu0-cpu23):
> > >
> > > g=1
> > > Running in threaded mode with 1 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > w/ but dropped select_idle_cluster:
> > >      7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611
> > >
> > > g=2
> > > Running in threaded mode with 2 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > > 9.955=10.1006
> > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > w/ but dropped select_idle_cluster:
> > >      9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047
> > >
> > > g=3
> > > Running in threaded mode with 3 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > > 15.721=15.7727
> > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > > 14.895=14.9005
> > > w/ but dropped select_idle_cluster:
> > >      15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228
> > 15.325=15.3591
> > >
> > > g=4
> > > Running in threaded mode with 4 groups using 40 file descriptors
> > > Each sender will pass 100000 messages of 100 bytes
> > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > 21.090=20.8187
> > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > 20.452=20.5647
> > > w/ but dropped select_idle_cluster:
> > >      19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226
> > 20.562=20.2306
> > >
> >
> > I assume that you have run this on v5.9 as previous tests.
>
> Yep
>
> > The results don't show any real benefit of select_idle_cluster()
> > inside a node whereas this is where we could expect most of the
> > benefit. We have to understand why we have such an impact on numa
> > tests only.
>
> There is a 4-5.5% increase while g=2 and g=3.

My point was comparing with vs without select_idle_cluster(), but
still having a cluster domain level. In this case, the diff is -0.8%
for g=1, +2.2% for g=2, +3% for g=3 and -1.7% for g=4.
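
[Editor's note: the quoted averages and percentage diffs can be
cross-checked mechanically; a small sketch, with the run data copied
from the report above (hackbench completion time in seconds, lower is
better):]

```python
# Cross-check the reported hackbench numbers.

# Raw g=1 "w/o" runs from the report; their mean matches the
# reported 7.5853.
g1_wo = [7.689, 7.485, 7.485, 7.458, 7.524, 7.539, 7.738, 7.693, 7.568, 7.674]
mean_g1_wo = sum(g1_wo) / len(g1_wo)

# Reported means per group count: (with select_idle_cluster,
# with cluster level but select_idle_cluster dropped).
means = {
    1: (7.6697, 7.611),
    2: (9.6878, 9.9047),
    3: (14.9005, 15.3591),
    4: (20.5647, 20.2306),
}

# Positive diff => select_idle_cluster() is faster than the variant
# that drops it.
diffs = {g: round((base - with_sic) / base * 100, 1)
         for g, (with_sic, base) in means.items()}

print(round(mean_g1_wo, 4))  # 7.5853
print(diffs)                 # {1: -0.8, 2: 2.2, 3: 3.0, 4: -1.7}
```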

>
> Regarding the huge increase in NUMA case,  at the first beginning, I suspect
> we have wrong llc domain. For example, if cpu0's llc domain span
> cpu0-cpu47, then select_idle_cpu() is running in wrong range while
> it should run in cpu0-cpu23.
>
> But after printing the llc domain's span, I find it is completely right.
> Cpu0's llc span: cpu0-cpu23
> Cpu24's llc span: cpu24-cpu47

Have you checked that the cluster mask was also correct ?

>
> Maybe I need more trace data to figure out if select_idle_cpu() is running
> correctly. For example, maybe I can figure out if it is always returning -1,
> or it returns -1 very often?

yes, it could be interesting to check how often select_idle_cpu() returns -1

>
> Or do you have any idea?

tracking migrations across nodes could help to understand this too

Vincent
>
>
> >
> > > Thanks
> > > Barry
>
> Thanks
> Barry
>

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-03  9:28       ` Peter Zijlstra
@ 2020-12-03  9:49         ` Mel Gorman
  -1 siblings, 0 replies; 48+ messages in thread
From: Mel Gorman @ 2020-12-03  9:49 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Valentin Schneider, Barry Song, catalin.marinas, will, rjw, lenb,
	gregkh, Jonathan.Cameron, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi, linuxarm, xuwei5,
	prime.zeng

On Thu, Dec 03, 2020 at 10:28:31AM +0100, Peter Zijlstra wrote:
> On Tue, Dec 01, 2020 at 04:04:04PM +0000, Valentin Schneider wrote:
> > 
> > Gating this behind this new config only leveraged by arm64 doesn't make it
> > very generic. Note that powerpc also has this newish "CACHE" level which
> > seems to overlap in function with your "CLUSTER" one (both are arch
> > specific, though).
> > 
> > I think what you are after here is an SD_SHARE_PKG_RESOURCES domain walk,
> > i.e. scan CPUs by increasing cache "distance". We already have it in some
> > form, as we scan SMT & LLC domains; AFAICT LLC always maps to MC, except
> > for said powerpc's CACHE thingie.
> 
> There's some intel chips with a smaller L2, but I don't think we ever
> bothered.
> 
> There's also the extended topology stuff from Intel: SMT, Core, Module,
> Tile, Die, of which we've only partially used Die I think.
> 
> Whatever we do, it might make sense to not all use different names.
> 
> Also, I think Mel said he was cooking something for
> select_idle_balance().
> 
> Also, I've previously posted patches that fold all the iterations into
> one, so it might make sense to revisit some of that if Mel hasn't
> already.

I didn't. The NUMA/lb reconciliation took most of my attention, and
right now I'm looking at select_idle_sibling() again in preparation
for tracking the idle cpumask in a sensible fashion. The main idea I
had in mind was special-casing EPYC, as it has multiple small L3
caches that may benefit from select_idle_sibling() looking slightly
beyond the LLC as a search domain, but that work has not even started
yet.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-03  9:39                 ` Vincent Guittot
@ 2020-12-03  9:54                   ` Vincent Guittot
  -1 siblings, 0 replies; 48+ messages in thread
From: Vincent Guittot @ 2020-12-03  9:54 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)

On Thu, 3 Dec 2020 at 10:39, Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > Sent: Thursday, December 3, 2020 10:04 PM
> > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > >
> > > On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
> > > <song.bao.hua@hisilicon.com> wrote:
> > > >
> > > > >
> > > > > Sorry. Please ignore this. I added some printk here while testing
> > > > > one numa. Will update you the data in another email.
> > > >
> > > > Re-tested in one NUMA node(cpu0-cpu23):
> > > >
> > > > g=1
> > > > Running in threaded mode with 1 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > > w/ but dropped select_idle_cluster:
> > > >      7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611
> > > >
> > > > g=2
> > > > Running in threaded mode with 2 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > > > 9.955=10.1006
> > > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > > w/ but dropped select_idle_cluster:
> > > >      9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047
> > > >
> > > > g=3
> > > > Running in threaded mode with 3 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > > > 15.721=15.7727
> > > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > > > 14.895=14.9005
> > > > w/ but dropped select_idle_cluster:
> > > >      15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228
> > > 15.325=15.3591
> > > >
> > > > g=4
> > > > Running in threaded mode with 4 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > > 21.090=20.8187
> > > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > > 20.452=20.5647
> > > > w/ but dropped select_idle_cluster:
> > > >      19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226
> > > 20.562=20.2306
> > > >
> > >
> > > I assume that you have run this on v5.9 as previous tests.
> >
> > Yep
> >
> > > The results don't show any real benefit of select_idle_cluster()
> > > inside a node whereas this is where we could expect most of the
> > > benefit. We have to understand why we have such an impact on numa
> > > tests only.
> >
> > There is a 4-5.5% increase for g=2 and g=3.
>
> my point was with vs without select_idle_cluster() but still having a
> cluster domain level.
> In this case, the diff is -0.8% for g=1, +2.2% for g=2, +3% for g=3 and
> -1.7% for g=4.
>
> >
> > Regarding the huge increase in the NUMA case, at first I suspected
> > we had a wrong llc domain. For example, if cpu0's llc domain spanned
> > cpu0-cpu47, then select_idle_cpu() would be running in the wrong range
> > while it should run in cpu0-cpu23.
> >
> > But after printing the llc domain's span, I find it is completely right.
> > Cpu0's llc span: cpu0-cpu23
> > Cpu24's llc span: cpu24-cpu47
>
> Have you checked that the cluster mask was also correct ?
>
> >
> > Maybe I need more trace data to figure out if select_idle_cpu() is running
> > correctly. For example, maybe I can figure out if it is always returning -1,
> > or it returns -1 very often?
>
> yes, could be interesting to check how often select_idle_cpu() returns -1
>
> >
> > Or do you have any idea?
>
> tracking migrations across nodes could help to understand too

The v6 of https://lkml.org/lkml/2020/11/26/187 might also help you

>
> Vincent
> >
> >
> > >
> > > > Thanks
> > > > Barry
> >
> > Thanks
> > Barry
> >

^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-03  9:28       ` Peter Zijlstra
@ 2020-12-03  9:57         ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-03  9:57 UTC (permalink / raw)
  To: Peter Zijlstra, Valentin Schneider
  Cc: catalin.marinas, will, rjw, lenb, gregkh, Jonathan Cameron,
	mingo, juri.lelli, vincent.guittot, dietmar.eggemann, rostedt,
	bsegall, mgorman, mark.rutland, linux-arm-kernel, linux-kernel,
	linux-acpi, Linuxarm, xuwei (O), Zengtao (B)



> -----Original Message-----
> From: Peter Zijlstra [mailto:peterz@infradead.org]
> Sent: Thursday, December 3, 2020 10:29 PM
> To: Valentin Schneider <valentin.schneider@arm.com>
> Cc: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> catalin.marinas@arm.com; will@kernel.org; rjw@rjwysocki.net; lenb@kernel.org;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> mingo@redhat.com; juri.lelli@redhat.com; vincent.guittot@linaro.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de; mark.rutland@arm.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; Linuxarm
> <linuxarm@huawei.com>; xuwei (O) <xuwei5@huawei.com>; Zengtao (B)
> <prime.zeng@hisilicon.com>
> Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> On Tue, Dec 01, 2020 at 04:04:04PM +0000, Valentin Schneider wrote:
> >
> > Gating this behind this new config only leveraged by arm64 doesn't make it
> > very generic. Note that powerpc also has this newish "CACHE" level which
> > seems to overlap in function with your "CLUSTER" one (both are arch
> > specific, though).
> >
> > I think what you are after here is an SD_SHARE_PKG_RESOURCES domain walk,
> > i.e. scan CPUs by increasing cache "distance". We already have it in some
> > form, as we scan SMT & LLC domains; AFAICT LLC always maps to MC, except
> > for said powerpc's CACHE thingie.
> 
> There's some intel chips with a smaller L2, but I don't think we ever
> bothered.
> 
> There's also the extended topology stuff from Intel: SMT, Core, Module,
> Tile, Die, of which we've only partially used Die I think.
> 
> Whatever we do, it might make sense to not all use different names.

Yep. Valentin was actually recommending keying off the same
SD_SHARE_PKG_RESOURCES sd flag and ignoring the actual hardware names.
But the question is where we should start: in case we have 3 domains under
the LLC, it may not be good to scan from the first-level domain, as that
gathers too much.

> 
> Also, I think Mel said he was cooking something for
> select_idle_balance().
> 
> Also, I've previously posted patches that fold all the iterations into
> one, so it might make sense to revisit some of that if Mel also already
> didn't

Would you point out the link of your previous patches?

Thanks
Barry


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-03  9:57         ` Song Bao Hua (Barry Song)
@ 2020-12-03 10:07           ` Peter Zijlstra
  -1 siblings, 0 replies; 48+ messages in thread
From: Peter Zijlstra @ 2020-12-03 10:07 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: Valentin Schneider, catalin.marinas, will, rjw, lenb, gregkh,
	Jonathan Cameron, mingo, juri.lelli, vincent.guittot,
	dietmar.eggemann, rostedt, bsegall, mgorman, mark.rutland,
	linux-arm-kernel, linux-kernel, linux-acpi, Linuxarm, xuwei (O),
	Zengtao (B)

On Thu, Dec 03, 2020 at 09:57:09AM +0000, Song Bao Hua (Barry Song) wrote:

> Would you point out the link of your previous patches?

https://lkml.kernel.org/r/20180530142236.667774973@infradead.org


^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-03  9:39                 ` Vincent Guittot
@ 2020-12-07  9:59                   ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-07  9:59 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Thursday, December 3, 2020 10:39 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > Sent: Thursday, December 3, 2020 10:04 PM
> > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>;
> > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> <dietmar.eggemann@arm.com>;
> > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > >
> > > On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
> > > <song.bao.hua@hisilicon.com> wrote:
> > > >
> > > > >
> > > > > Sorry. Please ignore this. I added some printk here while testing
> > > > > one numa. Will update you the data in another email.
> > > >
> > > > Re-tested in one NUMA node(cpu0-cpu23):
> > > >
> > > > g=1
> > > > Running in threaded mode with 1 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > > w/ but dropped select_idle_cluster:
> > > >      7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611
> > > >
> > > > g=2
> > > > Running in threaded mode with 2 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > > > 9.955=10.1006
> > > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > > w/ but dropped select_idle_cluster:
> > > >      9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047
> > > >
> > > > g=3
> > > > Running in threaded mode with 3 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > > > 15.721=15.7727
> > > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > > > 14.895=14.9005
> > > > w/ but dropped select_idle_cluster:
> > > >      15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228
> > > 15.325=15.3591
> > > >
> > > > g=4
> > > > Running in threaded mode with 4 groups using 40 file descriptors
> > > > Each sender will pass 100000 messages of 100 bytes
> > > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > > 21.090=20.8187
> > > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > > 20.452=20.5647
> > > > w/ but dropped select_idle_cluster:
> > > >      19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226
> > > 20.562=20.2306
> > > >
> > >
> > > I assume that you have run this on v5.9 as previous tests.
> >
> > Yep
> >
> > > The results don't show any real benefit of select_idle_cluster()
> > > inside a node whereas this is where we could expect most of the
> > > benefit. We have to understand why we have such an impact on numa
> > > tests only.
> >
> > There is a 4-5.5% increase for g=2 and g=3.
> 
> my point was with vs without select_idle_cluster() but still having a
> cluster domain level.
> In this case, the diff is -0.8% for g=1, +2.2% for g=2, +3% for g=3 and
> -1.7% for g=4.
> 
> >
> > Regarding the huge increase in the NUMA case, at first I suspected
> > we had a wrong llc domain. For example, if cpu0's llc domain spanned
> > cpu0-cpu47, then select_idle_cpu() would be running in the wrong range
> > while it should run in cpu0-cpu23.
> >
> > But after printing the llc domain's span, I find it is completely right.
> > Cpu0's llc span: cpu0-cpu23
> > Cpu24's llc span: cpu24-cpu47
> 
> Have you checked that the cluster mask was also correct ?
> 
> >
> > Maybe I need more trace data to figure out if select_idle_cpu() is running
> > correctly. For example, maybe I can figure out if it is always returning -1,
> > or it returns -1 very often?
> 
> yes, could be interesting to check how often select_idle_cpu() returns -1
> 
> >
> > Or do you have any idea?
> 
> tracking migrations across nodes could help to understand too

I had set the boot argument mem=4G to do a swapping test before working on
the cluster scheduler issue, but I forgot to remove the parameter.

The huge increase in the across-NUMA case can only be reproduced while
I use this mem=4G cmdline, which means numa1 has no memory.
After removing the limitation, I can't reproduce the huge increase
for two NUMA nodes any more.

I guess select_idle_cluster() somehow works around a scheduler issue
for a NUMA node without memory.

> 
> Vincent
> >
> >

Thanks
Barry


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-07  9:59                   ` Song Bao Hua (Barry Song)
@ 2020-12-07 15:29                     ` Vincent Guittot
  -1 siblings, 0 replies; 48+ messages in thread
From: Vincent Guittot @ 2020-12-07 15:29 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)

On Mon, 7 Dec 2020 at 10:59, Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > Sent: Thursday, December 3, 2020 10:39 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> >
> > On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song)
> > <song.bao.hua@hisilicon.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > > Sent: Thursday, December 3, 2020 10:04 PM
> > > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > > gregkh@linuxfoundation.org; Jonathan Cameron
> > <jonathan.cameron@huawei.com>;
> > > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> > <dietmar.eggemann@arm.com>;
> > > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > > >
> > > > On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
> > > > <song.bao.hua@hisilicon.com> wrote:
> > > > >
> > > > > >
> > > > > > Sorry. Please ignore this. I added some printk here while testing
> > > > > > one numa. Will update you the data in another email.
> > > > >
> > > > > Re-tested in one NUMA node(cpu0-cpu23):
> > > > >
> > > > > g=1
> > > > > Running in threaded mode with 1 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > > > w/ but dropped select_idle_cluster:
> > > > >      7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611
> > > > >
> > > > > g=2
> > > > > Running in threaded mode with 2 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > > > > 9.955=10.1006
> > > > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > > > w/ but dropped select_idle_cluster:
> > > > >      9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047
> > > > >
> > > > > g=3
> > > > > Running in threaded mode with 3 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > > > > 15.721=15.7727
> > > > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > > > > 14.895=14.9005
> > > > > w/ but dropped select_idle_cluster:
> > > > >      15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228
> > > > 15.325=15.3591
> > > > >
> > > > > g=4
> > > > > Running in threaded mode with 4 groups using 40 file descriptors
> > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > > > 21.090=20.8187
> > > > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > > > 20.452=20.5647
> > > > > w/ but dropped select_idle_cluster:
> > > > >      19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226
> > > > 20.562=20.2306
> > > > >
> > > >
> > > > I assume that you have run this on v5.9 as previous tests.
> > >
> > > Yep
> > >
> > > > The results don't show any real benefit of select_idle_cluster()
> > > > inside a node whereas this is where we could expect most of the
> > > > benefit. We have to understand why we have such an impact on numa
> > > > tests only.
> > >
> > > There is a 4-5.5% improvement with g=2 and g=3.
> >
> > my point was with vs without select_idle_cluster() but still having a
> > cluster domain level.
> > In this case, the diff is -0.8% for g=1, +2.2% for g=2, +3% for g=3 and
> > -1.7% for g=4.
> >
> > >
> > > Regarding the huge increase in the NUMA case, at first I suspected
> > > we had a wrong llc domain. For example, if cpu0's llc domain spanned
> > > cpu0-cpu47, then select_idle_cpu() would be scanning the wrong range while
> > > it should run in cpu0-cpu23.
> > >
> > > But after printing the llc domain's span, I find it is completely right.
> > > Cpu0's llc span: cpu0-cpu23
> > > Cpu24's llc span: cpu24-cpu47
> >
> > Have you checked that the cluster mask was also correct ?
> >
> > >
> > > Maybe I need more trace data to figure out if select_idle_cpu() is running
> > > correctly. For example, maybe I can figure out if it is always returning -1,
> > > or it returns -1 very often?
> >
> > yes, it could be interesting to check how often select_idle_cpu() returns -1
> >
> > >
> > > Or do you have any idea?
> >
> > tracking migration across nodes could help to understand too
>
> I had set the boot argument mem=4G for a swapping test before working on
> the cluster scheduler issue, but I forgot to remove the parameter.
>
> The huge increase in the across-NUMA case can only be reproduced while
> I use this mem=4G cmdline, which leaves node 1 with no memory.
> After removing the limitation, I can't reproduce the huge increase
> across two NUMA nodes any more.

OK, that makes more sense.

>
> I guess select_idle_cluster() somehow works around a scheduler issue
> for a NUMA node without memory.
>
> >
> > Vincent
> > >
> > >
>
> Thanks
> Barry
>

^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
  2020-12-07 15:29                     ` Vincent Guittot
@ 2020-12-09 11:35                       ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-09 11:35 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Valentin Schneider, Catalin Marinas, Will Deacon,
	Rafael J. Wysocki, Cc: Len Brown, gregkh, Jonathan Cameron,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Mark Rutland, LAK,
	linux-kernel, ACPI Devel Maling List, Linuxarm, xuwei (O),
	Zengtao (B)



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Tuesday, December 8, 2020 4:29 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> On Mon, 7 Dec 2020 at 10:59, Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > Sent: Thursday, December 3, 2020 10:39 PM
> > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>;
> > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> <dietmar.eggemann@arm.com>;
> > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > >
> > > On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song)
> > > <song.bao.hua@hisilicon.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > > > Sent: Thursday, December 3, 2020 10:04 PM
> > > > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J.
> Wysocki
> > > > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > > > gregkh@linuxfoundation.org; Jonathan Cameron
> > > <jonathan.cameron@huawei.com>;
> > > > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>;
> Juri
> > > > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> > > <dietmar.eggemann@arm.com>;
> > > > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>;
> Mel
> > > > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei
> (O)
> > > > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > > > >
> > > > > On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
> > > > > <song.bao.hua@hisilicon.com> wrote:
> > > > > >
> > > > > > >
> > > > > > > Sorry. Please ignore this. I added some printk here while testing
> > > > > > > one numa. Will update you the data in another email.
> > > > > >
> > > > > > Re-tested in one NUMA node(cpu0-cpu23):
> > > > > >
> > > > > > g=1
> > > > > > Running in threaded mode with 1 groups using 40 file descriptors
> > > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > > > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > > > > w/ but dropped select_idle_cluster:
> > > > > >      7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611
> > > > > >
> > > > > > g=2
> > > > > > Running in threaded mode with 2 groups using 40 file descriptors
> > > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > > > > > 9.955=10.1006
> > > > > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > > > > w/ but dropped select_idle_cluster:
> > > > > >      9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047
> > > > > >
> > > > > > g=3
> > > > > > Running in threaded mode with 3 groups using 40 file descriptors
> > > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > > > > > 15.721=15.7727
> > > > > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > > > > > 14.895=14.9005
> > > > > > w/ but dropped select_idle_cluster:
> > > > > >      15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228
> > > > > 15.325=15.3591
> > > > > >
> > > > > > g=4
> > > > > > Running in threaded mode with 4 groups using 40 file descriptors
> > > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > > > > 21.090=20.8187
> > > > > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > > > > 20.452=20.5647
> > > > > > w/ but dropped select_idle_cluster:
> > > > > >      19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226
> > > > > 20.562=20.2306
> > > > > >
> > > > >
> > > > > I assume that you have run this on v5.9 as previous tests.
> > > >
> > > > Yep
> > > >
> > > > > The results don't show any real benefit of select_idle_cluster()
> > > > > inside a node whereas this is where we could expect most of the
> > > > > benefit. We have to understand why we have such an impact on numa
> > > > > tests only.
> > > >
> > > > There is a 4-5.5% improvement with g=2 and g=3.
> > >
> > > my point was with vs without select_idle_cluster() but still having a
> > > cluster domain level.
> > > In this case, the diff is -0.8% for g=1, +2.2% for g=2, +3% for g=3 and
> > > -1.7% for g=4.
> > >
> > > >
> > > > Regarding the huge increase in the NUMA case, at first I suspected
> > > > we had a wrong llc domain. For example, if cpu0's llc domain spanned
> > > > cpu0-cpu47, then select_idle_cpu() would be scanning the wrong range while
> > > > it should run in cpu0-cpu23.
> > > >
> > > > But after printing the llc domain's span, I find it is completely right.
> > > > Cpu0's llc span: cpu0-cpu23
> > > > Cpu24's llc span: cpu24-cpu47
> > >
> > > Have you checked that the cluster mask was also correct ?
> > >
> > > >
> > > > Maybe I need more trace data to figure out if select_idle_cpu() is running
> > > > correctly. For example, maybe I can figure out if it is always returning
> -1,
> > > > or it returns -1 very often?
> > >
> > > yes, it could be interesting to check how often select_idle_cpu() returns -1
> > >
> > > >
> > > > Or do you have any idea?
> > >
> > > tracking migration across nodes could help to understand too
> >
> > I had set the boot argument mem=4G for a swapping test before working on
> > the cluster scheduler issue, but I forgot to remove the parameter.
> >
> > The huge increase in the across-NUMA case can only be reproduced while
> > I use this mem=4G cmdline, which leaves node 1 with no memory.
> > After removing the limitation, I can't reproduce the huge increase
> > across two NUMA nodes any more.
> 
> OK, that makes more sense.

I managed to use linux-next to test after fixing the disk hang.

But I am still struggling with how to leverage the cluster
topology in select_idle_cpu() to get a large improvement on benchmarks.

If I remove the scheduler's influence by using taskset, there is
obviously a large difference between running hackbench inside a cluster
and across clusters:

inside a cluster:
root@ubuntu:~# taskset -c 0,1,2,3 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 4.285

Across clusters:
root@ubuntu:~# taskset -c 0,4,8,12 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 5.524

But no matter how I tune the code in kernel/sched/fair.c, I
don't see this large difference when running hackbench on the
whole NUMA node:
for i in {1..10}
do
	numactl -N 0 hackbench -p -T -l 20000 -g $i
done

Usually, the difference is within -5% to +5%.

Then I made a major change as below:
static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
{
	...


	time = cpu_clock(this);

#if 0
	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

	for_each_cpu_wrap(cpu, cpus, target) {
		if (!--nr)
			return -1;
		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
			break;
	}
#else
	if ((cpu = select_idle_cluster(p, target)) == -1)
		return -1;
#endif

	time = cpu_clock(this) - time;
	update_avg(&this_sd->avg_scan_cost, time);

	return cpu;
}
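
select_idle_cluster() itself is not shown in this thread; below is a rough
user-space sketch of the scan it is assumed to do. The cluster size, the
cpu_is_idle() helper, and the function signature are all stand-ins for the
real kernel code (which would use cpumasks and available_idle_cpu()):

```c
#define CLUSTER_SIZE 4  /* Kunpeng 920: 4 CPUs per cluster */

/* Stand-in for available_idle_cpu(); in the kernel this inspects the
 * CPU's runqueue. Here it just reads a caller-provided array. */
static int cpu_is_idle(const int *idle, int cpu)
{
	return idle[cpu];
}

/* Sketch of select_idle_cluster(): scan only the CPUs in target's
 * cluster and return -1 when none is idle, i.e. no fallback to the
 * wider LLC scan, matching the #else branch above. */
static int select_idle_cluster(const int *idle, int target)
{
	int first = (target / CLUSTER_SIZE) * CLUSTER_SIZE;
	int cpu;

	for (cpu = first; cpu < first + CLUSTER_SIZE; cpu++)
		if (cpu_is_idle(idle, cpu))
			return cpu;
	return -1;  /* cluster fully busy */
}
```

With CPUs 0-3 forming one cluster, an idle CPU 2 is found for target 0,
while a fully busy cluster yields -1 and thus an immediate return from the
modified select_idle_cpu() above.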

That means I don't fall back to the LLC scan if the cluster has no
idle CPU.

With this, I am getting the ~20% difference I was expecting:

g=     1      2        3        4        5        6        7      8       9        10
w/o  1.5494 2.0641 3.1640 4.2438 5.3445 6.3098 7.5086 8.4721 9.7115  10.8588
w/   1.6801 2.0280 2.7890 3.7339 4.5748 5.2998 6.1413 6.6206 7.7641  8.4782

I guess my original patch falls back to the LLC very easily, as
the cluster is rarely fully idle. Once the system is busy, the original
patch is a nop since it is always falling back to the LLC.
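
As a sanity check, the ~20% figure can be recomputed from the table above
(runtimes copied from this mail; lower is better):

```c
/* hackbench runtimes for g = 1..10, copied from the table in this mail. */
static const double without_patch[10] = {
	1.5494, 2.0641, 3.1640, 4.2438, 5.3445,
	6.3098, 7.5086, 8.4721, 9.7115, 10.8588,
};
static const double with_patch[10] = {
	1.6801, 2.0280, 2.7890, 3.7339, 4.5748,
	5.2998, 6.1413, 6.6206, 7.7641, 8.4782,
};

/* Percentage reduction in runtime: positive means the patched kernel wins. */
static double improvement_pct(double without, double with)
{
	return (without - with) / without * 100.0;
}
```

This yields roughly -8% at g=1 (the patch loses when the machine is mostly
idle) rising to about +20-22% at g=9 and g=10.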

> 
> >
> > I guess select_idle_cluster() somehow works around a scheduler issue
> > for a NUMA node without memory.
> >
> > >
> > > Vincent

Thanks
Barry


^ permalink raw reply	[flat|nested] 48+ messages in thread

* RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
@ 2020-12-09 11:35                       ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 48+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-09 11:35 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Juri Lelli, Mark Rutland, Zengtao (B),
	Peter Zijlstra, Catalin Marinas, Jonathan Cameron,
	Rafael J. Wysocki, linux-kernel, Steven Rostedt,
	Dietmar Eggemann, Ben Segall, ACPI Devel Maling List,
	Ingo Molnar, Linuxarm, Mel Gorman, xuwei (O),
	gregkh, Will Deacon, Valentin Schneider, LAK, Cc: Len Brown



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Tuesday, December 8, 2020 4:29 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> Lelli <juri.lelli@redhat.com>; Dietmar Eggemann <dietmar.eggemann@arm.com>;
> Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> <linux-arm-kernel@lists.infradead.org>; linux-kernel
> <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> 
> On Mon, 7 Dec 2020 at 10:59, Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > Sent: Thursday, December 3, 2020 10:39 PM
> > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>;
> > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Juri
> > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> <dietmar.eggemann@arm.com>;
> > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>; Mel
> > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei (O)
> > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > >
> > > On Thu, 3 Dec 2020 at 10:11, Song Bao Hua (Barry Song)
> > > <song.bao.hua@hisilicon.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> > > > > Sent: Thursday, December 3, 2020 10:04 PM
> > > > > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > > > > Cc: Valentin Schneider <valentin.schneider@arm.com>; Catalin Marinas
> > > > > <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J.
> Wysocki
> > > > > <rjw@rjwysocki.net>; Cc: Len Brown <lenb@kernel.org>;
> > > > > gregkh@linuxfoundation.org; Jonathan Cameron
> > > <jonathan.cameron@huawei.com>;
> > > > > Ingo Molnar <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>;
> Juri
> > > > > Lelli <juri.lelli@redhat.com>; Dietmar Eggemann
> > > <dietmar.eggemann@arm.com>;
> > > > > Steven Rostedt <rostedt@goodmis.org>; Ben Segall <bsegall@google.com>;
> Mel
> > > > > Gorman <mgorman@suse.de>; Mark Rutland <mark.rutland@arm.com>; LAK
> > > > > <linux-arm-kernel@lists.infradead.org>; linux-kernel
> > > > > <linux-kernel@vger.kernel.org>; ACPI Devel Maling List
> > > > > <linux-acpi@vger.kernel.org>; Linuxarm <linuxarm@huawei.com>; xuwei
> (O)
> > > > > <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>
> > > > > Subject: Re: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters
> > > > >
> > > > > On Wed, 2 Dec 2020 at 21:58, Song Bao Hua (Barry Song)
> > > > > <song.bao.hua@hisilicon.com> wrote:
> > > > > >
> > > > > > >
> > > > > > > Sorry. Please ignore this. I added some printk here while testing
> > > > > > > one numa. Will update you the data in another email.
> > > > > >
> > > > > > Re-tested in one NUMA node(cpu0-cpu23):
> > > > > >
> > > > > > g=1
> > > > > > Running in threaded mode with 1 groups using 40 file descriptors
> > > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > > w/o: 7.689 7.485 7.485 7.458 7.524 7.539 7.738 7.693 7.568 7.674=7.5853
> > > > > > w/ : 7.516 7.941 7.374 7.963 7.881 7.910 7.420 7.556 7.695 7.441=7.6697
> > > > > > w/ but dropped select_idle_cluster:
> > > > > >      7.752 7.739 7.739 7.571 7.545 7.685 7.407 7.580 7.605 7.487=7.611
> > > > > >
> > > > > > g=2
> > > > > > Running in threaded mode with 2 groups using 40 file descriptors
> > > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > > w/o: 10.127 10.119 10.070 10.196 10.057 10.111 10.045 10.164 10.162
> > > > > > 9.955=10.1006
> > > > > > w/ : 9.694 9.654 9.612 9.649 9.686 9.734 9.607 9.842 9.690 9.710=9.6878
> > > > > > w/ but dropped select_idle_cluster:
> > > > > >      9.877 10.069 9.951 9.918 9.947 9.790 9.906 9.820 9.863 9.906=9.9047
> > > > > >
> > > > > > g=3
> > > > > > Running in threaded mode with 3 groups using 40 file descriptors
> > > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > > w/o: 15.885 15.254 15.932 15.647 16.120 15.878 15.857 15.759 15.674
> > > > > > 15.721=15.7727
> > > > > > w/ : 14.974 14.657 13.969 14.985 14.728 15.665 15.191 14.995 14.946
> > > > > > 14.895=14.9005
> > > > > > w/ but dropped select_idle_cluster:
> > > > > >      15.405 15.177 15.373 15.187 15.450 15.540 15.278 15.628 15.228
> > > > > > 15.325=15.3591
> > > > > >
> > > > > > g=4
> > > > > > Running in threaded mode with 4 groups using 40 file descriptors
> > > > > > Each sender will pass 100000 messages of 100 bytes
> > > > > > w/o: 20.014 21.025 21.119 21.235 19.767 20.971 20.962 20.914 21.090
> > > > > > 21.090=20.8187
> > > > > > w/ : 20.331 20.608 20.338 20.445 20.456 20.146 20.693 20.797 21.381
> > > > > > 20.452=20.5647
> > > > > > w/ but dropped select_idle_cluster:
> > > > > >      19.814 20.126 20.229 20.350 20.750 20.404 19.957 19.888 20.226
> > > > > > 20.562=20.2306
> > > > > >
> > > > >
> > > > > I assume that you have run this on v5.9 as in the previous tests.
> > > >
> > > > Yep
> > > >
> > > > > The results don't show any real benefit of select_idle_cluster()
> > > > > inside a node whereas this is where we could expect most of the
> > > > > benefit. We have to understand why we have such an impact on numa
> > > > > tests only.
> > > >
> > > > There is a 4-5.5% increase for g=2 and g=3.
> > >
> > > My point was with vs. without select_idle_cluster(), but still having
> > > a cluster domain level.
> > > In this case, the diff is -0.8% for g=1, +2.2% for g=2, +3% for g=3
> > > and -1.7% for g=4.
> > >
> > > >
> > > > Regarding the huge increase in the NUMA case, at first I suspected
> > > > we had the wrong llc domain. For example, if cpu0's llc domain spanned
> > > > cpu0-cpu47, then select_idle_cpu() would be scanning the wrong range
> > > > while it should run in cpu0-cpu23.
> > > >
> > > > But after printing the llc domain's span, I found it is completely right:
> > > > cpu0's llc span: cpu0-cpu23
> > > > cpu24's llc span: cpu24-cpu47
> > >
> > > Have you checked that the cluster mask was also correct?
> > >
> > > >
> > > > Maybe I need more trace data to figure out whether select_idle_cpu()
> > > > is running correctly. For example, I could check whether it is always
> > > > returning -1, or returns -1 very often.
> > >
> > > yes, it could be interesting to check how often select_idle_cpu() returns -1
> > >
> > > >
> > > > Or do you have any idea?
> > >
> > > tracking migrations across nodes could help to understand too
> >
> > I had set mem=4G in the boot arguments to do a swapping test before
> > working on the cluster scheduler issue, but I forgot to remove the
> > parameter.
> >
> > The huge increase in the across-NUMA case can only be reproduced while
> > I use this mem=4G cmdline, which means NUMA node 1 has no memory.
> > After removing that limitation, I can't reproduce the huge increase
> > across two NUMA nodes any more.
> 
> Ok. Makes more sense.

I managed to use linux-next to test after fixing the disk hang.

But I am still struggling with how to leverage the cluster topology
in select_idle_cpu() to get a big improvement in the benchmark.

If I remove the scheduler's influence by pinning with taskset, there
is obviously a large difference in hackbench between running inside a
cluster and across clusters:

inside a cluster:
root@ubuntu:~# taskset -c 0,1,2,3 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 4.285

Across clusters:
root@ubuntu:~# taskset -c 0,4,8,12 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 5.524

But no matter how I tune the code in kernel/sched/fair.c, I
don't see this large difference when running hackbench across the
whole NUMA node:
for i in {1..10}
do
	numactl -N 0 hackbench -p -T -l 20000 -g $1
done

Usually, the difference is within -5% to +5%.
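(As an aside, the averages reported earlier, like "=7.5853", can be
produced mechanically from the loop above. This is a hypothetical
helper fed with sample numbers rather than live hackbench output:)

```shell
# Average the "Time:" values printed by repeated hackbench runs.
# Sample numbers stand in for real hackbench output here.
printf 'Time: 7.689\nTime: 7.485\nTime: 7.524\n' \
  | awk '/^Time:/ { sum += $2; n++ } END { printf "avg %.4f\n", sum / n }'
# prints: avg 7.5660
```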

Then I made a major change as below:
static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
{
	...


	time = cpu_clock(this);

#if 0
	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

	for_each_cpu_wrap(cpu, cpus, target) {
		if (!--nr)
			return -1;
		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
			break;
	}
#else
	if ((cpu = select_idle_cluster(p, target)) == -1)
		return -1;
#endif

	time = cpu_clock(this) - time;
	update_avg(&this_sd->avg_scan_cost, time);

	return cpu;
}

That means I don't fall back to the llc domain if the cluster has no
idle cpu.

With this, I am getting a major difference of about 20%, as I was expecting:

g=     1      2        3        4        5        6        7      8       9        10
w/o  1.5494 2.0641 3.1640 4.2438 5.3445 6.3098 7.5086 8.4721 9.7115  10.8588
w/   1.6801 2.0280 2.7890 3.7339 4.5748 5.2998 6.1413 6.6206 7.7641  8.4782
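(To sanity-check the "about 20%" claim from the table's g=10 column,
the relative improvement can be computed directly from the two values
above:)

```shell
# Percentage improvement of the patched run (w/) over baseline (w/o)
# for the g=10 column: wo=10.8588, w=8.4782.
awk 'BEGIN { wo = 10.8588; w = 8.4782; printf "%.1f%%\n", 100 * (wo - w) / wo }'
# prints: 21.9%
```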

I guess my original patch falls back to the llc domain too easily, as
a whole cluster is rarely idle. Once the system is busy, the original
patch is effectively a nop since it always falls back to llc.

> 
> >
> > I guess select_idle_cluster() somehow works around a scheduler issue
> > for a NUMA node without memory.
> >
> > >
> > > Vincent

Thanks
Barry


end of thread, other threads:[~2020-12-09 11:37 UTC | newest]

Thread overview: 48+ messages
-- links below jump to the message on this page --
2020-12-01  2:59 [RFC PATCH v2 0/2] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
2020-12-01  2:59 ` [RFC PATCH v2 1/2] topology: Represent clusters of CPUs within a die Barry Song
2020-12-01 16:03   ` Valentin Schneider
2020-12-02  9:55     ` Sudeep Holla
2020-12-01  2:59 ` [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters Barry Song
2020-12-01 16:04   ` Valentin Schneider
2020-12-03  9:28     ` Peter Zijlstra
2020-12-03  9:49       ` Mel Gorman
2020-12-03  9:57       ` Song Bao Hua (Barry Song)
2020-12-03 10:07         ` Peter Zijlstra
2020-12-02  8:27   ` Vincent Guittot
2020-12-02  9:20     ` Song Bao Hua (Barry Song)
2020-12-02 10:16       ` Vincent Guittot
2020-12-02 10:45         ` Song Bao Hua (Barry Song)
2020-12-02 10:48         ` Song Bao Hua (Barry Song)
2020-12-02 20:58         ` Song Bao Hua (Barry Song)
2020-12-03  9:03           ` Vincent Guittot
2020-12-03  9:11             ` Song Bao Hua (Barry Song)
2020-12-03  9:39               ` Vincent Guittot
2020-12-03  9:54                 ` Vincent Guittot
2020-12-07  9:59                 ` Song Bao Hua (Barry Song)
2020-12-07 15:29                   ` Vincent Guittot
2020-12-09 11:35                     ` Song Bao Hua (Barry Song)
2020-12-01 10:46 ` [RFC PATCH v2 0/2] scheduler: expose the topology of clusters and add cluster scheduler Dietmar Eggemann
