linux-arm-kernel.lists.infradead.org archive mirror
* [RFC PATCH v6 0/4] scheduler: expose the topology of clusters and add cluster scheduler
@ 2021-04-20  0:18 Barry Song
  2021-04-20  0:18 ` [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die Barry Song
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Barry Song @ 2021-04-20  0:18 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

ARM64 server chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share L3 cache data, while each cluster has a
local L3 tag. In addition, the CPUs within a cluster share some internal system
bus. This means the cache is much more affine inside one cluster than across
clusters.

    +-----------------------------------+                          +---------+
    |  +------+    +------+            +---------------------------+         |
    |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
    |  +------+    +------+             |    |           |         |         |
    |                                   +----+    L3     |         |         |
    |  +------+    +------+   cluster   |    |    tag    |         |         |
    |  | CPU2 |    | CPU3 |             |    |           |         |         |
    |  +------+    +------+             |    +-----------+         |         |
    |                                   |                          |         |
    +-----------------------------------+                          |         |
    +-----------------------------------+                          |         |
    |  +------+    +------+             +--------------------------+         |
    |  |      |    |      |             |    +-----------+         |         |
    |  +------+    +------+             |    |           |         |         |
    |                                   |    |    L3     |         |         |
    |  +------+    +------+             +----+    tag    |         |         |
    |  |      |    |      |             |    |           |         |         |
    |  +------+    +------+             |    +-----------+         |         |
    |                                   |                          |         |
    +-----------------------------------+                          |   L3    |
                                                                   |   data  |
    +-----------------------------------+                          |         |
    |  +------+    +------+             |    +-----------+         |         |
    |  |      |    |      |             |    |           |         |         |
    |  +------+    +------+             +----+    L3     |         |         |
    |                                   |    |    tag    |         |         |
    |  +------+    +------+             |    |           |         |         |
    |  |      |    |      |            ++    +-----------+         |         |
    |  +------+    +------+            |---------------------------+         |
    +-----------------------------------|                          |         |
    +-----------------------------------|                          |         |
    |  +------+    +------+            +---------------------------+         |
    |  |      |    |      |             |    +-----------+         |         |
    |  +------+    +------+             |    |           |         |         |
    |                                   +----+    L3     |         |         |
    |  +------+    +------+             |    |    tag    |         |         |
    |  |      |    |      |             |    |           |         |         |
    |  +------+    +------+             |    +-----------+         |         |
    |                                   |                          |         |
    +-----------------------------------+                          |         |
    +-----------------------------------+                          |         |
    |  +------+    +------+             +--------------------------+         |
    |  |      |    |      |             |   +-----------+          |         |
    |  +------+    +------+             |   |           |          |         |
    |                                   |   |    L3     |          |         |
    |  +------+    +------+             +---+    tag    |          |         |
    |  |      |    |      |             |   |           |          |         |
    |  +------+    +------+             |   +-----------+          |         |
    |                                   |                          |         |
    +-----------------------------------+                          |         |
    +-----------------------------------+                         ++         |
    |  +------+    +------+             +--------------------------+         |
    |  |      |    |      |             |  +-----------+           |         |
    |  +------+    +------+             |  |           |           |         |
    |                                   |  |    L3     |           |         |
    |  +------+    +------+             +--+    tag    |           |         |
    |  |      |    |      |             |  |           |           |         |
    |  +------+    +------+             |  +-----------+           |         |
    |                                   |                          +---------+
    +-----------------------------------+


There is a similar need for clustering on x86. Some x86 cores share L2 caches
in a way similar to the clusters on Kunpeng 920 (e.g. on Jacobsville there are
6 clusters of 4 Atom cores, each cluster sharing a separate L2, and 24 cores
sharing L3).

Having a sched_domain for clusters brings two improvements:
1. Spreading unrelated tasks among clusters, which decreases resource
contention and improves throughput.
Without a cluster sched_domain, unrelated tasks might be placed randomly:
+-------------------+            +-----------------+
| +----+   +----+   |            |                 |
| |task|   |task|   |            |                 |
| |1   |   |2   |   |            |                 |
| +----+   +----+   |            |                 |
|                   |            |                 |
|       cluster1    |            |     cluster2    |
+-------------------+            +-----------------+

but with the cluster sched_domain, load balancing (LB) is likely to spread them:
+-------------------+            +-----------------+
| +----+            |            | +----+          |
| |task|            |            | |task|          |
| |1   |            |            | |2   |          |
| +----+            |            | +----+          |
|                   |            |                 |
|       cluster1    |            |     cluster2    |
+-------------------+            +-----------------+

2. Gathering related tasks within a cluster, which improves cache affinity for
tasks talking with each other.
Without a cluster sched_domain, related tasks might be placed randomly. Suppose
tasks 1-8 are related as below:
Task1 talks with task5
Task2 talks with task6
Task3 talks with task7
Task4 talks with task8
With select_idle_cpu() tuned to scan the local cluster first, those tasks
get a chance to be gathered like this:
+---------------------------+    +----------------------+
| +----+        +-----+     |    | +----+      +-----+  |
| |task|        |task |     |    | |task|      |task |  |
| |1   |        | 5   |     |    | |3   |      |7    |  |
| +----+        +-----+     |    | +----+      +-----+  |
|                           |    |                      |
|       cluster1            |    |     cluster2         |
|                           |    |                      |
|                           |    |                      |
| +-----+       +------+    |    | +-----+     +------+ |
| |task |       | task |    |    | |task |     |task  | |
| |2    |       |  6   |    |    | |4    |     |8     | |
| +-----+       +------+    |    | +-----+     +------+ |
+---------------------------+    +----------------------+
Otherwise, the result might be:
+---------------------------+    +----------------------+
| +----+        +-----+     |    | +----+      +-----+  |
| |task|        |task |     |    | |task|      |task |  |
| |1   |        | 2   |     |    | |5   |      |6    |  |
| +----+        +-----+     |    | +----+      +-----+  |
|                           |    |                      |
|       cluster1            |    |     cluster2         |
|                           |    |                      |
|                           |    |                      |
| +-----+       +------+    |    | +-----+     +------+ |
| |task |       | task |    |    | |task |     |task  | |
| |3    |       |  4   |    |    | |7    |     |8     | |
| +-----+       +------+    |    | +-----+     +------+ |
+---------------------------+    +----------------------+

-v6:
  * added topology_cluster_cpumask() for x86, code provided by Tim.

  * emulated a two-level spreading/packing heuristic by only scanning the
    cluster in the wake_affine path for tasks running in the same LLC (and
    thus within the same NUMA node).

    This partially addressed Dietmar's comment in RFC v3:
    "In case we would like to further distinguish between llc-packing and
     even narrower (cluster or MC-L2)-packing, we would introduce a 2. level
     packing vs. spreading heuristic further down in sis().
   
     IMHO, Barry's current implementation doesn't do this right now. Instead
     he's trying to pack on cluster first and if not successful look further
     among the remaining llc CPUs for an idle CPU."

  * adjusted the hackbench parameters to create relatively low and relatively
    high loads. Previous patchsets used "-f 10", which ran under an extremely
    high load with hundreds of threads and does not reflect real use cases.

    This also addressed Vincent's question in RFC v4:
    "In particular, I'm still not convinced that the modification of the wakeup
    path is the root of the hackbench improvement; especially with g=14 where
    there should not be much idle CPUs with 14*40 tasks on at most 32 CPUs."

-v5:
  * split "add scheduler level for clusters" into two patches to evaluate the
    impact of spreading and gathering separately;
  * add a tracepoint for select_idle_cpu for debugging purposes; add a bcc
    script to the commit log;
  * set cluster_id = -1 in reset_cpu_topology()
  * rebased to tip/sched/core

-v4:
  * rebased to tip/sched/core with the latest unified code of select_idle_cpu
  * added Tim's patch for x86 Jacobsville
  * also added benchmark data of spreading unrelated tasks
  * avoided iterating over sched_domain by moving to a static_key (addressing
    Vincent's comment)
  * used acpi_cpu_id for acpi_find_processor_node() (addressing Masa's comment)

Barry Song (2):
  scheduler: add scheduler level for clusters
  scheduler: scan idle cpu in cluster for tasks within one LLC

Jonathan Cameron (1):
  topology: Represent clusters of CPUs within a die

Tim Chen (1):
  scheduler: Add cluster scheduler level for x86

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
 arch/arm64/Kconfig                        |  7 ++++
 arch/arm64/kernel/topology.c              |  2 +
 arch/x86/Kconfig                          |  8 ++++
 arch/x86/include/asm/smp.h                |  7 ++++
 arch/x86/include/asm/topology.h           |  2 +
 arch/x86/kernel/cpu/cacheinfo.c           |  1 +
 arch/x86/kernel/cpu/common.c              |  3 ++
 arch/x86/kernel/smpboot.c                 | 43 ++++++++++++++++++++-
 block/blk-mq.c                            |  2 +-
 drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 15 ++++++++
 drivers/base/topology.c                   | 10 +++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/sched/cluster.h             | 19 ++++++++++
 include/linux/sched/sd_flags.h            |  9 +++++
 include/linux/sched/topology.h            | 12 +++++-
 include/linux/topology.h                  | 13 +++++++
 kernel/sched/core.c                       | 29 ++++++++++++--
 kernel/sched/fair.c                       | 51 +++++++++++++++----------
 kernel/sched/sched.h                      |  4 ++
 kernel/sched/topology.c                   | 18 +++++++++
 23 files changed, 324 insertions(+), 30 deletions(-)
 create mode 100644 include/linux/sched/cluster.h

-- 
1.8.3.1



* [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die
  2021-04-20  0:18 [RFC PATCH v6 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
@ 2021-04-20  0:18 ` Barry Song
  2021-04-28  9:48   ` Andrew Jones
  2021-04-20  0:18 ` [RFC PATCH v6 2/4] scheduler: add scheduler level for clusters Barry Song
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Barry Song @ 2021-04-20  0:18 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Jonathan Cameron, Barry Song

From: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Node Structure representing
a higher level of topology.

For example, Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share L3 cache data, but each cluster
has a local L3 tag. In addition, the CPUs within a cluster share some
internal system bus.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |
|                                   |   |    L3     |          |         |
|  +------+    +------+             +---+    tag    |          |         |
|  |      |    |      |             |   |           |          |         |
|  +------+    +------+             |   +-----------+          |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                         ++         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |  +-----------+           |         |
|  +------+    +------+             |  |           |           |         |
|                                   |  |    L3     |           |         |
|  +------+    +------+             +--+    tag    |           |         |
|  |      |    |      |             |  |           |           |         |
|  +------+    +------+             |  +-----------+           |         |
|                                   |                          +---------+
+-----------------------------------+

That means the cost to transfer ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decisions on thread
migration.

This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
PoC of HWLOC support at [2].
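
(Illustration only, not part of this patch.) A minimal user-space sketch
that dumps the new attributes, assuming only the sysfs paths documented in
cputopology.rst below, could look like:

#include <stdio.h>

int main(void)
{
	char path[128], buf[256];
	FILE *f;
	int cpu;

	/* dump cluster_id and cluster_cpus_list for every possible CPU;
	 * CPUs or attributes that are absent are simply skipped */
	for (cpu = 0; cpu < 1024; cpu++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/cluster_id", cpu);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(buf, sizeof(buf), f))
			printf("cpu%d cluster_id: %s", cpu, buf);
		fclose(f);

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/cluster_cpus_list", cpu);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(buf, sizeof(buf), f))
			printf("cpu%d cluster_cpus_list: %s", cpu, buf);
		fclose(f);
	}
	return 0;
}

On Kunpeng 920 we would expect cpu0-cpu3 to report the same cluster_id and a
cluster_cpus_list of 0-3, the next four CPUs the next cluster, and so on.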

Note this patch only handles the ACPI case.

Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).

Currently the ID provided is the offset of the Processor
Hierarchy Node Structure within the PPTT. Whilst this is unique,
it is not terribly elegant, so alternative suggestions are welcome.

Note that arm64 / ACPI does not provide any means of identifying
a die level in the topology, but that may be unrelated to the cluster
level.

[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
    structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 -v6:
 * the topology ABI documentation required by Greg is not complete yet;
   a separate patch will follow for that.

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
 arch/arm64/kernel/topology.c              |  2 +
 drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 15 ++++++++
 drivers/base/topology.c                   | 10 +++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/topology.h                  |  6 +++
 8 files changed, 128 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
 	identifier (rather than the kernel's).  The actual value is
 	architecture and platform dependent.
 
+cluster_id:
+
+	the Cluster ID of cpuX.  Typically it is the hardware platform's
+	identifier (rather than the kernel's).  The actual value is
+	architecture and platform dependent.
+
 book_id:
 
 	the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
 	human-readable list of CPUs sharing the same physical_package_id.
 	(deprecated name: "core_siblings_list")
 
+cluster_cpus:
+
+	internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+	human-readable list of CPUs within the same cluster.
+
 die_cpus:
 
 	internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
 
 	#define topology_physical_package_id(cpu)
 	#define topology_die_id(cpu)
+	#define topology_cluster_id(cpu)
 	#define topology_core_id(cpu)
 	#define topology_book_id(cpu)
 	#define topology_drawer_id(cpu)
 	#define topology_sibling_cpumask(cpu)
 	#define topology_core_cpumask(cpu)
+	#define topology_cluster_cpumask(cpu)
 	#define topology_die_cpumask(cpu)
 	#define topology_book_cpumask(cpu)
 	#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
 
 1) topology_physical_package_id: -1
 2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
 
 For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
 default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index e08a412..d72eb8d 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id  = -1;
 			cpu_topology[cpu].core_id    = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
 
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..11f8b02 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,69 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 }
 
 /**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	u32 acpi_cpu_id;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+
+	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
+/**
  * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
  * @cpu: Kernel logical CPU number
  *
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..ca3b8c1 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
 
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
 	cpumask_clear(&cpu_topo->llc_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
 
+	cpumask_clear(&cpu_topo->cluster_sibling);
+	cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
 	cpumask_clear(&cpu_topo->core_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
 	cpumask_clear(&cpu_topo->thread_sibling);
@@ -556,6 +569,7 @@ void __init reset_cpu_topology(void)
 
 		cpu_topo->thread_id = -1;
 		cpu_topo->core_id = -1;
+		cpu_topo->cluster_id = -1;
 		cpu_topo->package_id = -1;
 		cpu_topo->llc_id = -1;
 
@@ -571,6 +585,7 @@ void remove_cpu_topology(unsigned int cpu)
 		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
 	for_each_cpu(sibling, topology_llc_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
 
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..7157ac0 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@
 define_id_show_func(die_id);
 static DEVICE_ATTR_RO(die_id);
 
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
 define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);
 
@@ -61,6 +64,10 @@
 static DEVICE_ATTR_RO(core_siblings);
 static DEVICE_ATTR_RO(core_siblings_list);
 
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
 define_siblings_show_func(die_cpus, die_cpumask);
 static DEVICE_ATTR_RO(die_cpus);
 static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@
 static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
+	&dev_attr_cluster_id.attr,
 	&dev_attr_core_id.attr,
 	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@
 	&dev_attr_core_cpus_list.attr,
 	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
+	&dev_attr_cluster_cpus.attr,
+	&dev_attr_cluster_cpus_list.attr,
 	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
 	&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 9f43241..138b779 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1307,6 +1307,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
 #ifdef CONFIG_ACPI_PPTT
 int acpi_pptt_cpu_is_thread(unsigned int cpu);
 int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
 int find_acpi_cpu_topology_package(unsigned int cpu);
 int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
 int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1319,6 +1320,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
 {
 	return -EINVAL;
 }
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	return -EINVAL;
+}
 static inline int find_acpi_cpu_topology_package(unsigned int cpu)
 {
 	return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 0f6cd6b..987c7ea 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
 struct cpu_topology {
 	int thread_id;
 	int core_id;
+	int cluster_id;
 	int package_id;
 	int llc_id;
 	cpumask_t thread_sibling;
 	cpumask_t core_sibling;
+	cpumask_t cluster_sibling;
 	cpumask_t llc_sibling;
 };
 
@@ -60,13 +62,16 @@ struct cpu_topology {
 extern struct cpu_topology cpu_topology[NR_CPUS];
 
 #define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu)	(cpu_topology[cpu].cluster_id)
 #define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
 #define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
 #define topology_sibling_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)
 #define topology_llc_cpumask(cpu)	(&cpu_topology[cpu].llc_sibling)
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
 const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
 void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7634cd7..80d27d7 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -186,6 +186,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_id
 #define topology_die_id(cpu)			((void)(cpu), -1)
 #endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu)		((void)(cpu), -1)
+#endif
 #ifndef topology_core_id
 #define topology_core_id(cpu)			((void)(cpu), 0)
 #endif
@@ -195,6 +198,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_cpumask
 #define topology_core_cpumask(cpu)		cpumask_of(cpu)
 #endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu)		cpumask_of(cpu)
+#endif
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)		cpumask_of(cpu)
 #endif
-- 
1.8.3.1



* [RFC PATCH v6 2/4] scheduler: add scheduler level for clusters
  2021-04-20  0:18 [RFC PATCH v6 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
  2021-04-20  0:18 ` [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die Barry Song
@ 2021-04-20  0:18 ` Barry Song
  2021-04-20  0:18 ` [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC Barry Song
  2021-04-20  0:18 ` [RFC PATCH v6 4/4] scheduler: Add cluster scheduler level for x86 Barry Song
  3 siblings, 0 replies; 19+ messages in thread
From: Barry Song @ 2021-04-20  0:18 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share L3 cache data, but each cluster
has a local L3 tag. In addition, the CPUs within a cluster share some
internal system bus. This means the cache coherence overhead inside one
cluster is much less than the overhead across clusters.

This patch adds a sched_domain for clusters. On Kunpeng 920, without
this patch, domain0 of cpu0 is MC, spanning cpu0~cpu23; with this
patch, MC becomes domain1 and a new domain0 "CLS" spans cpu0~cpu3.

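(Illustration only, not part of this patch.) A small user-space sketch like
the one below can be used to check the resulting domain hierarchy. It assumes
CONFIG_SCHED_DEBUG, and that the per-CPU domain names are exported either
under /proc/sys/kernel/sched_domain/ or, on newer kernels, under
/sys/kernel/debug/sched/domains/:

#include <stdio.h>

/* print the sched_domain level names of cpu0 from the given base directory;
 * returns non-zero if at least one level was found */
static int dump_domains(const char *base)
{
	char path[160], buf[64];
	FILE *f;
	int level, found = 0;

	for (level = 0; ; level++) {
		snprintf(path, sizeof(path), "%s/cpu0/domain%d/name", base, level);
		f = fopen(path, "r");
		if (!f)
			break;
		if (fgets(buf, sizeof(buf), f))
			printf("domain%d: %s", level, buf);
		fclose(f);
		found = 1;
	}
	return found;
}

int main(void)
{
	if (!dump_domains("/proc/sys/kernel/sched_domain"))
		dump_domains("/sys/kernel/debug/sched/domains");
	return 0;
}

With this patch applied on Kunpeng 920, cpu0 is expected to report CLS as
domain0 and MC as domain1.
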
This will help spread unrelated tasks among clusters, thus decreasing
contention and improving throughput. For example, the stream benchmark
improves by 20%+ with a parallelism of 6 and by around 5% with a
parallelism of 12:

(1) -P <parallelism> 6
$ numactl -N 0 /usr/lib/lmbench/bin/stream -P 6 -M 1024M -N 5

w/o patch:
STREAM copy latency: 2.46 nanoseconds
STREAM copy bandwidth: 39096.28 MB/sec
STREAM scale latency: 2.46 nanoseconds
STREAM scale bandwidth: 38970.26 MB/sec
STREAM add latency: 4.45 nanoseconds
STREAM add bandwidth: 32332.04 MB/sec
STREAM triad latency: 4.07 nanoseconds
STREAM triad bandwidth: 35387.69 MB/sec

w/ patch:
STREAM copy latency: 2.02 nanoseconds
STREAM copy bandwidth: 47604.47 MB/sec   +21.7%
STREAM scale latency: 2.04 nanoseconds
STREAM scale bandwidth: 47066.84 MB/sec  +20.8%
STREAM add latency: 3.35 nanoseconds
STREAM add bandwidth: 42942.15 MB/sec    +32.8%
STREAM triad latency: 3.16 nanoseconds
STREAM triad bandwidth: 45619.18 MB/sec  +28.9%

On the other hand, the stream results can change significantly between
runs without the patch, e.g.:
a.
STREAM copy latency: 2.16 nanoseconds
STREAM copy bandwidth: 44448.45 MB/sec
STREAM scale latency: 2.17 nanoseconds
STREAM scale bandwidth: 44320.77 MB/sec
STREAM add latency: 3.77 nanoseconds
STREAM add bandwidth: 38230.54 MB/sec
STREAM triad latency: 3.88 nanoseconds
STREAM triad bandwidth: 37072.10 MB/sec

b.
STREAM copy latency: 2.16 nanoseconds
STREAM copy bandwidth: 44403.22 MB/sec
STREAM scale latency: 2.39 nanoseconds
STREAM scale bandwidth: 40173.69 MB/sec
STREAM add latency: 3.77 nanoseconds
STREAM add bandwidth: 38232.56 MB/sec
STREAM triad latency: 3.38 nanoseconds
STREAM triad bandwidth: 42592.04 MB/sec

Obviously this is because the 6 threads are placed randomly on 6 cores:
sometimes they are packed within clusters, sometimes they are spread widely.

(2) -P <parallelism> 12
$ numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5

w/o patch:
STREAM copy latency: 3.37 nanoseconds
STREAM copy bandwidth: 57008.80 MB/sec
STREAM scale latency: 3.38 nanoseconds
STREAM scale bandwidth: 56848.47 MB/sec
STREAM add latency: 5.50 nanoseconds
STREAM add bandwidth: 52398.62 MB/sec
STREAM triad latency: 5.09 nanoseconds
STREAM triad bandwidth: 56591.60 MB/sec

w/ patch:
STREAM copy latency: 3.24 nanoseconds
STREAM copy bandwidth: 59338.60 MB/sec  +4.1%
STREAM scale latency: 3.25 nanoseconds
STREAM scale bandwidth: 58993.23 MB/sec +3.7%
STREAM add latency: 5.19 nanoseconds
STREAM add bandwidth: 55517.45 MB/sec   +5.9%
STREAM triad latency: 4.86 nanoseconds
STREAM triad bandwidth: 59245.34 MB/sec +4.7%

Obviously, load balancing between clusters helps improve the parallelism
of unrelated tasks.

To evaluate the performance impact on related tasks talking with each
other, we run the hackbench command below with the -g parameter varying
from 6 to 32 in a NUMA node with 24 cores; for each g, we run the
command 20 times and take the average time:
$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g $1
As -f is set to 1, all threads talk with each other monogamously.

hackbench reports the time needed to complete a certain number of message
transmissions between a certain number of tasks, for example:
$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g 6
Running in threaded mode with 6 groups using 2 file descriptors each (== 12 tasks)
Each sender will pass 1000000 messages of 100 bytes

The below is the result of hackbench w/ and w/o the patch:
g=    6       12    18      24    28     32
w/o: 1.2474 1.5635 1.5133 1.4796 1.6177 1.7898
w/ : 1.1458 1.3309 1.3416 1.4990 1.9212 2.3411

It seems this patch benefits hackbench when the load is relatively low,
while it hurts hackbench significantly when the load is relatively high
(56 and 64 threads on 24 cores).

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 arch/arm64/Kconfig             |  7 +++++++
 include/linux/sched/cluster.h  | 19 +++++++++++++++++++
 include/linux/sched/sd_flags.h |  9 +++++++++
 include/linux/sched/topology.h |  7 +++++++
 include/linux/topology.h       |  7 +++++++
 kernel/sched/core.c            | 20 ++++++++++++++++++++
 kernel/sched/fair.c            |  4 ++++
 kernel/sched/sched.h           |  1 +
 kernel/sched/topology.c        |  6 ++++++
 9 files changed, 80 insertions(+)
 create mode 100644 include/linux/sched/cluster.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1f212b4..9432a30 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -977,6 +977,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters(sharing internal
+	  bus or sharing LLC cache tag). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/include/linux/sched/cluster.h b/include/linux/sched/cluster.h
new file mode 100644
index 0000000..ea6c475
--- /dev/null
+++ b/include/linux/sched/cluster.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_SCHED_CLUSTER_H
+#define _LINUX_SCHED_CLUSTER_H
+
+#include <linux/static_key.h>
+
+#ifdef CONFIG_SCHED_CLUSTER
+extern struct static_key_false sched_cluster_present;
+
+static __always_inline bool sched_cluster_active(void)
+{
+	return static_branch_likely(&sched_cluster_present);
+}
+#else
+static inline bool sched_cluster_active(void) { return false; }
+
+#endif
+
+#endif
diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index 34b21e9..fc3c894 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -100,6 +100,15 @@
 SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 
 /*
+ * Domain members share CPU cluster resources (i.e. llc cache tags)
+ *
+ * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
+ *               the cluster resources (such as llc tags and internal bus)
+ * NEEDS_GROUPS: Caches are shared between groups.
+ */
+SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
+
+/*
  * Domain members share CPU package resources (i.e. caches)
  *
  * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778..846fcac 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline int cpu_cluster_flags(void)
+{
+	return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
+}
+#endif
+
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 80d27d7..0b3704a 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 95bd6ab..30c300c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7840,6 +7840,17 @@ int sched_cpu_activate(unsigned int cpu)
 	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
 		static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	/*
+	 * When going up, increment the number of cluster cpus with
+	 * cluster present.
+	 */
+	if (cpumask_weight(cpu_cluster_mask(cpu)) > cpumask_weight(cpu_smt_mask(cpu)) &&
+	    cpumask_weight(cpu_cluster_mask(cpu)) < cpumask_weight(cpu_coregroup_mask(cpu)))
+		static_branch_inc_cpuslocked(&sched_cluster_present);
+#endif
+
 	set_cpu_active(cpu, true);
 
 	if (sched_smp_initialized) {
@@ -7916,6 +7927,15 @@ int sched_cpu_deactivate(unsigned int cpu)
 		static_branch_dec_cpuslocked(&sched_smt_present);
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+	/*
+	 * When going down, decrement the number of cpus with cluster present.
+	 */
+	if (cpumask_weight(cpu_cluster_mask(cpu)) > cpumask_weight(cpu_smt_mask(cpu)) &&
+	    cpumask_weight(cpu_cluster_mask(cpu)) < cpumask_weight(cpu_coregroup_mask(cpu)))
+		static_branch_dec_cpuslocked(&sched_cluster_present);
+#endif
+
 	if (!sched_smp_initialized)
 		return 0;
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6d73bdb..a327746 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6021,6 +6021,10 @@ static inline int __select_idle_cpu(int cpu)
 	return -1;
 }
 
+#ifdef CONFIG_SCHED_CLUSTER
+DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
+#endif
+
 #ifdef CONFIG_SCHED_SMT
 DEFINE_STATIC_KEY_FALSE(sched_smt_present);
 EXPORT_SYMBOL_GPL(sched_smt_present);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index cbb0b01..4e938ba 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -6,6 +6,7 @@
 
 #include <linux/sched/autogroup.h>
 #include <linux/sched/clock.h>
+#include <linux/sched/cluster.h>
 #include <linux/sched/coredump.h>
 #include <linux/sched/cpufreq.h>
 #include <linux/sched/cputime.h>
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index d1aec24..829ac9d 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1392,6 +1392,7 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
  */
 #define TOPOLOGY_SD_FLAGS		\
 	(SD_SHARE_CPUCAPACITY	|	\
+	 SD_SHARE_CLS_RESOURCES	|	\
 	 SD_SHARE_PKG_RESOURCES |	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING)
@@ -1511,6 +1512,11 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+       { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
-- 
1.8.3.1



* [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-04-20  0:18 [RFC PATCH v6 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
  2021-04-20  0:18 ` [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die Barry Song
  2021-04-20  0:18 ` [RFC PATCH v6 2/4] scheduler: add scheduler level for clusters Barry Song
@ 2021-04-20  0:18 ` Barry Song
  2021-04-27 11:35   ` Dietmar Eggemann
  2021-04-20  0:18 ` [RFC PATCH v6 4/4] scheduler: Add cluster scheduler level for x86 Barry Song
  3 siblings, 1 reply; 19+ messages in thread
From: Barry Song @ 2021-04-20  0:18 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

On Kunpeng 920, CPUs within one cluster can communicate with each other
much faster than CPUs in different clusters. A simple hackbench run
can prove that.
hackbench running on 4 CPUs within a single cluster and on 4 CPUs spread
across different clusters shows a large contrast:
(1) within a cluster:
root@ubuntu:~# taskset -c 0,1,2,3 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 4.285

(2) across clusters:
root@ubuntu:~# taskset -c 0,4,8,12 hackbench -p -T -l 20000 -g 1
Running in threaded mode with 1 groups using 40 file descriptors each
(== 40 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 5.524

This inspires us to change the wake_affine path to scan the cluster to pack
related tasks. Ideally, a two-level packing vs. spreading heuristic could be
introduced to distinguish between llc-packing and even narrower (cluster or
MC-L2) packing, but that would be a more involved change. So this patch
starts with tasks running in the same LLC, which is actually quite common
in real use cases when tasks are bound within one NUMA node.

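(Illustration only, not kernel code.) The decision can be summarized with a
small user-space toy model, assuming the Kunpeng 920 layout above (4 CPUs per
cluster, 24 CPUs per LLC/NUMA node) and hypothetical waker/prev CPU numbers:
when the waking CPU and the task's previous CPU already share an LLC, the
idle-CPU scan is restricted to the cluster; otherwise the whole LLC is
scanned as before.

#include <stdio.h>

#define CPUS_PER_CLUSTER	4	/* Kunpeng 920 layout, assumed */
#define CPUS_PER_LLC		24

static int share_cluster(int a, int b)
{
	return a / CPUS_PER_CLUSTER == b / CPUS_PER_CLUSTER;
}

static int share_llc(int a, int b)
{
	return a / CPUS_PER_LLC == b / CPUS_PER_LLC;
}

int main(void)
{
	int waker = 0, prev = 5;	/* hypothetical: same LLC, different clusters */

	/* mirrors "cluster = sched_cluster_active() &&
	 * cpus_share_cache(cpu, prev_cpu, 0)" in this patch */
	int cluster = share_llc(waker, prev);

	printf("waker cpu%d and prev cpu%d %s in the same cluster\n",
	       waker, prev, share_cluster(waker, prev) ? "are" : "are not");
	printf("scan %s of cpu%d for an idle CPU\n",
	       cluster ? "the cluster" : "the LLC", waker);
	return 0;
}
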
If users use "numactl -N 0" to bind tasks, this patch scans the cluster
rather than the LLC to select an idle CPU. A hackbench run with several
groups using a monogamous sender-receiver model shows a major improvement.

To evaluate the performance impact on related tasks talking with each
other, we run the hackbench command below with the -g parameter varying
from 6 to 32 in a NUMA node with 24 cores; for each g, we run the
command 20 times and take the average time:
$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g $1
As -f is set to 1, all threads talk with each other monogamously.

hackbench reports the time needed to complete a certain number of message
transmissions between a certain number of tasks, for example:
$ numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g 6
Running in threaded mode with 6 groups using 2 file descriptors each (== 12 tasks)
Each sender will pass 1000000 messages of 100 bytes

The below is the result of hackbench:
g                =    6      12      18     24    28    32
w/o                 1.2474 1.5635 1.5133 1.4796 1.6177 1.7898
w/domain            1.1458 1.3309 1.3416 1.4990 1.9212 2.3411
w/domain+affine     0.9500 1.0728 1.1756 1.2201 1.4166 1.5464

w/o: without any change
w/domain: added cluster domain without changing wake_affine
w/domain+affine: added cluster domain, changed wake_affine

With g=6, if we use top -H to show the CPUs the tasks are running on,
we can easily see that each couple runs within the same cluster (CCL).

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 -v6:
  * emulated a two-level spreading/packing heuristic by only scanning the
    cluster in the wake_affine path for tasks running in the same LLC (and
    thus within the same NUMA node).

    This partially addressed Dietmar's comment in RFC v3:
    "In case we would like to further distinguish between llc-packing and
     even narrower (cluster or MC-L2)-packing, we would introduce a 2. level
     packing vs. spreading heuristic further down in sis().
   
     IMHO, Barry's current implementation doesn't do this right now. Instead
     he's trying to pack on cluster first and if not successful look further
     among the remaining llc CPUs for an idle CPU."

  * adjusted the hackbench parameters to create relatively low and relatively
    high loads. Previous patchsets used "-f 10", which ran under an extremely
    high load with hundreds of threads and does not reflect real use cases.

    This also addressed Vincent's question in RFC v4:
    "In particular, I'm still not convinced that the modification of the wakeup
    path is the root of the hackbench improvement; especially with g=14 where
    there should not be much idle CPUs with 14*40 tasks on at most 32 CPUs."

 block/blk-mq.c                 |  2 +-
 include/linux/sched/topology.h |  5 +++--
 kernel/sched/core.c            |  9 +++++---
 kernel/sched/fair.c            | 47 +++++++++++++++++++++++++-----------------
 kernel/sched/sched.h           |  3 +++
 kernel/sched/topology.c        | 12 +++++++++++
 6 files changed, 53 insertions(+), 25 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d4d7c1c..1418981 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -611,7 +611,7 @@ static inline bool blk_mq_complete_need_ipi(struct request *rq)
 	/* same CPU or cache domain?  Complete locally */
 	if (cpu == rq->mq_ctx->cpu ||
 	    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
-	     cpus_share_cache(cpu, rq->mq_ctx->cpu)))
+	     cpus_share_cache(cpu, rq->mq_ctx->cpu, 0)))
 		return false;
 
 	/* don't try to IPI to an offline CPU */
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 846fcac..d63d6b8 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -176,7 +176,8 @@ extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
-bool cpus_share_cache(int this_cpu, int that_cpu);
+/* return true if cpus share cluster (when cluster == 1) or llc cache */
+bool cpus_share_cache(int this_cpu, int that_cpu, int cluster);
 
 typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
 typedef int (*sched_domain_flags_f)(void);
@@ -225,7 +226,7 @@ struct sched_domain_topology_level {
 {
 }
 
-static inline bool cpus_share_cache(int this_cpu, int that_cpu)
+static inline bool cpus_share_cache(int this_cpu, int that_cpu, int cluster)
 {
 	return true;
 }
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 30c300c..c74812a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3126,9 +3126,12 @@ void wake_up_if_idle(int cpu)
 	rcu_read_unlock();
 }
 
-bool cpus_share_cache(int this_cpu, int that_cpu)
+bool cpus_share_cache(int this_cpu, int that_cpu, int cluster)
 {
-	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
+	if (cluster)
+		return per_cpu(sd_cluster_id, this_cpu) == per_cpu(sd_cluster_id, that_cpu);
+	else
+		return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
 }
 
 static inline bool ttwu_queue_cond(int cpu, int wake_flags)
@@ -3144,7 +3147,7 @@ static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 	 * If the CPU does not share cache, then queue the task on the
 	 * remote rqs wakelist to avoid accessing remote data.
 	 */
-	if (!cpus_share_cache(smp_processor_id(), cpu))
+	if (!cpus_share_cache(smp_processor_id(), cpu, 0))
 		return true;
 
 	/*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a327746..69a1704 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -718,7 +718,7 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 #include "pelt.h"
 #ifdef CONFIG_SMP
 
-static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu);
+static int select_idle_sibling(struct task_struct *p, int prev_cpu, int cpu, int cluster);
 static unsigned long task_h_load(struct task_struct *p);
 static unsigned long capacity_of(int cpu);
 
@@ -5786,11 +5786,12 @@ static void record_wakee(struct task_struct *p)
  * whatever is irrelevant, spread criteria is apparent partner count exceeds
  * socket size.
  */
-static int wake_wide(struct task_struct *p)
+static int wake_wide(struct task_struct *p, int cluster)
 {
 	unsigned int master = current->wakee_flips;
 	unsigned int slave = p->wakee_flips;
-	int factor = __this_cpu_read(sd_llc_size);
+	int factor = cluster ? __this_cpu_read(sd_cluster_size) :
+		__this_cpu_read(sd_llc_size);
 
 	if (master < slave)
 		swap(master, slave);
@@ -5812,7 +5813,7 @@ static int wake_wide(struct task_struct *p)
  *			  for the overloaded case.
  */
 static int
-wake_affine_idle(int this_cpu, int prev_cpu, int sync)
+wake_affine_idle(int this_cpu, int prev_cpu, int sync, int cluster)
 {
 	/*
 	 * If this_cpu is idle, it implies the wakeup is from interrupt
@@ -5826,7 +5827,7 @@ static int wake_wide(struct task_struct *p)
 	 * a cpufreq perspective, it's better to have higher utilisation
 	 * on one CPU.
 	 */
-	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
+	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu, cluster))
 		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
 
 	if (sync && cpu_rq(this_cpu)->nr_running == 1)
@@ -5882,12 +5883,12 @@ static int wake_wide(struct task_struct *p)
 }
 
 static int wake_affine(struct sched_domain *sd, struct task_struct *p,
-		       int this_cpu, int prev_cpu, int sync)
+		       int this_cpu, int prev_cpu, int sync, int cluster)
 {
 	int target = nr_cpumask_bits;
 
 	if (sched_feat(WA_IDLE))
-		target = wake_affine_idle(this_cpu, prev_cpu, sync);
+		target = wake_affine_idle(this_cpu, prev_cpu, sync, cluster);
 
 	if (sched_feat(WA_WEIGHT) && target == nr_cpumask_bits)
 		target = wake_affine_weight(sd, p, this_cpu, prev_cpu, sync);
@@ -6139,7 +6140,8 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
  * average idle time for this rq (as found in rq->avg_idle).
  */
-static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
+static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target,
+	int cluster)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	int i, cpu, idle_cpu = -1, nr = INT_MAX;
@@ -6154,7 +6156,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
 
-	if (sched_feat(SIS_PROP) && !smt) {
+	/* cluster is usually quite small like 4, no need SIS_PROP */
+	if (sched_feat(SIS_PROP) && !smt && !cluster) {
 		u64 avg_cost, avg_idle, span_avg;
 
 		/*
@@ -6191,7 +6194,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	if (smt)
 		set_idle_cores(this, false);
 
-	if (sched_feat(SIS_PROP) && !smt) {
+	if (sched_feat(SIS_PROP) && !smt && !cluster) {
 		time = cpu_clock(this) - time;
 		update_avg(&this_sd->avg_scan_cost, time);
 	}
@@ -6244,7 +6247,7 @@ static inline bool asym_fits_capacity(int task_util, int cpu)
 /*
  * Try and locate an idle core/thread in the LLC cache domain.
  */
-static int select_idle_sibling(struct task_struct *p, int prev, int target)
+static int select_idle_sibling(struct task_struct *p, int prev, int target, int cluster)
 {
 	struct sched_domain *sd;
 	unsigned long task_util;
@@ -6266,7 +6269,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	/*
 	 * If the previous CPU is cache affine and idle, don't be stupid:
 	 */
-	if (prev != target && cpus_share_cache(prev, target) &&
+	if (prev != target && cpus_share_cache(prev, target, cluster) &&
 	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
 	    asym_fits_capacity(task_util, prev))
 		return prev;
@@ -6289,7 +6292,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	recent_used_cpu = p->recent_used_cpu;
 	if (recent_used_cpu != prev &&
 	    recent_used_cpu != target &&
-	    cpus_share_cache(recent_used_cpu, target) &&
+	    cpus_share_cache(recent_used_cpu, target, cluster) &&
 	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
 	    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
 	    asym_fits_capacity(task_util, recent_used_cpu)) {
@@ -6321,11 +6324,11 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		}
 	}
 
-	sd = rcu_dereference(per_cpu(sd_llc, target));
+	sd = cluster ? rcu_dereference(per_cpu(sd_cluster, target)) :
+			rcu_dereference(per_cpu(sd_llc, target));
 	if (!sd)
 		return target;
-
-	i = select_idle_cpu(p, sd, target);
+	i = select_idle_cpu(p, sd, target, cluster);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
 
@@ -6745,6 +6748,12 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	int want_affine = 0;
 	/* SD_flags and WF_flags share the first nibble */
 	int sd_flag = wake_flags & 0xF;
+	/*
+	 * if cpu and prev_cpu share LLC, consider cluster sibling rather
+	 * than llc. this is typically true while tasks are bound within
+	 * one numa
+	 */
+	int cluster = sched_cluster_active() && cpus_share_cache(cpu, prev_cpu, 0);
 
 	if (wake_flags & WF_TTWU) {
 		record_wakee(p);
@@ -6756,7 +6765,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 			new_cpu = prev_cpu;
 		}
 
-		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
+		want_affine = !wake_wide(p, cluster) && cpumask_test_cpu(cpu, p->cpus_ptr);
 	}
 
 	rcu_read_lock();
@@ -6768,7 +6777,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
 			if (cpu != prev_cpu)
-				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
+				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync, cluster);
 
 			sd = NULL; /* Prefer wake_affine over balance flags */
 			break;
@@ -6785,7 +6794,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
 		/* Fast path */
-		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
+		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu, cluster);
 
 		if (want_affine)
 			current->recent_used_cpu = cpu;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4e938ba..b4b7d95 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1487,6 +1487,9 @@ static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DECLARE_PER_CPU(int, sd_llc_size);
 DECLARE_PER_CPU(int, sd_llc_id);
+DECLARE_PER_CPU(struct sched_domain __rcu *, sd_cluster);
+DECLARE_PER_CPU(int, sd_cluster_size);
+DECLARE_PER_CPU(int, sd_cluster_id);
 DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 829ac9d..28a2032 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -644,6 +644,9 @@ static void destroy_sched_domains(struct sched_domain *sd)
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DEFINE_PER_CPU(int, sd_llc_size);
 DEFINE_PER_CPU(int, sd_llc_id);
+DEFINE_PER_CPU(struct sched_domain __rcu *, sd_cluster);
+DEFINE_PER_CPU(int, sd_cluster_size);
+DEFINE_PER_CPU(int, sd_cluster_id);
 DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
@@ -657,6 +660,15 @@ static void update_top_cache_domain(int cpu)
 	int id = cpu;
 	int size = 1;
 
+	sd = highest_flag_domain(cpu, SD_SHARE_CLS_RESOURCES);
+	if (sd) {
+		id = cpumask_first(sched_domain_span(sd));
+		size = cpumask_weight(sched_domain_span(sd));
+	}
+	rcu_assign_pointer(per_cpu(sd_cluster, cpu), sd);
+	per_cpu(sd_cluster_size, cpu) = size;
+	per_cpu(sd_cluster_id, cpu) = id;
+
 	sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
 	if (sd) {
 		id = cpumask_first(sched_domain_span(sd));
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH v6 4/4] scheduler: Add cluster scheduler level for x86
  2021-04-20  0:18 [RFC PATCH v6 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
                   ` (2 preceding siblings ...)
  2021-04-20  0:18 ` [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC Barry Song
@ 2021-04-20  0:18 ` Barry Song
  3 siblings, 0 replies; 19+ messages in thread
From: Barry Song @ 2021-04-20  0:18 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

From: Tim Chen <tim.c.chen@linux.intel.com>

There are x86 CPU architectures (e.g. Jacobsville) where the L2 cache
is shared among a cluster of cores instead of being exclusive to a
single core.

To prevent oversubscription of L2 cache, load should be
balanced between such L2 clusters, especially for tasks with
no shared data.

Also, with a cluster scheduling policy where tasks are woken up in the
same L2 cluster, we benefit from keeping related tasks, which likely
share data, within the same L2 cluster.

Add CPU masks of CPUs sharing the L2 cache so we can build such
L2 cluster scheduler domain.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 -v6:
  * added topology_cluster_cpumask() for x86, code provided by Tim.

 arch/x86/Kconfig                |  8 ++++++++
 arch/x86/include/asm/smp.h      |  7 +++++++
 arch/x86/include/asm/topology.h |  2 ++
 arch/x86/kernel/cpu/cacheinfo.c |  1 +
 arch/x86/kernel/cpu/common.c    |  3 +++
 arch/x86/kernel/smpboot.c       | 43 ++++++++++++++++++++++++++++++++++++++++-
 6 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2792879..d597de2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1002,6 +1002,14 @@ config NR_CPUS
 	  This is purely to save memory: each supported CPU adds about 8KB
 	  to the kernel image.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	default n
+	help
+	 Cluster scheduler support improves the CPU scheduler's decision
+	 making when dealing with machines that have clusters of CPUs
+	 sharing L2 cache. If unsure say N here.
+
 config SCHED_SMT
 	def_bool y if SMP
 
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index c0538f8..9cbc4ae 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -16,7 +16,9 @@
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_die_map);
 /* cpus sharing the last level cache: */
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id);
+DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id);
 DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
 
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
@@ -24,6 +26,11 @@ static inline struct cpumask *cpu_llc_shared_mask(int cpu)
 	return per_cpu(cpu_llc_shared_map, cpu);
 }
 
+static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
+{
+	return per_cpu(cpu_l2c_shared_map, cpu);
+}
+
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid);
diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 9239399..800fa48 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -103,6 +103,7 @@ static inline void setup_node_to_cpumask_map(void) { }
 #include <asm-generic/topology.h>
 
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
+extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 
 #define topology_logical_package_id(cpu)	(cpu_data(cpu).logical_proc_id)
 #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
@@ -114,6 +115,7 @@ static inline void setup_node_to_cpumask_map(void) { }
 
 #ifdef CONFIG_SMP
 #define topology_die_cpumask(cpu)		(per_cpu(cpu_die_map, cpu))
+#define topology_cluster_cpumask(cpu)		(cpu_clustergroup_mask(cpu))
 #define topology_core_cpumask(cpu)		(per_cpu(cpu_core_map, cpu))
 #define topology_sibling_cpumask(cpu)		(per_cpu(cpu_sibling_map, cpu))
 
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 3ca9be4..0d03a71 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -846,6 +846,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
 		l2 = new_l2;
 #ifdef CONFIG_SMP
 		per_cpu(cpu_llc_id, cpu) = l2_id;
+		per_cpu(cpu_l2c_id, cpu) = l2_id;
 #endif
 	}
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index ab640ab..0ba282d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -78,6 +78,9 @@
 /* Last level cache ID of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
 
+/* L2 cache ID of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id) = BAD_APICID;
+
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 02813a7..c85ffa8 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,8 @@
 
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
+
 /* Per CPU bogomips and other parameters */
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
@@ -501,6 +503,21 @@ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 	return topology_sane(c, o, "llc");
 }
 
+static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+{
+	int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
+
+	/* Do not match if we do not have a valid APICID for cpu: */
+	if (per_cpu(cpu_l2c_id, cpu1) == BAD_APICID)
+		return false;
+
+	/* Do not match if L2 cache id does not match: */
+	if (per_cpu(cpu_l2c_id, cpu1) != per_cpu(cpu_l2c_id, cpu2))
+		return false;
+
+	return topology_sane(c, o, "l2c");
+}
+
 /*
  * Unlike the other levels, we do not enforce keeping a
  * multicore group inside a NUMA node.  If this happens, we will
@@ -522,7 +539,7 @@ static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 }
 
 
-#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
+#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_CLUSTER) || defined(CONFIG_SCHED_MC)
 static inline int x86_sched_itmt_flags(void)
 {
 	return sysctl_sched_itmt_enabled ? SD_ASYM_PACKING : 0;
@@ -540,12 +557,21 @@ static int x86_smt_flags(void)
 	return cpu_smt_flags() | x86_sched_itmt_flags();
 }
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+static int x86_cluster_flags(void)
+{
+	return cpu_cluster_flags() | x86_sched_itmt_flags();
+}
+#endif
 #endif
 
 static struct sched_domain_topology_level x86_numa_in_package_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -556,6 +582,9 @@ static int x86_smt_flags(void)
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -583,6 +612,7 @@ void set_cpu_sibling_map(int cpu)
 	if (!has_mp) {
 		cpumask_set_cpu(cpu, topology_sibling_cpumask(cpu));
 		cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu));
+		cpumask_set_cpu(cpu, cpu_l2c_shared_mask(cpu));
 		cpumask_set_cpu(cpu, topology_core_cpumask(cpu));
 		cpumask_set_cpu(cpu, topology_die_cpumask(cpu));
 		c->booted_cores = 1;
@@ -598,6 +628,8 @@ void set_cpu_sibling_map(int cpu)
 		if ((i == cpu) || (has_mp && match_llc(c, o)))
 			link_mask(cpu_llc_shared_mask, cpu, i);
 
+		if ((i == cpu) || (has_mp && match_l2c(c, o)))
+			link_mask(cpu_l2c_shared_mask, cpu, i);
 	}
 
 	/*
@@ -649,6 +681,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return cpu_llc_shared_mask(cpu);
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return cpu_l2c_shared_mask(cpu);
+}
+
 static void impress_friends(void)
 {
 	int cpu;
@@ -1332,6 +1369,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 		zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_die_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_llc_shared_map, i), GFP_KERNEL);
+		zalloc_cpumask_var(&per_cpu(cpu_l2c_shared_map, i), GFP_KERNEL);
 	}
 
 	/*
@@ -1556,7 +1594,10 @@ static void remove_siblinginfo(int cpu)
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
 	for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
 		cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
+	for_each_cpu(sibling, cpu_l2c_shared_mask(cpu))
+		cpumask_clear_cpu(cpu, cpu_l2c_shared_mask(sibling));
 	cpumask_clear(cpu_llc_shared_mask(cpu));
+	cpumask_clear(cpu_l2c_shared_mask(cpu));
 	cpumask_clear(topology_sibling_cpumask(cpu));
 	cpumask_clear(topology_core_cpumask(cpu));
 	cpumask_clear(topology_die_cpumask(cpu));
-- 
1.8.3.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-04-20  0:18 ` [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC Barry Song
@ 2021-04-27 11:35   ` Dietmar Eggemann
  2021-04-28  9:51     ` Song Bao Hua (Barry Song)
  0 siblings, 1 reply; 19+ messages in thread
From: Dietmar Eggemann @ 2021-04-27 11:35 UTC (permalink / raw)
  To: Barry Song, tim.c.chen, catalin.marinas, will, rjw,
	vincent.guittot, bp, tglx, mingo, lenb, peterz, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa

On 20/04/2021 02:18, Barry Song wrote:

[...]

> @@ -5786,11 +5786,12 @@ static void record_wakee(struct task_struct *p)
>   * whatever is irrelevant, spread criteria is apparent partner count exceeds
>   * socket size.
>   */
> -static int wake_wide(struct task_struct *p)
> +static int wake_wide(struct task_struct *p, int cluster)
>  {
>  	unsigned int master = current->wakee_flips;
>  	unsigned int slave = p->wakee_flips;
> -	int factor = __this_cpu_read(sd_llc_size);
> +	int factor = cluster ? __this_cpu_read(sd_cluster_size) :
> +		__this_cpu_read(sd_llc_size);

I don't see that the wake_wide() change has any effect here. None of the
sched domains has SD_BALANCE_WAKE set so a wakeup (WF_TTWU) can never
end up in the slow path.
Have you seen a diff when running your `lmbench stream` workload in what
wake_wide() returns when you use `sd cluster size` instead of `sd llc
size` as factor?

I guess for you,  wakeups are now subdivided into faster (cluster = 4
CPUs) and fast (llc = 24 CPUs) via sis(), not into fast (sis()) and slow
(find_idlest_cpu()).

>  
>  	if (master < slave)
>  		swap(master, slave);

[...]

> @@ -6745,6 +6748,12 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  	int want_affine = 0;
>  	/* SD_flags and WF_flags share the first nibble */
>  	int sd_flag = wake_flags & 0xF;
> +	/*
> +	 * if cpu and prev_cpu share LLC, consider cluster sibling rather
> +	 * than llc. this is typically true while tasks are bound within
> +	 * one numa
> +	 */
> +	int cluster = sched_cluster_active() && cpus_share_cache(cpu, prev_cpu, 0);

So you changed from scanning cluster before LLC to scan either cluster
or LLC.

And this is based on whether `this_cpu` and `prev_cpu` are sharing LLC
or not. So you only see an effect when running the workload with
`numactl -N X ...`.

>  
>  	if (wake_flags & WF_TTWU) {
>  		record_wakee(p);
> @@ -6756,7 +6765,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  			new_cpu = prev_cpu;
>  		}
>  
> -		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
> +		want_affine = !wake_wide(p, cluster) && cpumask_test_cpu(cpu, p->cpus_ptr);
>  	}
>  
>  	rcu_read_lock();
> @@ -6768,7 +6777,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
>  		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
>  			if (cpu != prev_cpu)
> -				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> +				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync, cluster);
>  
>  			sd = NULL; /* Prefer wake_affine over balance flags */
>  			break;
> @@ -6785,7 +6794,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
>  	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
>  		/* Fast path */
> -		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
> +		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu, cluster);
>  
>  		if (want_affine)
>  			current->recent_used_cpu = cpu;

[...]


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die
  2021-04-20  0:18 ` [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die Barry Song
@ 2021-04-28  9:48   ` Andrew Jones
  2021-04-30  3:46     ` Song Bao Hua (Barry Song)
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Jones @ 2021-04-28  9:48 UTC (permalink / raw)
  To: song.bao.hua
  Cc: aubrey.li, bp, bsegall, catalin.marinas, dietmar.eggemann,
	gregkh, guodong.xu, hpa, jonathan.cameron, juri.lelli, lenb,
	liguozhu, linux-acpi, linux-arm-kernel, linux-kernel, linuxarm,
	mark.rutland, mgorman, mingo, msys.mizuma, peterz, prime.zeng,
	rjw, rostedt, sudeep.holla, tglx, tim.c.chen, valentin.schneider,
	vincent.guittot, will, x86, xuwei5, yangyicong

On 20/04/2021 12:18, Barry Song wrote:
...
> Currently the ID provided is the offset of the Processor
> Hierarchy Nodes Structure within PPTT.  Whilst this is unique
> it is not terribly elegant so alternative suggestions welcome.
> 

The ACPI table offsets are consistent with how other topology IDs are
generated. I once tried to make them a little more human friendly with
[1], but it was nacked.

[1] https://lore.kernel.org/lkml/20180629132934.GA16282@e107155-lin/t/

Thanks,
drew



^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-04-27 11:35   ` Dietmar Eggemann
@ 2021-04-28  9:51     ` Song Bao Hua (Barry Song)
  2021-04-28 13:04       ` Vincent Guittot
  0 siblings, 1 reply; 19+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-04-28  9:51 UTC (permalink / raw)
  To: Dietmar Eggemann, tim.c.chen, catalin.marinas, will, rjw,
	vincent.guittot, bp, tglx, mingo, lenb, peterz, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> Sent: Tuesday, April 27, 2021 11:36 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> rostedt@goodmis.org; bsegall@google.com; mgorman@suse.de
> Cc: msys.mizuma@gmail.com; valentin.schneider@arm.com;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> On 20/04/2021 02:18, Barry Song wrote:
> 
> [...]
> 
> > @@ -5786,11 +5786,12 @@ static void record_wakee(struct task_struct *p)
> >   * whatever is irrelevant, spread criteria is apparent partner count exceeds
> >   * socket size.
> >   */
> > -static int wake_wide(struct task_struct *p)
> > +static int wake_wide(struct task_struct *p, int cluster)
> >  {
> >  	unsigned int master = current->wakee_flips;
> >  	unsigned int slave = p->wakee_flips;
> > -	int factor = __this_cpu_read(sd_llc_size);
> > +	int factor = cluster ? __this_cpu_read(sd_cluster_size) :
> > +		__this_cpu_read(sd_llc_size);
> 
> I don't see that the wake_wide() change has any effect here. None of the
> sched domains has SD_BALANCE_WAKE set so a wakeup (WF_TTWU) can never
> end up in the slow path.

I am really confused. The whole code has only checked if wake_flags
has WF_TTWU, it has never checked if sd_domain has SD_BALANCE_WAKE flag.

static int
select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
{
	...

	if (wake_flags & WF_TTWU) {
		record_wakee(p);

		if (sched_energy_enabled()) {
			new_cpu = find_energy_efficient_cpu(p, prev_cpu);
			if (new_cpu >= 0)
				return new_cpu;
			new_cpu = prev_cpu;
		}

		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
	}
}

And try_to_wake_up() has always set WF_TTWU:
static int
try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) 
{
	cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU);
	...
}

So the change in wake_wide() will actually affect the value of want_affine.
And I did also see the code enter the slow path during my benchmark.

One issue I mentioned during the Linaro open discussion is that, since
I have moved to using the cluster size to decide the value of
wake_wide(), relatively fewer tasks will make wake_wide() decide to go
to the slow path; thus, tasks begin to spread to other NUMA nodes even
though llc_size might be able to contain those tasks. So a possible
model might be:
static int wake_wide(struct task_struct *p)
{
	tasksize < cluster : scan cluster
	tasksize > llc      : slow path
	tasksize > cluster && tasksize < llc: scan llc
}

thoughts?
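
To make that concrete, here is a rough and untested sketch of the idea,
reusing the sd_cluster_size percpu variable added by this series alongside
the existing sd_llc_size (the callers would of course have to be taught
about the three-way return value, so this only shows the shape, not a
finished patch):

static int wake_wide(struct task_struct *p)
{
	unsigned int master = current->wakee_flips;
	unsigned int slave = p->wakee_flips;
	int cluster_factor = __this_cpu_read(sd_cluster_size);
	int llc_factor = __this_cpu_read(sd_llc_size);

	if (master < slave)
		swap(master, slave);

	/* small enough to fit in one cluster: scan the cluster */
	if (slave < cluster_factor || master < slave * cluster_factor)
		return 0;

	/* too large even for the LLC: take the slow path */
	if (slave >= llc_factor && master >= slave * llc_factor)
		return 2;

	/* bigger than a cluster, but the LLC can still contain it: scan the LLC */
	return 1;
}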

> Have you seen a diff when running your `lmbench stream` workload in what
> wake_wide() returns when you use `sd cluster size` instead of `sd llc
> size` as factor?
> 
> I guess for you,  wakeups are now subdivided into faster (cluster = 4
> CPUs) and fast (llc = 24 CPUs) via sis(), not into fast (sis()) and slow
> (find_idlest_cpu()).
> 
> >
> >  	if (master < slave)
> >  		swap(master, slave);
> 
> [...]
> 
> > @@ -6745,6 +6748,12 @@ static int find_energy_efficient_cpu(struct
> task_struct *p, int prev_cpu)
> >  	int want_affine = 0;
> >  	/* SD_flags and WF_flags share the first nibble */
> >  	int sd_flag = wake_flags & 0xF;
> > +	/*
> > +	 * if cpu and prev_cpu share LLC, consider cluster sibling rather
> > +	 * than llc. this is typically true while tasks are bound within
> > +	 * one numa
> > +	 */
> > +	int cluster = sched_cluster_active() && cpus_share_cache(cpu, prev_cpu, 0);
> 
> So you changed from scanning cluster before LLC to scan either cluster
> or LLC.

Yes, I have seen two ugly things about scanning the cluster before
scanning the LLC in select_idle_cpu().
1. avg_scan_cost is actually measuring the scanning time. But if we
scan the cluster before scanning the LLC, in the gap between these two
different domains we need a large bitmap operation, and that operation
is not a scanning operation at all. This makes the scan cost quite
untrustworthy, particularly since "nr" can sometimes be smaller than
the cluster size and sometimes larger.

2. select_idle_cpu() is actually the last step of wake_affine; before
that, the wake_affine code depends entirely on cpus_share_cache() to
decide the target to scan from. When the waker and wakee are already in
one LLC, the decision has already been made by the time we reach
select_idle_cpu(), so changing only select_idle_cpu() may lose the
chance to choose the right target to scan from. So it seems more
sensible to let cpus_share_cache() check the cluster when related tasks
are already in the same LLC.

> 
> And this is based on whether `this_cpu` and `prev_cpu` are sharing LLC
> or not. So you only see an effect when running the workload with
> `numactl -N X ...`.

Ideally, I'd expect this can also positively affect tasks located in
different LLCs.
For example, if taskA and taskB start out in different NUMA nodes
(which are also different LLCs on both Kunpeng 920 and Tim's hardware),
a two-stage packing might let them take advantage of the cluster:
on the first wake-up, taskA and taskB are put in one LLC by scanning
the LLC;
on the second wake-up, they might be put in one cluster by scanning the
cluster.
So ideally, scanning LLC and scanning cluster can work in two stages
for related tasks and pack them step by step. Practically, this
could happen. But LB between NUMA nodes might pull in the opposite
direction. Actually, for a kernel completely *without* the cluster
patch, I have seen some serious ping-pong of tasks between two NUMA
nodes due to the conflict between wake_affine and LB. This kind of
ping-pong can seriously hurt performance.
For example, for g=6,12,18,24,28,32, I have found that running the same
workload on 2 NUMA nodes shows much worse latency than doing so on a
single NUMA node (each node has 24 cpus).
1numa command: numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g $1
2numa command: numactl -N 0-1 hackbench -p -T -l 1000000 -f 1 -g $1

Measured the average latency of 20 times for each command.

*)without cluster scheduler, 2numa vs 1numa:
g      =     6     12     18    24      28     32
1numa      1.2474 1.5635 1.5133 1.4796 1.6177 1.7898
2numa      4.1997 5.8172 6.0159 7.2343 6.8264 6.5419

BTW, my cluster patch actually also improves 2numa:
*)with cluster scheduler 2numa vs 1numa:
g      =     6     12     18    24      28     32
1numa      0.9500 1.0728 1.1756 1.2201 1.4166 1.5464
2numa      3.5404 4.3744 4.3988 4.6994 5.3117 5.4908

*) 2numa  w/ and w/o cluster:
g          =     6     12     18    24      28     32
2numa w/o      4.1997 5.8172 6.0159 7.2343 6.8264 6.5419
2numa w/       3.5404 4.3744 4.3988 4.6994 5.3117 5.4908

Ideally, load balance should try to pull unmarried tasks rather than
married tasks. I mean, if we have
groupA: task1+task2 as couple, task3 as bachelor
groupB: task4.
groupB should try to pull task3. But I feel it is extremely hard to let
LB understand who is married and who is unmarried.

I assume 2numa worse than 1numa should be a different topic
which might be worth more investigation.

On the other hand, the use cases I'm now targeting really do use
"numactl -N x" to bind processes within one NUMA node. If we ignore the
other NUMA nodes (and hence the other LLCs) and treat one NUMA node as
the whole system, the cluster becomes the last-level topology the
scheduler can use. And the code can be quite clean by directly
leveraging the existing select_idle_sibling code for the LLC and simply
moving cpus_share_cache() to the cluster level.
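
To illustrate what I mean by moving cpus_share_cache() to the cluster
level, a simplified and untested sketch only (it leans on the
sd_cluster_id percpu variable this series adds, and ignores the
!CONFIG_SCHED_CLUSTER case):

bool cpus_share_cache(int this_cpu, int that_cpu, int cluster)
{
	if (this_cpu == that_cpu)
		return true;

	/* when asked about the cluster, compare cluster ids instead of LLC ids */
	if (cluster)
		return per_cpu(sd_cluster_id, this_cpu) ==
		       per_cpu(sd_cluster_id, that_cpu);

	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
}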

> 
> >
> >  	if (wake_flags & WF_TTWU) {
> >  		record_wakee(p);
> > @@ -6756,7 +6765,7 @@ static int find_energy_efficient_cpu(struct task_struct
> *p, int prev_cpu)
> >  			new_cpu = prev_cpu;
> >  		}
> >
> > -		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
> > +		want_affine = !wake_wide(p, cluster) && cpumask_test_cpu(cpu,
> p->cpus_ptr);
> >  	}
> >
> >  	rcu_read_lock();
> > @@ -6768,7 +6777,7 @@ static int find_energy_efficient_cpu(struct task_struct
> *p, int prev_cpu)
> >  		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
> >  		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
> >  			if (cpu != prev_cpu)
> > -				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> > +				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync, cluster);
> >
> >  			sd = NULL; /* Prefer wake_affine over balance flags */
> >  			break;
> > @@ -6785,7 +6794,7 @@ static int find_energy_efficient_cpu(struct task_struct
> *p, int prev_cpu)
> >  		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
> >  	} else if (wake_flags & WF_TTWU) { /* XXX always ? */
> >  		/* Fast path */
> > -		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
> > +		new_cpu = select_idle_sibling(p, prev_cpu, new_cpu, cluster);
> >
> >  		if (want_affine)
> >  			current->recent_used_cpu = cpu;
> 
> [...]

Thanks
Barry


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-04-28  9:51     ` Song Bao Hua (Barry Song)
@ 2021-04-28 13:04       ` Vincent Guittot
  2021-04-28 16:47         ` Dietmar Eggemann
  0 siblings, 1 reply; 19+ messages in thread
From: Vincent Guittot @ 2021-04-28 13:04 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: Dietmar Eggemann, tim.c.chen, catalin.marinas, will, rjw, bp,
	tglx, mingo, lenb, peterz, rostedt, bsegall, mgorman,
	msys.mizuma, valentin.schneider, gregkh, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa

On Wed, 28 Apr 2021 at 11:51, Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > Sent: Tuesday, April 27, 2021 11:36 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> > tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> > tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> > rostedt@goodmis.org; bsegall@google.com; mgorman@suse.de
> > Cc: msys.mizuma@gmail.com; valentin.schneider@arm.com;
> > gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> > aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> > linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> > xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> > guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> > <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> > Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> > within one LLC
> >
> > On 20/04/2021 02:18, Barry Song wrote:
> >
> > [...]
> >
> > > @@ -5786,11 +5786,12 @@ static void record_wakee(struct task_struct *p)
> > >   * whatever is irrelevant, spread criteria is apparent partner count exceeds
> > >   * socket size.
> > >   */
> > > -static int wake_wide(struct task_struct *p)
> > > +static int wake_wide(struct task_struct *p, int cluster)
> > >  {
> > >     unsigned int master = current->wakee_flips;
> > >     unsigned int slave = p->wakee_flips;
> > > -   int factor = __this_cpu_read(sd_llc_size);
> > > +   int factor = cluster ? __this_cpu_read(sd_cluster_size) :
> > > +           __this_cpu_read(sd_llc_size);
> >
> > I don't see that the wake_wide() change has any effect here. None of the
> > sched domains has SD_BALANCE_WAKE set so a wakeup (WF_TTWU) can never
> > end up in the slow path.
>
> I am really confused. The whole code has only checked if wake_flags
> has WF_TTWU, it has never checked if sd_domain has SD_BALANCE_WAKE flag.

look at :
#define WF_TTWU     0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */

so when wake_wide() returns false, we use the wake_affine mechanism;
but when want_affine is false, we fall back to the default mode, which
looks for:
if (tmp->flags & sd_flag)

This means looking for SD_BALANCE_WAKE which is never set

so sd will stay NULL and you will end up calling select_idle_sibling anyway

>
> static int
> select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
> {
>         ...
>
>         if (wake_flags & WF_TTWU) {
>                 record_wakee(p);
>
>                 if (sched_energy_enabled()) {
>                         new_cpu = find_energy_efficient_cpu(p, prev_cpu);
>                         if (new_cpu >= 0)
>                                 return new_cpu;
>                         new_cpu = prev_cpu;
>                 }
>
>                 want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
>         }
> }
>
> And try_to_wake_up() has always set WF_TTWU:
> static int
> try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
> {
>         cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU);
>         ...
> }
>
> So the change in wake_wide will actually affect the value of want_affine.
> And I did also see code entered slow path during my benchmark.
>
> One issue I mentioned during linaro open discussion is that
> since I have moved to use cluster size to decide the value
> of wake_wide, relatively less tasks will make wake_wide()
> decide to go to slow path, thus, tasks begin to spread to
> other NUMA,  but actually llc_size might be able to contain
> those tasks. So a possible model might be:
> static int wake_wide(struct task_struct *p)
> {
>         tasksize < cluster : scan cluster
>         tasksize > llc      : slow path
>         tasksize > cluster && tasksize < llc: scan llc
> }
>
> thoughts?
>
> > Have you seen a diff when running your `lmbench stream` workload in what
> > wake_wide() returns when you use `sd cluster size` instead of `sd llc
> > size` as factor?
> >
> > I guess for you,  wakeups are now subdivided into faster (cluster = 4
> > CPUs) and fast (llc = 24 CPUs) via sis(), not into fast (sis()) and slow
> > (find_idlest_cpu()).
> >
> > >
> > >     if (master < slave)
> > >             swap(master, slave);
> >
> > [...]
> >
> > > @@ -6745,6 +6748,12 @@ static int find_energy_efficient_cpu(struct
> > task_struct *p, int prev_cpu)
> > >     int want_affine = 0;
> > >     /* SD_flags and WF_flags share the first nibble */
> > >     int sd_flag = wake_flags & 0xF;
> > > +   /*
> > > +    * if cpu and prev_cpu share LLC, consider cluster sibling rather
> > > +    * than llc. this is typically true while tasks are bound within
> > > +    * one numa
> > > +    */
> > > +   int cluster = sched_cluster_active() && cpus_share_cache(cpu, prev_cpu, 0);
> >
> > So you changed from scanning cluster before LLC to scan either cluster
> > or LLC.
>
> Yes, I have seen two ugly things for scanning cluster before scanning LLC
> in select_idle_cpu.
> 1. avg_scan_cost is actually measuring the scanning time. But if we scan
> cluster before scanning LLC, during the gap of these two different
> domains, we need a huge bit operation and this bit operation is not
> a scanning operation at all. This makes the scan_cost quite
> untrustworthy particularly "nr" can sometimes be < cluster size, sometimes
> > cluster size.
>
> 2. select_idle_cpu() is actually the last step of wake_affine, before
> that, wake_affine code has been totally depending on cpus_share_cache()
> to decide the target to scan from. When waker and wakee have been already
> in one LLC, if we only change select_idle_cpu(), at that time, decision
> has been made. we may lose some chance to choose the right target to scan
> from. So it should be more sensible to let cpus_share_cache() check cluster
> when related tasks have been in one same LLC.
>
> >
> > And this is based on whether `this_cpu` and `prev_cpu` are sharing LLC
> > or not. So you only see an effect when running the workload with
> > `numactl -N X ...`.
>
> Ideally, I'd expect this can also positively affect tasks located in
> different LLCs.
> For example, if taskA and taskB are in different NUMAs(also LLC for both
> kunpeng920 and Tim's hardware) at the first beginning, a two-stage packing
> might make them take the advantage of cluster:
> For the first wake-up, taskA and taskB will be put in one LLC by scanning
> LLC;
> For the second wake-up, they might be put in one cluster by scanning
> cluster.
> So ideally, scanning LLC and scanning cluster can work in two stages
> for related tasks and pack them step by step. Practically, this
> could happen. But LB between NUMA might take the opposite way. Actually,
> for a kernel completely *without* cluster patch, I have seen some
> serious ping-pong of tasks in two numa nodes due to the conflict
> of wake_affine and LB. this kind of ping-pong could seriously affect
> the performance.
> For example, for g=6,12,18,24,28,32, I have found running same workload
> on 2numa shows so much worse latency than doing that on single one
> numa(each numa has 24 cpus).
> 1numa command: numactl -N 0 hackbench -p -T -l 1000000 -f 1 -g $1
> 2numa command: numactl -N 0-1 hackbench -p -T -l 1000000 -f 1 -g $1
>
> Measured the average latency of 20 times for each command.
>
> *)without cluster scheduler, 2numa vs 1numa:
> g      =     6     12     18    24      28     32
> 1numa      1.2474 1.5635 1.5133 1.4796 1.6177 1.7898
> 2numa      4.1997 5.8172 6.0159 7.2343 6.8264 6.5419
>
> BTW, my cluster patch actually also improves 2numa:
> *)with cluster scheduler 2numa vs 1numa:
> g      =     6     12     18    24      28     32
> 1numa      0.9500 1.0728 1.1756 1.2201 1.4166 1.5464
> 2numa      3.5404 4.3744 4.3988 4.6994 5.3117 5.4908
>
> *) 2numa  w/ and w/o cluster:
> g          =     6     12     18    24      28     32
> 2numa w/o      4.1997 5.8172 6.0159 7.2343 6.8264 6.5419
> 2numa w/       3.5404 4.3744 4.3988 4.6994 5.3117 5.4908
>
> Ideally, load balance should try to pull unmarried tasks rather than
> married tasks. I mean, if we have
> groupA: task1+task2 as couple, task3 as bachelor
> groupB: task4.
> groupB should try to pull task3. But I feel it is extremely hard to let
> LB understand who is married and who is unmarried.
>
> I assume 2numa worse than 1numa should be a different topic
> which might be worth more investigation.
>
> On the other hand, use cases I'm now targeting at are really using
> "numactl -N x" to bind processes in one NUMA. If we ignore other NUMA
> (also other LLCs) and think one NUMA as a whole system, cluster would
> be the last-level topology scheduler can use. And the code could be
> quite clean to directly leverage the existing select_sibling code for
> LLC by simply changing cpus_share_cache() to cluster level.
>
> >
> > >
> > >     if (wake_flags & WF_TTWU) {
> > >             record_wakee(p);
> > > @@ -6756,7 +6765,7 @@ static int find_energy_efficient_cpu(struct task_struct
> > *p, int prev_cpu)
> > >                     new_cpu = prev_cpu;
> > >             }
> > >
> > > -           want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
> > > +           want_affine = !wake_wide(p, cluster) && cpumask_test_cpu(cpu,
> > p->cpus_ptr);
> > >     }
> > >
> > >     rcu_read_lock();
> > > @@ -6768,7 +6777,7 @@ static int find_energy_efficient_cpu(struct task_struct
> > *p, int prev_cpu)
> > >             if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
> > >                 cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
> > >                     if (cpu != prev_cpu)
> > > -                           new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> > > +                           new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync, cluster);
> > >
> > >                     sd = NULL; /* Prefer wake_affine over balance flags */
> > >                     break;
> > > @@ -6785,7 +6794,7 @@ static int find_energy_efficient_cpu(struct task_struct
> > *p, int prev_cpu)
> > >             new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
> > >     } else if (wake_flags & WF_TTWU) { /* XXX always ? */
> > >             /* Fast path */
> > > -           new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
> > > +           new_cpu = select_idle_sibling(p, prev_cpu, new_cpu, cluster);
> > >
> > >             if (want_affine)
> > >                     current->recent_used_cpu = cpu;
> >
> > [...]
>
> Thanks
> Barry
>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-04-28 13:04       ` Vincent Guittot
@ 2021-04-28 16:47         ` Dietmar Eggemann
       [not found]           ` <185746c4d02a485ca8f3509439328b26@hisilicon.com>
  0 siblings, 1 reply; 19+ messages in thread
From: Dietmar Eggemann @ 2021-04-28 16:47 UTC (permalink / raw)
  To: Vincent Guittot, Song Bao Hua (Barry Song)
  Cc: tim.c.chen, catalin.marinas, will, rjw, bp, tglx, mingo, lenb,
	peterz, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa

On 28/04/2021 15:04, Vincent Guittot wrote:
> On Wed, 28 Apr 2021 at 11:51, Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
>>
>>> -----Original Message-----
>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]

[...]

>>> On 20/04/2021 02:18, Barry Song wrote:

[...]

>> I am really confused. The whole code has only checked if wake_flags
>> has WF_TTWU, it has never checked if sd_domain has SD_BALANCE_WAKE flag.
> 
> look at :
> #define WF_TTWU     0x08 /* Wakeup;            maps to SD_BALANCE_WAKE */
> 
> so when wake_wide() returns false, we use the wake_affine mechanism;
> but when want_affine is false, we fall back to the default mode, which
> looks for:
> if (tmp->flags & sd_flag)
> 
> This means looking for SD_BALANCE_WAKE which is never set
> 
> so sd will stay NULL and you will end up calling select_idle_sibling anyway
> 
>>
>> static int
>> select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
>> {
>>         ...
>>
>>         if (wake_flags & WF_TTWU) {
>>                 record_wakee(p);
>>
>>                 if (sched_energy_enabled()) {
>>                         new_cpu = find_energy_efficient_cpu(p, prev_cpu);
>>                         if (new_cpu >= 0)
>>                                 return new_cpu;
>>                         new_cpu = prev_cpu;
>>                 }
>>
>>                 want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
>>         }
>> }
>>
>> And try_to_wake_up() has always set WF_TTWU:
>> static int
>> try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>> {
>>         cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU);
>>         ...
>> }
>>
>> So the change in wake_wide will actually affect the value of want_affine.
>> And I did also see code entered slow path during my benchmark.

Yes, this is happening but IMHO not for wakeups. Check wake_flags for
the tasks which go through `slow path` on your machine. They should have
WF_EXEC or WF_FORK, not WF_TTWU (& WF_SYNC).

>> One issue I mentioned during linaro open discussion is that
>> since I have moved to use cluster size to decide the value
>> of wake_wide, relatively less tasks will make wake_wide()
>> decide to go to slow path, thus, tasks begin to spread to
>> other NUMA,  but actually llc_size might be able to contain
>> those tasks. So a possible model might be:
>> static int wake_wide(struct task_struct *p)
>> {
>>         tasksize < cluster : scan cluster
>>         tasksize > llc      : slow path
>>         tasksize > cluster && tasksize < llc: scan llc
>> }
>>
>> thoughts?

Like Vincent explained, the return value of wake_wide() doesn't matter.
For wakeups you always end up in sis().

[...]


^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die
  2021-04-28  9:48   ` Andrew Jones
@ 2021-04-30  3:46     ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 19+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-04-30  3:46 UTC (permalink / raw)
  To: Andrew Jones
  Cc: aubrey.li, bp, bsegall, catalin.marinas, dietmar.eggemann,
	gregkh, guodong.xu, hpa, Jonathan Cameron, juri.lelli, lenb,
	Liguozhu (Kenneth),
	linux-acpi, linux-arm-kernel, linux-kernel, linuxarm,
	mark.rutland, mgorman, mingo, msys.mizuma, peterz, Zengtao (B),
	rjw, rostedt, sudeep.holla, tglx, tim.c.chen, valentin.schneider,
	vincent.guittot, will, x86, xuwei (O), yangyicong, tiantao (H)



> -----Original Message-----
> From: Andrew Jones [mailto:drjones@redhat.com]
> Sent: Wednesday, April 28, 2021 9:48 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: aubrey.li@linux.intel.com; bp@alien8.de; bsegall@google.com;
> catalin.marinas@arm.com; dietmar.eggemann@arm.com;
> gregkh@linuxfoundation.org; guodong.xu@linaro.org; hpa@zytor.com; Jonathan
> Cameron <jonathan.cameron@huawei.com>; juri.lelli@redhat.com;
> lenb@kernel.org; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> linux-acpi@vger.kernel.org; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linuxarm@openeuler.org; mark.rutland@arm.com;
> mgorman@suse.de; mingo@redhat.com; msys.mizuma@gmail.com;
> peterz@infradead.org; Zengtao (B) <prime.zeng@hisilicon.com>;
> rjw@rjwysocki.net; rostedt@goodmis.org; sudeep.holla@arm.com;
> tglx@linutronix.de; tim.c.chen@linux.intel.com; valentin.schneider@arm.com;
> vincent.guittot@linaro.org; will@kernel.org; x86@kernel.org; xuwei (O)
> <xuwei5@huawei.com>; yangyicong <yangyicong@huawei.com>
> Subject: Re: [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within
> a die
> 
> On 20/04/2021 12:18, Barry Song wrote:
> ...
> > Currently the ID provided is the offset of the Processor
> > Hierarchy Nodes Structure within PPTT.  Whilst this is unique
> > it is not terribly elegant so alternative suggestions welcome.
> >
> 
> The ACPI table offsets are consistent with how other topology IDs are
> generated. I once tried to make them a little more human friendly with
> [1], but it was nacked.
> 
> [1] https://lore.kernel.org/lkml/20180629132934.GA16282@e107155-lin/t/
> 

Ideally, we should check whether the cluster node has a valid UID and,
if so, use that ID; otherwise, fall back to using the offset.

Will move to that approach in the next version.
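
Roughly something along these lines in the PPTT parsing code (an
untested sketch; cluster_node stands for the struct acpi_pptt_processor
entry of the cluster and table for the PPTT header, names purely
illustrative):

	/*
	 * Prefer the ACPI processor ID of the cluster node when the
	 * firmware marks it valid; otherwise keep using the node's
	 * offset within the PPTT, which is unique but less readable.
	 */
	if (cluster_node->flags & ACPI_PPTT_ACPI_PROCESSOR_ID_VALID)
		cluster_id = cluster_node->acpi_processor_id;
	else
		cluster_id = ACPI_PTR_DIFF(cluster_node, table);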

> Thanks,
> drew

Thanks
Barry


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
       [not found]           ` <185746c4d02a485ca8f3509439328b26@hisilicon.com>
@ 2021-04-30 10:42             ` Dietmar Eggemann
  2021-05-03  6:19               ` Song Bao Hua (Barry Song)
  2021-05-03 11:35               ` Song Bao Hua (Barry Song)
  0 siblings, 2 replies; 19+ messages in thread
From: Dietmar Eggemann @ 2021-04-30 10:42 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song), Vincent Guittot
  Cc: tim.c.chen, catalin.marinas, will, rjw, bp, tglx, mingo, lenb,
	peterz, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa

On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
> 
> 
>> -----Original Message-----
>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]

[...]

>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
>>
>> [...]
>>
>>>>> On 20/04/2021 02:18, Barry Song wrote:

[...]

> Though we will never go to slow path, wake_wide() will affect want_affine,
> so eventually affect the "new_cpu"?

yes.

> 
> 	for_each_domain(cpu, tmp) {
> 		/*
> 		 * If both 'cpu' and 'prev_cpu' are part of this domain,
> 		 * cpu is a valid SD_WAKE_AFFINE target.
> 		 */
> 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
> 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
> 			if (cpu != prev_cpu)
> 				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> 
> 			sd = NULL; /* Prefer wake_affine over balance flags */
> 			break;
> 		}
> 
> 		if (tmp->flags & sd_flag)
> 			sd = tmp;
> 		else if (!want_affine)
> 			break;
> 	}
> 
> If wake_affine is false, the above won't execute, new_cpu(target) will
> always be "prev_cpu"? so when task size > cluster size in wake_wide(),
> this means we won't pull the wakee to the cluster of waker? It seems
> sensible.

What is `task size` here?

The criterion is `!(slave < factor || master < slave * factor)` or
`slave >= factor && master >= slave * factor` to wake wide.

I see that since you effectively change the sched domain size from LLC
to CLUSTER (e.g. 24->6) for wakeups with cpu and prev_cpu sharing LLC
(hence the `numactl -N 0` in your workload), wake_wide() has to take
CLUSTER size into consideration.

I was wondering if you saw wake_wide() returning 1 with your use cases:

numactl -N 0 /usr/lib/lmbench/bin/stream -P [6,12] -M 1024M -N 5


^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-04-30 10:42             ` Dietmar Eggemann
@ 2021-05-03  6:19               ` Song Bao Hua (Barry Song)
  2021-05-03 11:35               ` Song Bao Hua (Barry Song)
  1 sibling, 0 replies; 19+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-05-03  6:19 UTC (permalink / raw)
  To: Dietmar Eggemann, Vincent Guittot
  Cc: tim.c.chen, catalin.marinas, will, rjw, bp, tglx, mingo, lenb,
	peterz, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> Sent: Friday, April 30, 2021 10:43 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>; Vincent Guittot
> <vincent.guittot@linaro.org>
> Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; bp@alien8.de; tglx@linutronix.de; mingo@redhat.com;
> lenb@kernel.org; peterz@infradead.org; rostedt@goodmis.org;
> bsegall@google.com; mgorman@suse.de; msys.mizuma@gmail.com;
> valentin.schneider@arm.com; gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>; juri.lelli@redhat.com; mark.rutland@arm.com;
> sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
> >
> >
> >> -----Original Message-----
> >> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> 
> [...]
> 
> >>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >>
> >> [...]
> >>
> >>>>> On 20/04/2021 02:18, Barry Song wrote:
> 
> [...]
> 
> > Though we will never go to slow path, wake_wide() will affect want_affine,
> > so eventually affect the "new_cpu"?
> 
> yes.
> 
> >
> > 	for_each_domain(cpu, tmp) {
> > 		/*
> > 		 * If both 'cpu' and 'prev_cpu' are part of this domain,
> > 		 * cpu is a valid SD_WAKE_AFFINE target.
> > 		 */
> > 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
> > 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
> > 			if (cpu != prev_cpu)
> > 				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> >
> > 			sd = NULL; /* Prefer wake_affine over balance flags */
> > 			break;
> > 		}
> >
> > 		if (tmp->flags & sd_flag)
> > 			sd = tmp;
> > 		else if (!want_affine)
> > 			break;
> > 	}
> >
> > If wake_affine is false, the above won't execute, new_cpu(target) will
> > always be "prev_cpu"? so when task size > cluster size in wake_wide(),
> > this means we won't pull the wakee to the cluster of waker? It seems
> > sensible.
> 
> What is `task size` here?
> 
> The criterion is `!(slave < factor || master < slave * factor)` or
> `slave >= factor && master >= slave * factor` to wake wide.
> 

Yes. For "task size", I actually mean a bundle of waker-wakee tasks
which can make "slave >= factor && master >= slave * factor" either
true or false, then change the target cpu where we are going to scan
from.
Now since I have moved to cluster level when tasks have been in same
LLC level, it seems it would be more sensible to use "cluster_size" as
factor?

> I see that since you effectively change the sched domain size from LLC
> to CLUSTER (e.g. 24->6) for wakeups with cpu and prev_cpu sharing LLC
> (hence the `numactl -N 0` in your workload), wake_wide() has to take
> CLUSTER size into consideration.
> 
> I was wondering if you saw wake_wide() returning 1 with your use cases:
> 
> numactl -N 0 /usr/lib/lmbench/bin/stream -P [6,12] -M 1024M -N 5

I couldn't make wake_wide() return 1 with the above stream command.
And I can't reproduce it with a 1:1 (monogamous) hackbench "-f 1".

But I am able to reproduce this issue by a M:N hackbench, for example:

numactl -N 0 hackbench -p -T -f 10 -l 20000 -g 1

hackbench will create 10 senders which will send messages to 10
receivers. (Each sender can send messages to all 10 receivers.)

I've often seen flips like:
waker wakee
1501  39
1509  17
11   1320
13   2016

11, 13 and 17 are smaller than the LLC size but larger than the
cluster size. So wake_wide() using the cluster factor will return 1;
on the other hand, if we always use llc_size as the factor, it will
return 0.
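
Taking the third pair as a worked example: after the swap in
wake_wide(), master = 1320 and slave = 11. With the cluster size (4) as
the factor, slave >= 4 and master >= 11 * 4 = 44 both hold, so
wake_wide() returns 1; with the LLC size (24) as the factor, slave < 24
already fails the first condition, so it returns 0.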

However, it seems the change in wake_wide() could negatively affect
the M:N relationship (-f 10), according to tests made today with:

numactl -N 0 hackbench -p -T -f 10 -l 20000 -g $1

g             =      1     2       3       4
cluster_size     0.5768 0.6578  0.8117 1.0119
LLC_size         0.5479 0.6162  0.6922 0.7754

Always using llc_size as the factor in wake_wide() still shows a
better result in the 10:10 polygamous hackbench.

So it seems `slave >= factor && master >= slave * factor` isn't a
suitable criterion at cluster size?

Thanks
Barry


^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-04-30 10:42             ` Dietmar Eggemann
  2021-05-03  6:19               ` Song Bao Hua (Barry Song)
@ 2021-05-03 11:35               ` Song Bao Hua (Barry Song)
  2021-05-05 12:29                 ` Dietmar Eggemann
  1 sibling, 1 reply; 19+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-05-03 11:35 UTC (permalink / raw)
  To: Dietmar Eggemann, Vincent Guittot
  Cc: tim.c.chen, catalin.marinas, will, rjw, bp, tglx, mingo, lenb,
	peterz, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Monday, May 3, 2021 6:12 PM
> To: 'Dietmar Eggemann' <dietmar.eggemann@arm.com>; Vincent Guittot
> <vincent.guittot@linaro.org>
> Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; bp@alien8.de; tglx@linutronix.de; mingo@redhat.com;
> lenb@kernel.org; peterz@infradead.org; rostedt@goodmis.org;
> bsegall@google.com; mgorman@suse.de; msys.mizuma@gmail.com;
> valentin.schneider@arm.com; gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>; juri.lelli@redhat.com; mark.rutland@arm.com;
> sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> linuxarm@openeuler.org; hpa@zytor.com
> Subject: RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> 
> 
> > -----Original Message-----
> > From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > Sent: Friday, April 30, 2021 10:43 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>; Vincent Guittot
> > <vincent.guittot@linaro.org>
> > Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > rjw@rjwysocki.net; bp@alien8.de; tglx@linutronix.de; mingo@redhat.com;
> > lenb@kernel.org; peterz@infradead.org; rostedt@goodmis.org;
> > bsegall@google.com; mgorman@suse.de; msys.mizuma@gmail.com;
> > valentin.schneider@arm.com; gregkh@linuxfoundation.org; Jonathan Cameron
> > <jonathan.cameron@huawei.com>; juri.lelli@redhat.com;
> mark.rutland@arm.com;
> > sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> > linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> > linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> > Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> > <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> > linuxarm@openeuler.org; hpa@zytor.com
> > Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> > within one LLC
> >
> > On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
> > >
> > >
> > >> -----Original Message-----
> > >> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >
> > [...]
> >
> > >>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > >>
> > >> [...]
> > >>
> > >>>>> On 20/04/2021 02:18, Barry Song wrote:
> >
> > [...]
> >
> > > Though we will never go to slow path, wake_wide() will affect want_affine,
> > > so eventually affect the "new_cpu"?
> >
> > yes.
> >
> > >
> > > 	for_each_domain(cpu, tmp) {
> > > 		/*
> > > 		 * If both 'cpu' and 'prev_cpu' are part of this domain,
> > > 		 * cpu is a valid SD_WAKE_AFFINE target.
> > > 		 */
> > > 		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
> > > 		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
> > > 			if (cpu != prev_cpu)
> > > 				new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
> > >
> > > 			sd = NULL; /* Prefer wake_affine over balance flags */
> > > 			break;
> > > 		}
> > >
> > > 		if (tmp->flags & sd_flag)
> > > 			sd = tmp;
> > > 		else if (!want_affine)
> > > 			break;
> > > 	}
> > >
> > > If wake_affine is false, the above won't execute, new_cpu(target) will
> > > always be "prev_cpu"? so when task size > cluster size in wake_wide(),
> > > this means we won't pull the wakee to the cluster of waker? It seems
> > > sensible.
> >
> > What is `task size` here?
> >
> > The criterion is `!(slave < factor || master < slave * factor)` or
> > `slave >= factor && master >= slave * factor` to wake wide.
> >
> 
> Yes. For "task size", I actually mean a bundle of waker-wakee tasks
> which can make "slave >= factor && master >= slave * factor" either
> true or false, then change the target cpu where we are going to scan
> from.
> Now since I have moved to cluster level when tasks have been in same
> LLC level, it seems it would be more sensible to use "cluster_size" as
> factor?
> 
> > I see that since you effectively change the sched domain size from LLC
> > to CLUSTER (e.g. 24->6) for wakeups with cpu and prev_cpu sharing LLC
> > (hence the `numactl -N 0` in your workload), wake_wide() has to take
> > CLUSTER size into consideration.
> >
> > I was wondering if you saw wake_wide() returning 1 with your use cases:
> >
> > numactl -N 0 /usr/lib/lmbench/bin/stream -P [6,12] -M 1024M -N 5
> 
> I couldn't make wake_wide return 1 by the above stream command.
> And I can't reproduce it by a 1:1(monogamous) hackbench "-f 1".
> 
> But I am able to reproduce this issue by a M:N hackbench, for example:
> 
> numactl -N 0 hackbench -p -T -f 10 -l 20000 -g 1
> 
> hackbench will create 10 senders which will send messages to 10
> receivers. (Each sender can send messages to all 10 receivers.)
> 
> I've often seen flips like:
> waker wakee
> 1501  39
> 1509  17
> 11   1320
> 13   2016
> 
> 11, 13, 17 is smaller than LLC but larger than cluster. So the wake_wide()
> using cluster factor will return 1, on the other hand, if we always use
> llc_size as factor, it will return 0.
> 
> However, it seems the change in wake_wide() could bring some negative
> influence to M:N relationship(-f 10) according to tests made today by:
> 
> numactl -N 0 hackbench -p -T -f 10 -l 20000 -g $1
> 
> g             =      1     2       3       4
> cluster_size     0.5768 0.6578  0.8117 1.0119
> LLC_size         0.5479 0.6162  0.6922 0.7754
> 
> Always using llc_size as factor in wake_wide still shows better result
> in the 10:10 polygamous hackbench.
> 
> So it seems the `slave >= factor && master >= slave * factor` isn't
> a suitable criterion for cluster size?

On the other hand, according to "sched: Implement smarter wake-affine logic"
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62470419

a proper factor in wake_wide() is mainly beneficial for 1:n workloads like
postgresql/pgbench. So using the smaller cluster size as the factor might
help make want_affine false and thus improve pgbench.
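
For reference, the gate I mean is roughly this, as in v5.12's
select_task_rq_fair() (energy-aware path omitted):

	if (wake_flags & WF_TTWU) {
		record_wakee(p);
		/*
		 * wake_wide() returning 1 forces want_affine to 0, so we
		 * won't try to pull the wakee next to the waker.
		 */
		want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);
	}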

From the commit log, the commit made the biggest improvement when
clients = 2 * cpus. In my case, that would be clients = 48 for a machine
whose LLC size is 24.

On Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
under two different scenarios:
1. page cache always hits, so no real I/O for database reads
2. echo 3 > /proc/sys/vm/drop_caches, so reads have to go to disk

For case 1, using cluster_size and using llc_size result in a similar
tps of ~108000, with all 24 cpus at 100% utilization.

For case 2, using llc_size still shows better performance.

tps for each test round (cluster size as factor in wake_wide):
1398.450887 1275.020401 1632.542437 1412.241627 1611.095692 1381.354294 1539.877146
avg tps = 1464

tps for each test round (llc size as factor in wake_wide):
1718.402983 1443.169823 1502.353823 1607.415861 1597.396924 1745.651814 1876.802168
avg tps = 1641  (+12%)

So it seems using cluster_size as the factor in "slave >= factor && master >=
slave * factor" isn't a good choice, for my machine at least.

Thanks
Barry
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-05-03 11:35               ` Song Bao Hua (Barry Song)
@ 2021-05-05 12:29                 ` Dietmar Eggemann
  2021-05-07 13:07                   ` Song Bao Hua (Barry Song)
  0 siblings, 1 reply; 19+ messages in thread
From: Dietmar Eggemann @ 2021-05-05 12:29 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song), Vincent Guittot
  Cc: tim.c.chen, catalin.marinas, will, rjw, bp, tglx, mingo, lenb,
	peterz, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa

On 03/05/2021 13:35, Song Bao Hua (Barry Song) wrote:

[...]

>> From: Song Bao Hua (Barry Song)

[...]

>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]

[...]

>>> On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
>>>
>>> [...]
>>>
>>>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
>>>>>
>>>>> [...]
>>>>>
>>>>>>>> On 20/04/2021 02:18, Barry Song wrote:

[...]

> 
> On the other hand, according to "sched: Implement smarter wake-affine logic"
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62470419
> 
> Proper factor in wake_wide is mainly beneficial of 1:n tasks like postgresql/pgbench.
> So using the smaller cluster size as factor might help make wake_affine false so
> improve pgbench.
> 
> From the commit log, while clients =  2*cpus, the commit made the biggest
> improvement. In my case, It should be clients=48 for a machine whose LLC
> size is 24.
> 
> In Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
> under two different scenarios:
> 1. page cache always hit, so no real I/O for database read
> 2. echo 3 > /proc/sys/vm/drop_caches
> 
> For case 1, using cluster_size and using llc_size will result in similar
> tps= ~108000, all of 24 cpus have 100% cpu utilization.
> 
> For case 2, using llc_size still shows better performance.
> 
> tps for each test round(cluster size as factor in wake_wide):
> 1398.450887 1275.020401 1632.542437 1412.241627 1611.095692 1381.354294 1539.877146
> avg tps = 1464
> 
> tps for each test round(llc size as factor in wake_wide):
> 1718.402983 1443.169823 1502.353823 1607.415861 1597.396924 1745.651814 1876.802168
> avg tps = 1641  (+12%)
> 
> so it seems using cluster_size as factor in "slave >= factor && master >= slave *
> factor" isn't a good choice for my machine at least.

So SD size = 4 (instead of 24) seems to be too small for `-c 48`.

Just curious, have you seen the benefit of using wake wide on SD size =
24 (LLC) compared to not using it at all?

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-05-05 12:29                 ` Dietmar Eggemann
@ 2021-05-07 13:07                   ` Song Bao Hua (Barry Song)
  2021-05-13 12:32                     ` Dietmar Eggemann
  0 siblings, 1 reply; 19+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-05-07 13:07 UTC (permalink / raw)
  To: Dietmar Eggemann, Vincent Guittot
  Cc: tim.c.chen, catalin.marinas, will, rjw, bp, tglx, mingo, lenb,
	peterz, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> Sent: Thursday, May 6, 2021 12:30 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>; Vincent Guittot
> <vincent.guittot@linaro.org>
> Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; bp@alien8.de; tglx@linutronix.de; mingo@redhat.com;
> lenb@kernel.org; peterz@infradead.org; rostedt@goodmis.org;
> bsegall@google.com; mgorman@suse.de; msys.mizuma@gmail.com;
> valentin.schneider@arm.com; gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>; juri.lelli@redhat.com; mark.rutland@arm.com;
> sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> On 03/05/2021 13:35, Song Bao Hua (Barry Song) wrote:
> 
> [...]
> 
> >> From: Song Bao Hua (Barry Song)
> 
> [...]
> 
> >>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> 
> [...]
> 
> >>> On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
> >>>>
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >>>
> >>> [...]
> >>>
> >>>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >>>>>
> >>>>> [...]
> >>>>>
> >>>>>>>> On 20/04/2021 02:18, Barry Song wrote:
> 
> [...]
> 
> >
> > On the other hand, according to "sched: Implement smarter wake-affine logic"
> >
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/
> ?id=62470419
> >
> > Proper factor in wake_wide is mainly beneficial of 1:n tasks like
> postgresql/pgbench.
> > So using the smaller cluster size as factor might help make wake_affine false
> so
> > improve pgbench.
> >
> > From the commit log, while clients =  2*cpus, the commit made the biggest
> > improvement. In my case, It should be clients=48 for a machine whose LLC
> > size is 24.
> >
> > In Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
> > under two different scenarios:
> > 1. page cache always hit, so no real I/O for database read
> > 2. echo 3 > /proc/sys/vm/drop_caches
> >
> > For case 1, using cluster_size and using llc_size will result in similar
> > tps= ~108000, all of 24 cpus have 100% cpu utilization.
> >
> > For case 2, using llc_size still shows better performance.
> >
> > tps for each test round(cluster size as factor in wake_wide):
> > 1398.450887 1275.020401 1632.542437 1412.241627 1611.095692 1381.354294
> 1539.877146
> > avg tps = 1464
> >
> > tps for each test round(llc size as factor in wake_wide):
> > 1718.402983 1443.169823 1502.353823 1607.415861 1597.396924 1745.651814
> 1876.802168
> > avg tps = 1641  (+12%)
> >
> > so it seems using cluster_size as factor in "slave >= factor && master >=
> slave *
> > factor" isn't a good choice for my machine at least.
> 
> So SD size = 4 (instead of 24) seems to be too small for `-c 48`.
> 
> Just curious, have you seen the benefit of using wake wide on SD size =
> 24 (LLC) compared to not using it at all?

At least in my benchmark made today, I have not seen any benefit from
using llc_size. Always returning 0 in wake_wide() seems to be much better.
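
The hack I tested is essentially nothing more than:

static int wake_wide(struct task_struct *p)
{
	/*
	 * never wake wide; want_affine is then decided by the
	 * cpumask check alone
	 */
	return 0;
}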

postgres@ubuntu:$pgbench -i pgbench
postgres@pgbench:$ pgbench -T 120 -c 48 pgbench

using llc_size, it got to 123 tps
always returning 0 in wake_wide(), it got to 158 tps

Actually, I really couldn't reproduce the performance improvement
the commit "sched: Implement smarter wake-affine logic" mentioned.
On the other hand, the commit log didn't give the pgbench command
parameters used. I guess the benchmark result depends heavily on
the command parameters and disk I/O speed.

Thanks
Barry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-05-07 13:07                   ` Song Bao Hua (Barry Song)
@ 2021-05-13 12:32                     ` Dietmar Eggemann
  2021-05-25  8:14                       ` Song Bao Hua (Barry Song)
  2021-05-26  9:54                       ` Song Bao Hua (Barry Song)
  0 siblings, 2 replies; 19+ messages in thread
From: Dietmar Eggemann @ 2021-05-13 12:32 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song), Vincent Guittot
  Cc: tim.c.chen, catalin.marinas, will, rjw, bp, tglx, mingo, lenb,
	peterz, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa

On 07/05/2021 15:07, Song Bao Hua (Barry Song) wrote:
> 
> 
>> -----Original Message-----
>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]

[...]

>> On 03/05/2021 13:35, Song Bao Hua (Barry Song) wrote:
>>
>> [...]
>>
>>>> From: Song Bao Hua (Barry Song)
>>
>> [...]
>>
>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
>>
>> [...]
>>
>>>>> On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
>>>>>>
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
>>>>>
>>>>> [...]
>>>>>
>>>>>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
>>>>>>>
>>>>>>> [...]
>>>>>>>
>>>>>>>>>> On 20/04/2021 02:18, Barry Song wrote:
>>
>> [...]
>>
>>>
>>> On the other hand, according to "sched: Implement smarter wake-affine logic"
>>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/
>> ?id=62470419
>>>
>>> Proper factor in wake_wide is mainly beneficial of 1:n tasks like
>> postgresql/pgbench.
>>> So using the smaller cluster size as factor might help make wake_affine false
>> so
>>> improve pgbench.
>>>
>>> From the commit log, while clients =  2*cpus, the commit made the biggest
>>> improvement. In my case, It should be clients=48 for a machine whose LLC
>>> size is 24.
>>>
>>> In Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
>>> under two different scenarios:
>>> 1. page cache always hit, so no real I/O for database read
>>> 2. echo 3 > /proc/sys/vm/drop_caches
>>>
>>> For case 1, using cluster_size and using llc_size will result in similar
>>> tps= ~108000, all of 24 cpus have 100% cpu utilization.
>>>
>>> For case 2, using llc_size still shows better performance.
>>>
>>> tps for each test round(cluster size as factor in wake_wide):
>>> 1398.450887 1275.020401 1632.542437 1412.241627 1611.095692 1381.354294
>> 1539.877146
>>> avg tps = 1464
>>>
>>> tps for each test round(llc size as factor in wake_wide):
>>> 1718.402983 1443.169823 1502.353823 1607.415861 1597.396924 1745.651814
>> 1876.802168
>>> avg tps = 1641  (+12%)
>>>
>>> so it seems using cluster_size as factor in "slave >= factor && master >=
>> slave *
>>> factor" isn't a good choice for my machine at least.
>>
>> So SD size = 4 (instead of 24) seems to be too small for `-c 48`.
>>
>> Just curious, have you seen the benefit of using wake wide on SD size =
>> 24 (LLC) compared to not using it at all?
> 
> At least in my benchmark made today, I have not seen any benefit to use
> llc_size. Always returning 0 in wake_wide() seems to be much better.
> 
> postgres@ubuntu:$pgbench -i pgbench
> postgres@pgbench:$ pgbench -T 120 -c 48 pgbench
> 
> using llc_size, it got to 123tps
> always returning 0 in wake_wide(), it got to 158tps
> 
> actually, I really couldn't reproduce the performance improvement
> the commit "sched: Implement smarter wake-affine logic" mentioned.
> on the other hand, the commit log didn't present the pgbench command
> parameter used. I guess the benchmark result will highly depend on
> the command parameter and disk I/O speed.

I see. And it was a way smaller machine (12 CPUs) back then.

You could run pgbench via mmtests https://github.com/gormanm/mmtests.

I.e. the `timed-ro-medium` test.

mmtests# ./run-mmtests.sh --config
./configs/config-db-pgbench-timed-ro-medium test_tag

/shellpacks/shellpack-bench-pgbench contains all the individual test
steps. Something you could use as a template for your pgbench standalone
tests as well.

I ran this test on an Intel Xeon E5-2690 v2 with 40 CPUs and 64GB of
memory on v5.12 vanilla and w/o wakewide.
The test uses `scale_factor = 2570` on this machine. I guess this
relates to ~41GB? At least that was the size of the
mmtests/work/testdisk/data/pgdata directory when the test started.


mmtests/work/log# ../../compare-kernels.sh --baseline base --compare
wo_wakewide | grep ^Hmean


      #clients  v5.12 vanilla          v5.12 w/o wakewide

Hmean     1     10903.88 (   0.00%)    10792.59 *  -1.02%*
Hmean     6     28480.60 (   0.00%)    27954.97 *  -1.85%*
Hmean     12    49197.55 (   0.00%)    47758.16 *  -2.93%*
Hmean     22    72902.37 (   0.00%)    71314.01 *  -2.18%*
Hmean     30    75468.16 (   0.00%)    75929.17 *   0.61%*
Hmean     48    60155.58 (   0.00%)    60471.91 *   0.53%*
Hmean     80    62202.38 (   0.00%)    60814.76 *  -2.23%*


So there are some improvements w/ wakewide but nothing of the scale
shown in the original wakewide patch.

I'm not an expert on how to set up these pgbench tests though. So maybe
other pgbench related mmtests configs or some more fine-grained tuning
can produce bigger diffs?

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-05-13 12:32                     ` Dietmar Eggemann
@ 2021-05-25  8:14                       ` Song Bao Hua (Barry Song)
  2021-05-26  9:54                       ` Song Bao Hua (Barry Song)
  1 sibling, 0 replies; 19+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-05-25  8:14 UTC (permalink / raw)
  To: Dietmar Eggemann, Vincent Guittot
  Cc: tim.c.chen, catalin.marinas, will, rjw, bp, tglx, mingo, lenb,
	peterz, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> Sent: Friday, May 14, 2021 12:32 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>; Vincent Guittot
> <vincent.guittot@linaro.org>
> Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; bp@alien8.de; tglx@linutronix.de; mingo@redhat.com;
> lenb@kernel.org; peterz@infradead.org; rostedt@goodmis.org;
> bsegall@google.com; mgorman@suse.de; msys.mizuma@gmail.com;
> valentin.schneider@arm.com; gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>; juri.lelli@redhat.com; mark.rutland@arm.com;
> sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> On 07/05/2021 15:07, Song Bao Hua (Barry Song) wrote:
> >
> >
> >> -----Original Message-----
> >> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> 
> [...]
> 
> >> On 03/05/2021 13:35, Song Bao Hua (Barry Song) wrote:
> >>
> >> [...]
> >>
> >>>> From: Song Bao Hua (Barry Song)
> >>
> >> [...]
> >>
> >>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >>
> >> [...]
> >>
> >>>>> On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
> >>>>>>
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >>>>>
> >>>>> [...]
> >>>>>
> >>>>>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >>>>>>>
> >>>>>>> [...]
> >>>>>>>
> >>>>>>>>>> On 20/04/2021 02:18, Barry Song wrote:
> >>
> >> [...]
> >>
> >>>
> >>> On the other hand, according to "sched: Implement smarter wake-affine logic"
> >>>
> >>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/
> >> ?id=62470419
> >>>
> >>> Proper factor in wake_wide is mainly beneficial of 1:n tasks like
> >> postgresql/pgbench.
> >>> So using the smaller cluster size as factor might help make wake_affine
> false
> >> so
> >>> improve pgbench.
> >>>
> >>> From the commit log, while clients =  2*cpus, the commit made the biggest
> >>> improvement. In my case, It should be clients=48 for a machine whose LLC
> >>> size is 24.
> >>>
> >>> In Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
> >>> under two different scenarios:
> >>> 1. page cache always hit, so no real I/O for database read
> >>> 2. echo 3 > /proc/sys/vm/drop_caches
> >>>
> >>> For case 1, using cluster_size and using llc_size will result in similar
> >>> tps= ~108000, all of 24 cpus have 100% cpu utilization.
> >>>
> >>> For case 2, using llc_size still shows better performance.
> >>>
> >>> tps for each test round(cluster size as factor in wake_wide):
> >>> 1398.450887 1275.020401 1632.542437 1412.241627 1611.095692 1381.354294
> >> 1539.877146
> >>> avg tps = 1464
> >>>
> >>> tps for each test round(llc size as factor in wake_wide):
> >>> 1718.402983 1443.169823 1502.353823 1607.415861 1597.396924 1745.651814
> >> 1876.802168
> >>> avg tps = 1641  (+12%)
> >>>
> >>> so it seems using cluster_size as factor in "slave >= factor && master >=
> >> slave *
> >>> factor" isn't a good choice for my machine at least.
> >>
> >> So SD size = 4 (instead of 24) seems to be too small for `-c 48`.
> >>
> >> Just curious, have you seen the benefit of using wake wide on SD size =
> >> 24 (LLC) compared to not using it at all?
> >
> > At least in my benchmark made today, I have not seen any benefit to use
> > llc_size. Always returning 0 in wake_wide() seems to be much better.
> >
> > postgres@ubuntu:$pgbench -i pgbench
> > postgres@pgbench:$ pgbench -T 120 -c 48 pgbench
> >
> > using llc_size, it got to 123tps
> > always returning 0 in wake_wide(), it got to 158tps
> >
> > actually, I really couldn't reproduce the performance improvement
> > the commit "sched: Implement smarter wake-affine logic" mentioned.
> > on the other hand, the commit log didn't present the pgbench command
> > parameter used. I guess the benchmark result will highly depend on
> > the command parameter and disk I/O speed.
> 
> I see. And it was a way smaller machine (12 CPUs) back then.
> 
> You could run pgbench via mmtests https://github.com/gormanm/mmtests.
> 
> I.e the `timed-ro-medium` test.
> 
> mmtests# ./run-mmtests.sh --config
> ./configs/config-db-pgbench-timed-ro-medium test_tag
> 
> /shellpacks/shellpack-bench-pgbench contains all the individual test
> steps. Something you could use as a template for your pgbench standalone
> tests as well.
> 
> I ran this test on an Intel Xeon E5-2690 v2 with 40 CPUs and 64GB of
> memory on v5.12 vanilla and w/o wakewide.
> The test uses `scale_factor = 2570` on this machine. I guess this
> relates to ~41GB? At least this was the size of the:

Thanks, Dietmar, and sorry for the slow response; I was on sick leave
for the whole of last week.

I feel it makes much more sense to use mmtests, which sets scale_factor
according to total memory size and thus accounts for the impact of the
page cache. It also warms the database up for 30 minutes.

I will get more data and compare three cases:
1. use the cluster size as the wake_wide() factor
2. use the llc size as the wake_wide() factor
3. always return 0 in wake_wide()

and post the results afterwards.

> 
> #mmtests/work/testdisk/data/pgdata directory when the test started.
> 
> 
> mmtests/work/log# ../../compare-kernels.sh --baseline base --compare
> wo_wakewide | grep ^Hmean
> 
> 
>       #clients  v5.12 vanilla          v5.12 w/o wakewide
> 
> Hmean     1     10903.88 (   0.00%)    10792.59 *  -1.02%*
> Hmean     6     28480.60 (   0.00%)    27954.97 *  -1.85%*
> Hmean     12    49197.55 (   0.00%)    47758.16 *  -2.93%*
> Hmean     22    72902.37 (   0.00%)    71314.01 *  -2.18%*
> Hmean     30    75468.16 (   0.00%)    75929.17 *   0.61%*
> Hmean     48    60155.58 (   0.00%)    60471.91 *   0.53%*
> Hmean     80    62202.38 (   0.00%)    60814.76 *  -2.23%*
> 
> 
> So there are some improvements w/ wakewide but nothing of the scale
> showed in the original wakewide patch.
> 
> I'm not an expert on how to set up these pgbench tests though. So maybe
> other pgbench related mmtests configs or some more fine-grained tuning
> can produce bigger diffs?

Thanks
Barry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC
  2021-05-13 12:32                     ` Dietmar Eggemann
  2021-05-25  8:14                       ` Song Bao Hua (Barry Song)
@ 2021-05-26  9:54                       ` Song Bao Hua (Barry Song)
  1 sibling, 0 replies; 19+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-05-26  9:54 UTC (permalink / raw)
  To: Dietmar Eggemann, Vincent Guittot
  Cc: tim.c.chen, catalin.marinas, will, rjw, bp, tglx, mingo, lenb,
	peterz, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Tuesday, May 25, 2021 8:07 PM
> To: 'Dietmar Eggemann' <dietmar.eggemann@arm.com>; Vincent Guittot
> <vincent.guittot@linaro.org>
> Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; bp@alien8.de; tglx@linutronix.de; mingo@redhat.com;
> lenb@kernel.org; peterz@infradead.org; rostedt@goodmis.org;
> bsegall@google.com; mgorman@suse.de; msys.mizuma@gmail.com;
> valentin.schneider@arm.com; gregkh@linuxfoundation.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>; juri.lelli@redhat.com; mark.rutland@arm.com;
> sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> linuxarm@openeuler.org; hpa@zytor.com
> Subject: RE: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> within one LLC
> 
> 
> 
> > -----Original Message-----
> > From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > Sent: Friday, May 14, 2021 12:32 AM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>; Vincent Guittot
> > <vincent.guittot@linaro.org>
> > Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > rjw@rjwysocki.net; bp@alien8.de; tglx@linutronix.de; mingo@redhat.com;
> > lenb@kernel.org; peterz@infradead.org; rostedt@goodmis.org;
> > bsegall@google.com; mgorman@suse.de; msys.mizuma@gmail.com;
> > valentin.schneider@arm.com; gregkh@linuxfoundation.org; Jonathan Cameron
> > <jonathan.cameron@huawei.com>; juri.lelli@redhat.com;
> mark.rutland@arm.com;
> > sudeep.holla@arm.com; aubrey.li@linux.intel.com;
> > linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> > linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O) <xuwei5@huawei.com>;
> > Zengtao (B) <prime.zeng@hisilicon.com>; guodong.xu@linaro.org; yangyicong
> > <yangyicong@huawei.com>; Liguozhu (Kenneth) <liguozhu@hisilicon.com>;
> > linuxarm@openeuler.org; hpa@zytor.com
> > Subject: Re: [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks
> > within one LLC
> >
> > On 07/05/2021 15:07, Song Bao Hua (Barry Song) wrote:
> > >
> > >
> > >> -----Original Message-----
> > >> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> >
> > [...]
> >
> > >> On 03/05/2021 13:35, Song Bao Hua (Barry Song) wrote:
> > >>
> > >> [...]
> > >>
> > >>>> From: Song Bao Hua (Barry Song)
> > >>
> > >> [...]
> > >>
> > >>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > >>
> > >> [...]
> > >>
> > >>>>> On 29/04/2021 00:41, Song Bao Hua (Barry Song) wrote:
> > >>>>>>
> > >>>>>>
> > >>>>>>> -----Original Message-----
> > >>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > >>>>>
> > >>>>> [...]
> > >>>>>
> > >>>>>>>>>> From: Dietmar Eggemann [mailto:dietmar.eggemann@arm.com]
> > >>>>>>>
> > >>>>>>> [...]
> > >>>>>>>
> > >>>>>>>>>> On 20/04/2021 02:18, Barry Song wrote:
> > >>
> > >> [...]
> > >>
> > >>>
> > >>> On the other hand, according to "sched: Implement smarter wake-affine
> logic"
> > >>>
> > >>
> >
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/
> > >> ?id=62470419
> > >>>
> > >>> Proper factor in wake_wide is mainly beneficial of 1:n tasks like
> > >> postgresql/pgbench.
> > >>> So using the smaller cluster size as factor might help make wake_affine
> > false
> > >> so
> > >>> improve pgbench.
> > >>>
> > >>> From the commit log, while clients =  2*cpus, the commit made the biggest
> > >>> improvement. In my case, It should be clients=48 for a machine whose LLC
> > >>> size is 24.
> > >>>
> > >>> In Linux, I created a 240MB database and ran "pgbench -c 48 -S -T 20 pgbench"
> > >>> under two different scenarios:
> > >>> 1. page cache always hit, so no real I/O for database read
> > >>> 2. echo 3 > /proc/sys/vm/drop_caches
> > >>>
> > >>> For case 1, using cluster_size and using llc_size will result in similar
> > >>> tps= ~108000, all of 24 cpus have 100% cpu utilization.
> > >>>
> > >>> For case 2, using llc_size still shows better performance.
> > >>>
> > >>> tps for each test round(cluster size as factor in wake_wide):
> > >>> 1398.450887 1275.020401 1632.542437 1412.241627 1611.095692 1381.354294
> > >> 1539.877146
> > >>> avg tps = 1464
> > >>>
> > >>> tps for each test round(llc size as factor in wake_wide):
> > >>> 1718.402983 1443.169823 1502.353823 1607.415861 1597.396924 1745.651814
> > >> 1876.802168
> > >>> avg tps = 1641  (+12%)
> > >>>
> > >>> so it seems using cluster_size as factor in "slave >= factor && master >=
> > >> slave *
> > >>> factor" isn't a good choice for my machine at least.
> > >>
> > >> So SD size = 4 (instead of 24) seems to be too small for `-c 48`.
> > >>
> > >> Just curious, have you seen the benefit of using wake wide on SD size =
> > >> 24 (LLC) compared to not using it at all?
> > >
> > > At least in my benchmark made today, I have not seen any benefit to use
> > > llc_size. Always returning 0 in wake_wide() seems to be much better.
> > >
> > > postgres@ubuntu:$pgbench -i pgbench
> > > postgres@pgbench:$ pgbench -T 120 -c 48 pgbench
> > >
> > > using llc_size, it got to 123tps
> > > always returning 0 in wake_wide(), it got to 158tps
> > >
> > > actually, I really couldn't reproduce the performance improvement
> > > the commit "sched: Implement smarter wake-affine logic" mentioned.
> > > on the other hand, the commit log didn't present the pgbench command
> > > parameter used. I guess the benchmark result will highly depend on
> > > the command parameter and disk I/O speed.
> >
> > I see. And it was a way smaller machine (12 CPUs) back then.
> >
> > You could run pgbench via mmtests https://github.com/gormanm/mmtests.
> >
> > I.e the `timed-ro-medium` test.
> >
> > mmtests# ./run-mmtests.sh --config
> > ./configs/config-db-pgbench-timed-ro-medium test_tag
> >
> > /shellpacks/shellpack-bench-pgbench contains all the individual test
> > steps. Something you could use as a template for your pgbench standalone
> > tests as well.
> >
> > I ran this test on an Intel Xeon E5-2690 v2 with 40 CPUs and 64GB of
> > memory on v5.12 vanilla and w/o wakewide.
> > The test uses `scale_factor = 2570` on this machine. I guess this
> > relates to ~41GB? At least this was the size of the:
> 
> Thanks. Dietmar, sorry for slow response. Sick leave for the whole
> last week.
> 
> I feel it makes much more sense to use mmtests which is setting
> scale_factor according to total memory size, thus, considering
> the impact of page cache. And it is also doing database warming-up
> for 30minutes.
> 
> I will get more data and compare three cases:
> 1. use cluster as wake_wide factor
> 2. use llc as wake_wide factor
> 3. always return 0 in wake_wide.
> 
> and post the result afterwards.

I used only one NUMA node with 24 cpus and 60GB of memory
(scale factor: 2392) for the test. As mentioned before, all
cpus in one NUMA node share one LLC, so waker and wakee are
in the same LLC domain.

Basically, the difference between using the cluster size and
the llc size as the factor in wake_wide() is just noise at
1, 8, 12, 24 and 32 of 48 threads. But at 48/48 threads
(system is busy), using the llc size as the factor shows a
4%+ pgbench improvement.

                 cluster_as_factor     llc_as_factor
Hmean     1     10779.67 (   0.00%)    10869.27 *   0.83%*
Hmean     8     19595.09 (   0.00%)    19580.59 *  -0.07%*
Hmean     12    29553.06 (   0.00%)    29643.56 *   0.31%*
Hmean     24    43368.55 (   0.00%)    43194.47 *  -0.40%*
Hmean     32    40258.08 (   0.00%)    40163.23 *  -0.24%*
Hmean     48    40450.42 (   0.00%)    42249.29 *   4.45%*

I can see a further 14%+ improvement in the 48/48 threads case
if I don't depend on wake_wide() at all, i.e. if wake_wide()
always returns 0.

                llc_as_factor          don't_use_wake_wide
Hmean     1     10869.27 (   0.00%)    10723.08 *  -1.34%*
Hmean     8     19580.59 (   0.00%)    19469.34 *  -0.57%*
Hmean     12    29643.56 (   0.00%)    29520.16 *  -0.42%*
Hmean     24    43194.47 (   0.00%)    43774.78 *   1.34%*
Hmean     32    40163.23 (   0.00%)    40742.93 *   1.44%*
Hmean     48    42249.29 (   0.00%)    48329.00 *  14.39%*

I'm beginning to believe wake_wide() is useless when waker and wakee
are already in the same LLC. So I sent another patch to address this
generic issue:
[PATCH] sched: fair: don't depend on wake_wide if waker and wakee are already in same LLC
https://lore.kernel.org/lkml/20210526091057.1800-1-song.bao.hua@hisilicon.com/
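
The idea in that patch is roughly along these lines (only a rough sketch
here; see the lore link above for the actual diff):

	/* in select_task_rq_fair() */
	if (wake_flags & WF_TTWU) {
		record_wakee(p);
		/*
		 * If waker and wakee are already in the same LLC, don't
		 * let wake_wide() veto an affine wakeup.
		 */
		want_affine = (cpus_share_cache(cpu, prev_cpu) || !wake_wide(p)) &&
			      cpumask_test_cpu(cpu, p->cpus_ptr);
	}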

> 
> >
> > #mmtests/work/testdisk/data/pgdata directory when the test started.
> >
> >
> > mmtests/work/log# ../../compare-kernels.sh --baseline base --compare
> > wo_wakewide | grep ^Hmean
> >
> >
> >       #clients  v5.12 vanilla          v5.12 w/o wakewide
> >
> > Hmean     1     10903.88 (   0.00%)    10792.59 *  -1.02%*
> > Hmean     6     28480.60 (   0.00%)    27954.97 *  -1.85%*
> > Hmean     12    49197.55 (   0.00%)    47758.16 *  -2.93%*
> > Hmean     22    72902.37 (   0.00%)    71314.01 *  -2.18%*
> > Hmean     30    75468.16 (   0.00%)    75929.17 *   0.61%*
> > Hmean     48    60155.58 (   0.00%)    60471.91 *   0.53%*
> > Hmean     80    62202.38 (   0.00%)    60814.76 *  -2.23%*
> >
> >
> > So there are some improvements w/ wakewide but nothing of the scale
> > showed in the original wakewide patch.
> >
> > I'm not an expert on how to set up these pgbench tests though. So maybe
> > other pgbench related mmtests configs or some more fine-grained tuning
> > can produce bigger diffs?
> 
> Thanks
> Barry

Thanks
Barry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2021-05-26 12:07 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-20  0:18 [RFC PATCH v6 0/4] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
2021-04-20  0:18 ` [RFC PATCH v6 1/4] topology: Represent clusters of CPUs within a die Barry Song
2021-04-28  9:48   ` Andrew Jones
2021-04-30  3:46     ` Song Bao Hua (Barry Song)
2021-04-20  0:18 ` [RFC PATCH v6 2/4] scheduler: add scheduler level for clusters Barry Song
2021-04-20  0:18 ` [RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC Barry Song
2021-04-27 11:35   ` Dietmar Eggemann
2021-04-28  9:51     ` Song Bao Hua (Barry Song)
2021-04-28 13:04       ` Vincent Guittot
2021-04-28 16:47         ` Dietmar Eggemann
     [not found]           ` <185746c4d02a485ca8f3509439328b26@hisilicon.com>
2021-04-30 10:42             ` Dietmar Eggemann
2021-05-03  6:19               ` Song Bao Hua (Barry Song)
2021-05-03 11:35               ` Song Bao Hua (Barry Song)
2021-05-05 12:29                 ` Dietmar Eggemann
2021-05-07 13:07                   ` Song Bao Hua (Barry Song)
2021-05-13 12:32                     ` Dietmar Eggemann
2021-05-25  8:14                       ` Song Bao Hua (Barry Song)
2021-05-26  9:54                       ` Song Bao Hua (Barry Song)
2021-04-20  0:18 ` [RFC PATCH v6 4/4] scheduler: Add cluster scheduler level for x86 Barry Song

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).