* [RFC PATCH v4 0/3] scheduler: expose the topology of clusters and add cluster scheduler
@ 2021-03-01 22:59 ` Barry Song
  0 siblings, 0 replies; 28+ messages in thread
From: Barry Song @ 2021-03-01 22:59 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

The ARM64 server chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and
each cluster has 4 CPUs. All clusters share the L3 cache data, while each
cluster has a local L3 tag. In addition, the CPUs within a cluster share some
internal system bus. This means the cache is much more affine inside one
cluster than across clusters.

    +-----------------------------------+                          +---------+
    |  +------+    +------+            +---------------------------+         |
    |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
    |  +------+    +------+             |    |           |         |         |
    |                                   +----+    L3     |         |         |
    |  +------+    +------+   cluster   |    |    tag    |         |         |
    |  | CPU2 |    | CPU3 |             |    |           |         |         |
    |  +------+    +------+             |    +-----------+         |         |
    |                                   |                          |         |
    +-----------------------------------+                          |         |
    +-----------------------------------+                          |         |
    |  +------+    +------+             +--------------------------+         |
    |  |      |    |      |             |    +-----------+         |         |
    |  +------+    +------+             |    |           |         |         |
    |                                   |    |    L3     |         |         |
    |  +------+    +------+             +----+    tag    |         |         |
    |  |      |    |      |             |    |           |         |         |
    |  +------+    +------+             |    +-----------+         |         |
    |                                   |                          |         |
    +-----------------------------------+                          |   L3    |
                                                                   |   data  |
    +-----------------------------------+                          |         |
    |  +------+    +------+             |    +-----------+         |         |
    |  |      |    |      |             |    |           |         |         |
    |  +------+    +------+             +----+    L3     |         |         |
    |                                   |    |    tag    |         |         |
    |  +------+    +------+             |    |           |         |         |
    |  |      |    |      |            ++    +-----------+         |         |
    |  +------+    +------+            |---------------------------+         |
    +-----------------------------------|                          |         |
    +-----------------------------------|                          |         |
    |  +------+    +------+            +---------------------------+         |
    |  |      |    |      |             |    +-----------+         |         |
    |  +------+    +------+             |    |           |         |         |
    |                                   +----+    L3     |         |         |
    |  +------+    +------+             |    |    tag    |         |         |
    |  |      |    |      |             |    |           |         |         |
    |  +------+    +------+             |    +-----------+         |         |
    |                                   |                          |         |
    +-----------------------------------+                          |         |
    +-----------------------------------+                          |         |
    |  +------+    +------+             +--------------------------+         |
    |  |      |    |      |             |   +-----------+          |         |
    |  +------+    +------+             |   |           |          |         |
    |                                   |   |    L3     |          |         |
    |  +------+    +------+             +---+    tag    |          |         |
    |  |      |    |      |             |   |           |          |         |
    |  +------+    +------+             |   +-----------+          |         |
    |                                   |                          |         |
    +-----------------------------------+                          |         |
    +-----------------------------------+                         ++         |
    |  +------+    +------+             +--------------------------+         |
    |  |      |    |      |             |  +-----------+           |         |
    |  +------+    +------+             |  |           |           |         |
    |                                   |  |    L3     |           |         |
    |  +------+    +------+             +--+    tag    |           |         |
    |  |      |    |      |             |  |           |           |         |
    |  +------+    +------+             |  +-----------+           |         |
    |                                   |                          +---------+
    +-----------------------------------+


There is a similar need for clustering in x86.  Some x86 cores could share L2
caches, in a way similar to the clusters in Kunpeng 920 (e.g. on Jacobsville
there are 6 clusters of 4 Atom cores, each cluster sharing a separate L2, and
all 24 cores sharing the L3).

Having a sched_domain for clusters will bring two improvements:
1. spreading unrelated tasks among clusters, which decreases resource
contention and improves throughput.
Without a cluster sched_domain, unrelated tasks might be placed randomly:
+-------------------+            +-----------------+
| +----+   +----+   |            |                 |
| |task|   |task|   |            |                 |
| |1   |   |2   |   |            |                 |
| +----+   +----+   |            |                 |
|                   |            |                 |
|       cluster1    |            |     cluster2    |
+-------------------+            +-----------------+

but with a cluster sched_domain, they are likely to be spread by load balancing:
+-------------------+            +-----------------+
| +----+            |            | +----+          |
| |task|            |            | |task|          |
| |1   |            |            | |2   |          |
| +----+            |            | +----+          |
|                   |            |                 |
|       cluster1    |            |     cluster2    |
+-------------------+            +-----------------+

2. gathering related tasks within a cluster, which improves the cache affinity
of tasks talking with each other.
Without a cluster sched_domain, related tasks might be placed randomly. Suppose
tasks 1-8 are related as below:
Task1 wakes up task4
Task2 wakes up task5
Task3 wakes up task6
Task7 wakes up task8
With select_idle_cpu() tuned to scan the local cluster first (a sketch follows
the diagrams below), those tasks have a chance of being gathered like:
+---------------------------+    +----------------------+
| +----+        +-----+     |    | +----+      +-----+  |
| |task|        |task |     |    | |task|      |task |  |
| |1   |        | 4   |     |    | |2   |      |5    |  |
| +----+        +-----+     |    | +----+      +-----+  |
|                           |    |                      |
|       cluster1            |    |     cluster2         |
|                           |    |                      |
|                           |    |                      |
| +-----+       +------+    |    | +-----+     +------+ |
| |task |       | task |    |    | |task |     |task  | |
| |3    |       |  6   |    |    | |7    |     |8     | |
| +-----+       +------+    |    | +-----+     +------+ |
+---------------------------+    +----------------------+
Otherwise, the result might be:
+---------------------------+    +----------------------+
| +----+        +-----+     |    | +----+      +-----+  |
| |task|        |task |     |    | |task|      |task |  |
| |1   |        | 2   |     |    | |5   |      |6    |  |
| +----+        +-----+     |    | +----+      +-----+  |
|                           |    |                      |
|       cluster1            |    |     cluster2         |
|                           |    |                      |
|                           |    |                      |
| +-----+       +------+    |    | +-----+     +------+ |
| |task |       | task |    |    | |task |     |task  | |
| |3    |       |  4   |    |    | |7    |     |8     | |
| +-----+       +------+    |    | +-----+     +------+ |
+---------------------------+    +----------------------+
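
The scan order can be illustrated with a minimal sketch. This is not the
code in this series: select_idle_cpu_cluster() is a made-up name for
illustration, while available_idle_cpu(), cpu_coregroup_mask() and the
cluster cpumask added by this series are existing helpers:

static int select_idle_cpu_cluster(struct task_struct *p, int target)
{
	const struct cpumask *cluster = topology_cluster_cpumask(target);
	int cpu;

	/* First, look for an idle CPU inside the target's cluster. */
	for_each_cpu_and(cpu, cluster, p->cpus_ptr) {
		if (available_idle_cpu(cpu))
			return cpu;
	}

	/* Then fall back to the rest of the LLC domain. */
	for_each_cpu_and(cpu, cpu_coregroup_mask(target), p->cpus_ptr) {
		if (cpumask_test_cpu(cpu, cluster))
			continue;	/* already scanned above */
		if (available_idle_cpu(cpu))
			return cpu;
	}

	return -1;
}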

-v4:
  * rebased to tip/sched/core with the latest unified select_idle_cpu() code
  * added Tim's patch for x86 Jacobsville
  * also added benchmark data of spreading unrelated tasks
  * avoided the iteration of sched_domain by moving to a static_key (addressing
    Vincent's comment)
  * used acpi_cpu_id for acpi_find_processor_node() (addressing Masa's comment)

Barry Song (1):
  scheduler: add scheduler level for clusters

Jonathan Cameron (1):
  topology: Represent clusters of CPUs within a die.

Tim Chen (1):
  scheduler: Add cluster scheduler level for x86

 Documentation/admin-guide/cputopology.rst | 26 ++++++++++--
 arch/arm64/Kconfig                        |  7 ++++
 arch/arm64/kernel/topology.c              |  2 +
 arch/x86/Kconfig                          |  8 ++++
 arch/x86/include/asm/smp.h                |  7 ++++
 arch/x86/include/asm/topology.h           |  1 +
 arch/x86/kernel/cpu/cacheinfo.c           |  1 +
 arch/x86/kernel/cpu/common.c              |  3 ++
 arch/x86/kernel/smpboot.c                 | 43 +++++++++++++++++++-
 drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 +++++++
 drivers/base/topology.c                   | 10 +++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/sched/cluster.h             | 19 +++++++++
 include/linux/sched/sd_flags.h            |  9 +++++
 include/linux/sched/topology.h            |  7 ++++
 include/linux/topology.h                  | 13 ++++++
 kernel/sched/core.c                       | 18 +++++++++
 kernel/sched/fair.c                       | 66 ++++++++++++++++++++++++-------
 kernel/sched/sched.h                      |  1 +
 kernel/sched/topology.c                   |  6 +++
 22 files changed, 315 insertions(+), 19 deletions(-)
 create mode 100644 include/linux/sched/cluster.h

-- 
1.8.3.1


* [RFC PATCH v4 1/3] topology: Represent clusters of CPUs within a die.
  2021-03-01 22:59 ` Barry Song
@ 2021-03-01 22:59   ` Barry Song
  -1 siblings, 0 replies; 28+ messages in thread
From: Barry Song @ 2021-03-01 22:59 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Jonathan Cameron, Barry Song

From: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Node Structure representing
a higher level of topology.

For example, Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 CPUs. All clusters share the L3 cache data, but each cluster
has a local L3 tag. In addition, the CPUs within each cluster share some
internal system bus.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |
|                                   |   |    L3     |          |         |
|  +------+    +------+             +---+    tag    |          |         |
|  |      |    |      |             |   |           |          |         |
|  +------+    +------+             |   +-----------+          |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                         ++         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |  +-----------+           |         |
|  +------+    +------+             |  |           |           |         |
|                                   |  |    L3     |           |         |
|  +------+    +------+             +--+    tag    |           |         |
|  |      |    |      |             |  |           |           |         |
|  +------+    +------+             |  +-----------+           |         |
|                                   |                          +---------+
+-----------------------------------+

That means the cost to transfer ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decisions on thread
migration.

This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
PoC of HWLOC support at [2].

Note this patch only handles the ACPI case.
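
Once applied, the new attributes can be read like the existing topology
files. The paths are the ones added by this patch, but the values are only
illustrative (a 4-CPU cluster whose ID happens to be a PPTT offset), not
captured from real hardware:

$ cat /sys/devices/system/cpu/cpu0/topology/cluster_cpus_list
0-3
$ cat /sys/devices/system/cpu/cpu0/topology/cluster_id
56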

Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).

Currently the ID provided is the offset of the Processor Hierarchy
Node Structure within the PPTT.  Whilst this is unique, it is not
terribly elegant, so alternative suggestions are welcome.

Note that arm64 / ACPI does not provide any means of identifying
a die level in the topology, but that may be unrelated to the cluster
level.

[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
    structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
  -v4:
  * used acpi_cpu_id for acpi_find_processor_node() (addressing Masa's comment)

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
 arch/arm64/kernel/topology.c              |  2 +
 drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 +++++++
 drivers/base/topology.c                   | 10 +++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/topology.h                  |  6 +++
 8 files changed, 127 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
 	identifier (rather than the kernel's).  The actual value is
 	architecture and platform dependent.
 
+cluster_id:
+
+	the Cluster ID of cpuX.  Typically it is the hardware platform's
+	identifier (rather than the kernel's).  The actual value is
+	architecture and platform dependent.
+
 book_id:
 
 	the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
 	human-readable list of CPUs sharing the same physical_package_id.
 	(deprecated name: "core_siblings_list")
 
+cluster_cpus:
+
+	internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+	human-readable list of CPUs within the same cluster.
+
 die_cpus:
 
 	internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
 
 	#define topology_physical_package_id(cpu)
 	#define topology_die_id(cpu)
+	#define topology_cluster_id(cpu)
 	#define topology_core_id(cpu)
 	#define topology_book_id(cpu)
 	#define topology_drawer_id(cpu)
 	#define topology_sibling_cpumask(cpu)
 	#define topology_core_cpumask(cpu)
+	#define topology_cluster_cpumask(cpu)
 	#define topology_die_cpumask(cpu)
 	#define topology_book_cpumask(cpu)
 	#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
 
 1) topology_physical_package_id: -1
 2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
 
 For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
 default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index f6faa69..fe076b3 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id  = -1;
 			cpu_topology[cpu].core_id    = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
 
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..11f8b02 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,69 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 }
 
 /**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present, is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no topology level above the CPU.
+ * Otherwise returns a value which represents the cluster for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	u32 acpi_cpu_id;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+
+	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
+/**
  * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
  * @cpu: Kernel logical CPU number
  *
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..3079232 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
 
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
 	cpumask_clear(&cpu_topo->llc_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
 
+	cpumask_clear(&cpu_topo->cluster_sibling);
+	cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
 	cpumask_clear(&cpu_topo->core_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
 	cpumask_clear(&cpu_topo->thread_sibling);
@@ -571,6 +584,7 @@ void remove_cpu_topology(unsigned int cpu)
 		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
 	for_each_cpu(sibling, topology_llc_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
 
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..7157ac0 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@
 define_id_show_func(die_id);
 static DEVICE_ATTR_RO(die_id);
 
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
 define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);
 
@@ -61,6 +64,10 @@
 static DEVICE_ATTR_RO(core_siblings);
 static DEVICE_ATTR_RO(core_siblings_list);
 
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
 define_siblings_show_func(die_cpus, die_cpumask);
 static DEVICE_ATTR_RO(die_cpus);
 static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@
 static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
+	&dev_attr_cluster_id.attr,
 	&dev_attr_core_id.attr,
 	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@
 	&dev_attr_core_cpus_list.attr,
 	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
+	&dev_attr_cluster_cpus.attr,
+	&dev_attr_cluster_cpus_list.attr,
 	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
 	&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 053bf05..161196a 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1332,6 +1332,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
 #ifdef CONFIG_ACPI_PPTT
 int acpi_pptt_cpu_is_thread(unsigned int cpu);
 int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
 int find_acpi_cpu_topology_package(unsigned int cpu);
 int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
 int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1344,6 +1345,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
 {
 	return -EINVAL;
 }
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	return -EINVAL;
+}
 static inline int find_acpi_cpu_topology_package(unsigned int cpu)
 {
 	return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 0f6cd6b..987c7ea 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
 struct cpu_topology {
 	int thread_id;
 	int core_id;
+	int cluster_id;
 	int package_id;
 	int llc_id;
 	cpumask_t thread_sibling;
 	cpumask_t core_sibling;
+	cpumask_t cluster_sibling;
 	cpumask_t llc_sibling;
 };
 
@@ -60,13 +62,16 @@ struct cpu_topology {
 extern struct cpu_topology cpu_topology[NR_CPUS];
 
 #define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu)	(cpu_topology[cpu].cluster_id)
 #define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
 #define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
 #define topology_sibling_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)
 #define topology_llc_cpumask(cpu)	(&cpu_topology[cpu].llc_sibling)
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
 const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
 void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7634cd7..80d27d7 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -186,6 +186,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_id
 #define topology_die_id(cpu)			((void)(cpu), -1)
 #endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu)		((void)(cpu), -1)
+#endif
 #ifndef topology_core_id
 #define topology_core_id(cpu)			((void)(cpu), 0)
 #endif
@@ -195,6 +198,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_cpumask
 #define topology_core_cpumask(cpu)		cpumask_of(cpu)
 #endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu)		cpumask_of(cpu)
+#endif
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)		cpumask_of(cpu)
 #endif
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH v4 1/3] topology: Represent clusters of CPUs within a die.
@ 2021-03-01 22:59   ` Barry Song
  0 siblings, 0 replies; 28+ messages in thread
From: Barry Song @ 2021-03-01 22:59 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: juri.lelli, mark.rutland, aubrey.li, hpa, prime.zeng, guodong.xu,
	gregkh, sudeep.holla, linux-kernel, linuxarm, linux-acpi, xuwei5,
	Jonathan Cameron, yangyicong, x86, msys.mizuma, liguozhu,
	valentin.schneider, linux-arm-kernel, Barry Song

From: Jonathan Cameron <Jonathan.Cameron@huawei.com>

Both ACPI and DT provide the ability to describe additional layers of
topology between that of individual cores and higher level constructs
such as the level at which the last level cache is shared.
In ACPI this can be represented in PPTT as a Processor Hierarchy
Node Structure [1] that is the parent of the CPU cores and in turn
has a parent Processor Hierarchy Nodes Structure representing
a higher level of topology.

For example Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 cpus. All clusters share L3 cache data, but each cluster
has local L3 tag. On the other hand, each clusters will share some
internal system bus.

+-----------------------------------+                          +---------+
|  +------+    +------+            +---------------------------+         |
|  | CPU0 |    | cpu1 |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+   cluster   |    |    tag    |         |         |
|  | CPU2 |    | CPU3 |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   |    |    L3     |         |         |
|  +------+    +------+             +----+    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |   L3    |
                                                               |   data  |
+-----------------------------------+                          |         |
|  +------+    +------+             |    +-----------+         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             +----+    L3     |         |         |
|                                   |    |    tag    |         |         |
|  +------+    +------+             |    |           |         |         |
|  |      |    |      |            ++    +-----------+         |         |
|  +------+    +------+            |---------------------------+         |
+-----------------------------------|                          |         |
+-----------------------------------|                          |         |
|  +------+    +------+            +---------------------------+         |
|  |      |    |      |             |    +-----------+         |         |
|  +------+    +------+             |    |           |         |         |
|                                   +----+    L3     |         |         |
|  +------+    +------+             |    |    tag    |         |         |
|  |      |    |      |             |    |           |         |         |
|  +------+    +------+             |    +-----------+         |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                          |         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |   +-----------+          |         |
|  +------+    +------+             |   |           |          |         |
|                                   |   |    L3     |          |         |
|  +------+    +------+             +---+    tag    |          |         |
|  |      |    |      |             |   |           |          |         |
|  +------+    +------+             |   +-----------+          |         |
|                                   |                          |         |
+-----------------------------------+                          |         |
+-----------------------------------+                         ++         |
|  +------+    +------+             +--------------------------+         |
|  |      |    |      |             |  +-----------+           |         |
|  +------+    +------+             |  |           |           |         |
|                                   |  |    L3     |           |         |
|  +------+    +------+             +--+    tag    |           |         |
|  |      |    |      |             |  |           |           |         |
|  +------+    +------+             |  +-----------+           |         |
|                                   |                          +---------+
+-----------------------------------+

That means the cost to transfer ownership of a cacheline between CPUs
within a cluster is lower than between CPUs in different clusters on
the same die. Hence, it can make sense to tell the scheduler to use
the cache affinity of the cluster to make better decision on thread
migration.

This patch simply exposes this information to userspace libraries
like hwloc by providing cluster_cpus and related sysfs attributes.
PoC of HWLOC support at [2].

Note this patch only handle the ACPI case.

Special consideration is needed for SMT processors, where it is
necessary to move 2 levels up the hierarchy from the leaf nodes
(thus skipping the processor core level).

Currently the ID provided is the offset of the Processor
Hierarchy Nodes Structure within PPTT.  Whilst this is unique
it is not terribly elegant so alternative suggestions welcome.

Note that arm64 / ACPI does not provide any means of identifying
a die level in the topology but that may be unrelate to the cluster
level.

[1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
    structure (Type 0)
[2] https://github.com/hisilicon/hwloc/tree/linux-cluster

Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
  -v4:
  * used acpi_cpu_id for acpi_find_processor_node(addressing Masa's comment)

 Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
 arch/arm64/kernel/topology.c              |  2 +
 drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++++
 drivers/base/arch_topology.c              | 14 +++++++
 drivers/base/topology.c                   | 10 +++++
 include/linux/acpi.h                      |  5 +++
 include/linux/arch_topology.h             |  5 +++
 include/linux/topology.h                  |  6 +++
 8 files changed, 127 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cputopology.rst b/Documentation/admin-guide/cputopology.rst
index b90dafc..f9d3745 100644
--- a/Documentation/admin-guide/cputopology.rst
+++ b/Documentation/admin-guide/cputopology.rst
@@ -24,6 +24,12 @@ core_id:
 	identifier (rather than the kernel's).  The actual value is
 	architecture and platform dependent.
 
+cluster_id:
+
+	the Cluster ID of cpuX.  Typically it is the hardware platform's
+	identifier (rather than the kernel's).  The actual value is
+	architecture and platform dependent.
+
 book_id:
 
 	the book ID of cpuX. Typically it is the hardware platform's
@@ -56,6 +62,14 @@ package_cpus_list:
 	human-readable list of CPUs sharing the same physical_package_id.
 	(deprecated name: "core_siblings_list")
 
+cluster_cpus:
+
+	internal kernel map of CPUs within the same cluster.
+
+cluster_cpus_list:
+
+	human-readable list of CPUs within the same cluster.
+
 die_cpus:
 
 	internal kernel map of CPUs within the same die.
@@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
 
 	#define topology_physical_package_id(cpu)
 	#define topology_die_id(cpu)
+	#define topology_cluster_id(cpu)
 	#define topology_core_id(cpu)
 	#define topology_book_id(cpu)
 	#define topology_drawer_id(cpu)
 	#define topology_sibling_cpumask(cpu)
 	#define topology_core_cpumask(cpu)
+	#define topology_cluster_cpumask(cpu)
 	#define topology_die_cpumask(cpu)
 	#define topology_book_cpumask(cpu)
 	#define topology_drawer_cpumask(cpu)
@@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
 
 1) topology_physical_package_id: -1
 2) topology_die_id: -1
-3) topology_core_id: 0
-4) topology_sibling_cpumask: just the given CPU
-5) topology_core_cpumask: just the given CPU
-6) topology_die_cpumask: just the given CPU
+3) topology_cluster_id: -1
+4) topology_core_id: 0
+5) topology_sibling_cpumask: just the given CPU
+6) topology_core_cpumask: just the given CPU
+7) topology_cluster_cpumask: just the given CPU
+8) topology_die_cpumask: just the given CPU
 
 For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
 default definitions for topology_book_id() and topology_book_cpumask().
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index f6faa69..fe076b3 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
 			cpu_topology[cpu].thread_id  = -1;
 			cpu_topology[cpu].core_id    = topology_id;
 		}
+		topology_id = find_acpi_cpu_topology_cluster(cpu);
+		cpu_topology[cpu].cluster_id = topology_id;
 		topology_id = find_acpi_cpu_topology_package(cpu);
 		cpu_topology[cpu].package_id = topology_id;
 
diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
index 4ae9335..11f8b02 100644
--- a/drivers/acpi/pptt.c
+++ b/drivers/acpi/pptt.c
@@ -737,6 +737,69 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
 }
 
 /**
+ * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
+ * @cpu: Kernel logical CPU number
+ *
+ * Determine a topology unique cluster ID for the given CPU/thread.
+ * This ID can then be used to group peers, which will have matching ids.
+ *
+ * The cluster, if present is the level of topology above CPUs. In a
+ * multi-thread CPU, it will be the level above the CPU, not the thread.
+ * It may not exist in single CPU systems. In simple multi-CPU systems,
+ * it may be equal to the package topology level.
+ *
+ * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
+ * or there is no toplogy level above the CPU..
+ * Otherwise returns a value which represents the package for this CPU.
+ */
+
+int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	struct acpi_table_header *table;
+	acpi_status status;
+	struct acpi_pptt_processor *cpu_node, *cluster_node;
+	u32 acpi_cpu_id;
+	int retval;
+	int is_thread;
+
+	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+	if (ACPI_FAILURE(status)) {
+		acpi_pptt_warn_missing();
+		return -ENOENT;
+	}
+
+	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+	if (cpu_node == NULL || !cpu_node->parent) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+
+	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+	cluster_node = fetch_pptt_node(table, cpu_node->parent);
+	if (cluster_node == NULL) {
+		retval = -ENOENT;
+		goto put_table;
+	}
+	if (is_thread) {
+		if (!cluster_node->parent) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+		cluster_node = fetch_pptt_node(table, cluster_node->parent);
+		if (cluster_node == NULL) {
+			retval = -ENOENT;
+			goto put_table;
+		}
+	}
+	retval = ACPI_PTR_DIFF(cluster_node, table);
+put_table:
+	acpi_put_table(table);
+
+	return retval;
+}
+
+/**
  * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
  * @cpu: Kernel logical CPU number
  *
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index de8587c..3079232 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return core_mask;
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].cluster_sibling;
+}
+
 void update_siblings_masks(unsigned int cpuid)
 {
 	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
 		if (cpuid_topo->package_id != cpu_topo->package_id)
 			continue;
 
+		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
+			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
+		}
+
 		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
 		cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
 
@@ -541,6 +551,9 @@ static void clear_cpu_topology(int cpu)
 	cpumask_clear(&cpu_topo->llc_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->llc_sibling);
 
+	cpumask_clear(&cpu_topo->cluster_sibling);
+	cpumask_set_cpu(cpu, &cpu_topo->cluster_sibling);
+
 	cpumask_clear(&cpu_topo->core_sibling);
 	cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
 	cpumask_clear(&cpu_topo->thread_sibling);
@@ -571,6 +584,7 @@ void remove_cpu_topology(unsigned int cpu)
 		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
 	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
+
 	for_each_cpu(sibling, topology_llc_cpumask(cpu))
 		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));
 
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 4d254fc..7157ac0 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -46,6 +46,9 @@
 define_id_show_func(die_id);
 static DEVICE_ATTR_RO(die_id);
 
+define_id_show_func(cluster_id);
+static DEVICE_ATTR_RO(cluster_id);
+
 define_id_show_func(core_id);
 static DEVICE_ATTR_RO(core_id);
 
@@ -61,6 +64,10 @@
 static DEVICE_ATTR_RO(core_siblings);
 static DEVICE_ATTR_RO(core_siblings_list);
 
+define_siblings_show_func(cluster_cpus, cluster_cpumask);
+static DEVICE_ATTR_RO(cluster_cpus);
+static DEVICE_ATTR_RO(cluster_cpus_list);
+
 define_siblings_show_func(die_cpus, die_cpumask);
 static DEVICE_ATTR_RO(die_cpus);
 static DEVICE_ATTR_RO(die_cpus_list);
@@ -88,6 +95,7 @@
 static struct attribute *default_attrs[] = {
 	&dev_attr_physical_package_id.attr,
 	&dev_attr_die_id.attr,
+	&dev_attr_cluster_id.attr,
 	&dev_attr_core_id.attr,
 	&dev_attr_thread_siblings.attr,
 	&dev_attr_thread_siblings_list.attr,
@@ -95,6 +103,8 @@
 	&dev_attr_core_cpus_list.attr,
 	&dev_attr_core_siblings.attr,
 	&dev_attr_core_siblings_list.attr,
+	&dev_attr_cluster_cpus.attr,
+	&dev_attr_cluster_cpus_list.attr,
 	&dev_attr_die_cpus.attr,
 	&dev_attr_die_cpus_list.attr,
 	&dev_attr_package_cpus.attr,
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 053bf05..161196a 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1332,6 +1332,7 @@ static inline int lpit_read_residency_count_address(u64 *address)
 #ifdef CONFIG_ACPI_PPTT
 int acpi_pptt_cpu_is_thread(unsigned int cpu);
 int find_acpi_cpu_topology(unsigned int cpu, int level);
+int find_acpi_cpu_topology_cluster(unsigned int cpu);
 int find_acpi_cpu_topology_package(unsigned int cpu);
 int find_acpi_cpu_topology_hetero_id(unsigned int cpu);
 int find_acpi_cpu_cache_topology(unsigned int cpu, int level);
@@ -1344,6 +1345,10 @@ static inline int find_acpi_cpu_topology(unsigned int cpu, int level)
 {
 	return -EINVAL;
 }
+static inline int find_acpi_cpu_topology_cluster(unsigned int cpu)
+{
+	return -EINVAL;
+}
 static inline int find_acpi_cpu_topology_package(unsigned int cpu)
 {
 	return -EINVAL;
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index 0f6cd6b..987c7ea 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -49,10 +49,12 @@ void topology_set_thermal_pressure(const struct cpumask *cpus,
 struct cpu_topology {
 	int thread_id;
 	int core_id;
+	int cluster_id;
 	int package_id;
 	int llc_id;
 	cpumask_t thread_sibling;
 	cpumask_t core_sibling;
+	cpumask_t cluster_sibling;
 	cpumask_t llc_sibling;
 };
 
@@ -60,13 +62,16 @@ struct cpu_topology {
 extern struct cpu_topology cpu_topology[NR_CPUS];
 
 #define topology_physical_package_id(cpu)	(cpu_topology[cpu].package_id)
+#define topology_cluster_id(cpu)	(cpu_topology[cpu].cluster_id)
 #define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
 #define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
 #define topology_sibling_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+#define topology_cluster_cpumask(cpu)	(&cpu_topology[cpu].cluster_sibling)
 #define topology_llc_cpumask(cpu)	(&cpu_topology[cpu].llc_sibling)
 void init_cpu_topology(void);
 void store_cpu_topology(unsigned int cpuid);
 const struct cpumask *cpu_coregroup_mask(int cpu);
+const struct cpumask *cpu_clustergroup_mask(int cpu);
 void update_siblings_masks(unsigned int cpu);
 void remove_cpu_topology(unsigned int cpuid);
 void reset_cpu_topology(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7634cd7..80d27d7 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -186,6 +186,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_die_id
 #define topology_die_id(cpu)			((void)(cpu), -1)
 #endif
+#ifndef topology_cluster_id
+#define topology_cluster_id(cpu)		((void)(cpu), -1)
+#endif
 #ifndef topology_core_id
 #define topology_core_id(cpu)			((void)(cpu), 0)
 #endif
@@ -195,6 +198,9 @@ static inline int cpu_to_mem(int cpu)
 #ifndef topology_core_cpumask
 #define topology_core_cpumask(cpu)		cpumask_of(cpu)
 #endif
+#ifndef topology_cluster_cpumask
+#define topology_cluster_cpumask(cpu)		cpumask_of(cpu)
+#endif
 #ifndef topology_die_cpumask
 #define topology_die_cpumask(cpu)		cpumask_of(cpu)
 #endif
-- 
1.8.3.1
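
With the sysfs attributes above in place, the cluster topology becomes
visible to userspace, e.g. (a hypothetical session on a Kunpeng
920-style machine; the exact cluster_id value depends on the PPTT
offsets reported by firmware):

$ cat /sys/devices/system/cpu/cpu0/topology/cluster_cpus_list
0-3
$ cat /sys/devices/system/cpu/cpu0/topology/cluster_id
56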


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
  2021-03-01 22:59 ` Barry Song
@ 2021-03-01 22:59   ` Barry Song
  -1 siblings, 0 replies; 28+ messages in thread
From: Barry Song @ 2021-03-01 22:59 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
cluster has 4 cpus. All clusters share L3 cache data, but each cluster
has a local L3 tag. In addition, the cpus within one cluster share some
internal system bus. This means the cache coherence overhead inside one
cluster is much lower than the overhead across clusters.

This patch adds a sched_domain for clusters. On Kunpeng 920, without
this patch, domain0 of cpu0 would be MC, spanning cpu0~cpu23; with this
patch, MC becomes domain1, and a new domain0 "CLS" spans cpu0-cpu3.
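
The resulting hierarchy can be confirmed from userspace, e.g. (a
hypothetical session; the sched_domain procfs files require
CONFIG_SCHED_DEBUG):

$ cat /proc/sys/kernel/sched_domain/cpu0/domain0/name
CLS
$ cat /proc/sys/kernel/sched_domain/cpu0/domain1/name
MC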

This helps spread unrelated tasks among clusters, which decreases
contention and improves throughput; for example, the stream benchmark
improves by around 4.3%~6.3% with this patch:

w/o patch:
$ numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5
STREAM copy latency: 3.36 nanoseconds
STREAM copy bandwidth: 57072.50 MB/sec
STREAM scale latency: 3.40 nanoseconds
STREAM scale bandwidth: 56542.52 MB/sec
STREAM add latency: 5.10 nanoseconds
STREAM add bandwidth: 56482.83 MB/sec
STREAM triad latency: 5.14 nanoseconds
STREAM triad bandwidth: 56069.52 MB/sec

w/ patch:
$ numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5
STREAM copy latency: 3.22 nanoseconds
STREAM copy bandwidth: 59660.96 MB/sec    ->  +4.5%
STREAM scale latency: 3.25 nanoseconds
STREAM scale bandwidth: 59002.29 MB/sec   ->  +4.3%
STREAM add latency: 4.80 nanoseconds
STREAM add bandwidth: 60036.62 MB/sec     ->  +6.3%
STREAM triad latency: 4.86 nanoseconds
STREAM triad bandwidth: 59228.30 MB/sec   ->  +5.6%

On the other hand, while doing WAKE_AFFINE, this patch tries to find
a core in the target cluster before scanning the whole llc domain, so
it helps gather related tasks within one cluster; the scan order is
sketched below.
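
To make the control flow concrete, below is a toy userspace model of
the cluster-first scan (an illustrative sketch only: cpus are bits in
a 64-bit mask, the 4-cpus-per-cluster / 24-cpus-per-llc layout is taken
from the Kunpeng 920 description above, and none of this is kernel
code):

#include <stdint.h>
#include <stdio.h>

static uint64_t cluster_mask(int cpu)
{
	return 0xfULL << (cpu & ~3);	/* 4 cpus per cluster */
}

static uint64_t llc_mask(int cpu)
{
	(void)cpu;
	return 0xffffffULL;		/* one llc: cpu0-cpu23 */
}

static int scan(uint64_t candidates, uint64_t idle)
{
	uint64_t hit = candidates & idle;

	return hit ? __builtin_ctzll(hit) : -1;
}

static int select_idle(int target, uint64_t allowed, uint64_t idle)
{
	/* phase 1: scan the target's cluster first */
	int cpu = scan(cluster_mask(target) & allowed, idle);

	if (cpu >= 0)
		return cpu;
	/* phase 2: the rest of the llc, minus the cluster already scanned */
	return scan(llc_mask(target) & allowed & ~cluster_mask(target), idle);
}

int main(void)
{
	uint64_t all = 0xffffffULL;

	/* cpu2 is idle inside cpu0's cluster: picked in phase 1 */
	printf("%d\n", select_idle(0, all, (1ULL << 2) | (1ULL << 9)));
	/* nothing idle in cpu4's cluster: cpu9 is found in phase 2 */
	printf("%d\n", select_idle(4, all, 1ULL << 9));
	return 0;
}
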
We ran the below hackbench with the -g parameter varying from 2 to 14;
for each g, we ran the command 10 times and took the average time:
$ numactl -N 0 hackbench -p -T -l 20000 -g $1

hackbench reports the time needed to complete a certain number of
message transmissions between a certain number of tasks, for example:
$ numactl -N 0 hackbench -p -T -l 20000 -g 10
Running in threaded mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 20000 messages of 100 bytes
Time: 8.874

Below are the hackbench results w/ and w/o the patch:
g=    2      4     6       8      10     12      14
w/o: 1.9596 4.0506 5.9654 8.0068 9.8147 11.4900 13.1163
w/ : 1.9362 3.9197 5.6570 7.1376 8.5263 10.0512 11.3256
            +3.3%  +5.2%  +10.9% +13.2%  +12.8%  +13.7%

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
-v4:
  * rebased to tip/sched/core with the latest unified code of select_idle_cpu
  * also added benchmark data of spreading unrelated tasks
  * avoided the iteration of sched_domain by moving to a static_key (addressing
    Vincent's comment)

 arch/arm64/Kconfig             |  7 +++++
 include/linux/sched/cluster.h  | 19 ++++++++++++
 include/linux/sched/sd_flags.h |  9 ++++++
 include/linux/sched/topology.h |  7 +++++
 include/linux/topology.h       |  7 +++++
 kernel/sched/core.c            | 18 ++++++++++++
 kernel/sched/fair.c            | 66 +++++++++++++++++++++++++++++++++---------
 kernel/sched/sched.h           |  1 +
 kernel/sched/topology.c        |  6 ++++
 9 files changed, 126 insertions(+), 14 deletions(-)
 create mode 100644 include/linux/sched/cluster.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f39568b..158b0fa 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -971,6 +971,13 @@ config SCHED_MC
 	  making when dealing with multi-core CPU chips at a cost of slightly
 	  increased overhead in some places. If unsure say N here.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (sharing an
+	  internal bus or LLC cache tags). If unsure say N here.
+
 config SCHED_SMT
 	bool "SMT scheduler support"
 	help
diff --git a/include/linux/sched/cluster.h b/include/linux/sched/cluster.h
new file mode 100644
index 0000000..ea6c475
--- /dev/null
+++ b/include/linux/sched/cluster.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_SCHED_CLUSTER_H
+#define _LINUX_SCHED_CLUSTER_H
+
+#include <linux/static_key.h>
+
+#ifdef CONFIG_SCHED_CLUSTER
+extern struct static_key_false sched_cluster_present;
+
+static __always_inline bool sched_cluster_active(void)
+{
+	return static_branch_likely(&sched_cluster_present);
+}
+#else
+static inline bool sched_cluster_active(void) { return false; }
+
+#endif
+
+#endif
diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index 34b21e9..fc3c894 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -100,6 +100,15 @@
 SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 
 /*
+ * Domain members share CPU cluster resources (i.e. llc cache tags)
+ *
+ * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
+ *               the cluster resources (such as llc tags and the internal bus)
+ * NEEDS_GROUPS: Caches are shared between groups.
+ */
+SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
+
+/*
  * Domain members share CPU package resources (i.e. caches)
  *
  * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8f0f778..846fcac 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
 }
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+static inline int cpu_cluster_flags(void)
+{
+	return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
+}
+#endif
+
 #ifdef CONFIG_SCHED_MC
 static inline int cpu_core_flags(void)
 {
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 80d27d7..0b3704a 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
 }
 #endif
 
+#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
+static inline const struct cpumask *cpu_cluster_mask(int cpu)
+{
+	return topology_cluster_cpumask(cpu);
+}
+#endif
+
 static inline const struct cpumask *cpu_cpu_mask(int cpu)
 {
 	return cpumask_of_node(cpu_to_node(cpu));
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 88a2e2b..d805e59 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7797,6 +7797,16 @@ int sched_cpu_activate(unsigned int cpu)
 	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
 		static_branch_inc_cpuslocked(&sched_smt_present);
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	/*
+	 * When going up, increment the number of cpus with cluster
+	 * present.
+	 */
+	if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
+		static_branch_inc_cpuslocked(&sched_cluster_present);
+#endif
+
 	set_cpu_active(cpu, true);
 
 	if (sched_smp_initialized) {
@@ -7873,6 +7883,14 @@ int sched_cpu_deactivate(unsigned int cpu)
 		static_branch_dec_cpuslocked(&sched_smt_present);
 #endif
 
+#ifdef CONFIG_SCHED_CLUSTER
+	/*
+	 * When going down, decrement the number of cpus with cluster present.
+	 */
+	if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
+		static_branch_dec_cpuslocked(&sched_cluster_present);
+#endif
+
 	if (!sched_smp_initialized)
 		return 0;
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8a8bd7b..3db7b07 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6009,6 +6009,11 @@ static inline int __select_idle_cpu(int cpu)
 	return -1;
 }
 
+#ifdef CONFIG_SCHED_CLUSTER
+DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
+EXPORT_SYMBOL_GPL(sched_cluster_present);
+#endif
+
 #ifdef CONFIG_SCHED_SMT
 DEFINE_STATIC_KEY_FALSE(sched_smt_present);
 EXPORT_SYMBOL_GPL(sched_smt_present);
@@ -6116,6 +6121,26 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
 
 #endif /* CONFIG_SCHED_SMT */
 
+static inline int _select_idle_cpu(bool smt, struct task_struct *p, int target, struct cpumask *cpus, int *idle_cpu, int *nr)
+{
+	int cpu, i;
+
+	for_each_cpu_wrap(cpu, cpus, target) {
+		if (smt) {
+			i = select_idle_core(p, cpu, cpus, idle_cpu);
+		} else {
+			if (!--*nr)
+				return -1;
+			i = __select_idle_cpu(cpu);
+		} 
+
+		if ((unsigned int)i < nr_cpumask_bits)
+			return i;
+	}
+
+	return -1;
+}
+
 /*
  * Scan the LLC domain for idle CPUs; this is dynamically regulated by
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
@@ -6124,7 +6149,7 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
 static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
-	int i, cpu, idle_cpu = -1, nr = INT_MAX;
+	int i, idle_cpu = -1, nr = INT_MAX;
 	bool smt = test_idle_cores(target, false);
 	int this = smp_processor_id();
 	struct sched_domain *this_sd;
@@ -6134,7 +6159,12 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	if (!this_sd)
 		return -1;
 
-	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+	if (!sched_cluster_active())
+		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+#ifdef CONFIG_SCHED_CLUSTER
+	if (sched_cluster_active())
+		cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
+#endif
 
 	if (sched_feat(SIS_PROP) && !smt) {
 		u64 avg_cost, avg_idle, span_avg;
@@ -6155,24 +6185,32 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 		time = cpu_clock(this);
 	}
 
-	for_each_cpu_wrap(cpu, cpus, target) {
-		if (smt) {
-			i = select_idle_core(p, cpu, cpus, &idle_cpu);
-			if ((unsigned int)i < nr_cpumask_bits)
-				return i;
+	/* scan cluster before scanning the whole llc */
+#ifdef CONFIG_SCHED_CLUSTER
+	if (sched_cluster_active()) {
+		i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
+		if ((unsigned int) i < nr_cpumask_bits) {
+			idle_cpu = i;
+			goto done;
+		} else if (nr <= 0)
+			return -1;
 
-		} else {
-			if (!--nr)
-				return -1;
-			idle_cpu = __select_idle_cpu(cpu);
-			if ((unsigned int)idle_cpu < nr_cpumask_bits)
-				break;
-		}
+		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+		cpumask_andnot(cpus, cpus, cpu_cluster_mask(target));
 	}
+#endif
+
+	i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
+	if ((unsigned int) i < nr_cpumask_bits) {
+		idle_cpu = i;
+		goto done;
+	} else if (nr <= 0)
+		return -1;
 
 	if (smt)
 		set_idle_cores(this, false);
 
+done:
 	if (sched_feat(SIS_PROP) && !smt) {
 		time = cpu_clock(this) - time;
 		update_avg(&this_sd->avg_scan_cost, time);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 10a1522..48a020f 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -6,6 +6,7 @@
 
 #include <linux/sched/autogroup.h>
 #include <linux/sched/clock.h>
+#include <linux/sched/cluster.h>
 #include <linux/sched/coredump.h>
 #include <linux/sched/cpufreq.h>
 #include <linux/sched/cputime.h>
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 09d3504..d019c25 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1361,6 +1361,7 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
  */
 #define TOPOLOGY_SD_FLAGS		\
 	(SD_SHARE_CPUCAPACITY	|	\
+	 SD_SHARE_CLS_RESOURCES	|	\
 	 SD_SHARE_PKG_RESOURCES |	\
 	 SD_NUMA		|	\
 	 SD_ASYM_PACKING)
@@ -1480,6 +1481,11 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
+
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
 #endif
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86
  2021-03-01 22:59 ` Barry Song
@ 2021-03-01 22:59   ` Barry Song
  -1 siblings, 0 replies; 28+ messages in thread
From: Barry Song @ 2021-03-01 22:59 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman
  Cc: msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa,
	Barry Song

From: Tim Chen <tim.c.chen@linux.intel.com>

There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
is shared among a cluster of cores instead of being exclusive
to one single core.

To prevent oversubscription of L2 cache, load should be
balanced between such L2 clusters, especially for tasks with
no shared data.

Also, with a cluster scheduling policy where tasks are woken up in
the same L2 cluster, we benefit from keeping related tasks, which
likely share data, in the same L2 cluster.

Add CPU masks of the CPUs sharing the L2 cache so we can build such
an L2 cluster scheduler domain.
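
Other kernel code could then test cluster siblingship through the new
mask, e.g. (a hypothetical helper for illustration, not part of this
patch):

	/* true if @cpu2 sits in the same L2 cluster as @cpu1 */
	static bool same_l2_cluster(int cpu1, int cpu2)
	{
		return cpumask_test_cpu(cpu2, cpu_l2c_shared_mask(cpu1));
	}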

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 arch/x86/Kconfig                |  8 ++++++++
 arch/x86/include/asm/smp.h      |  7 +++++++
 arch/x86/include/asm/topology.h |  1 +
 arch/x86/kernel/cpu/cacheinfo.c |  1 +
 arch/x86/kernel/cpu/common.c    |  3 +++
 arch/x86/kernel/smpboot.c       | 43 ++++++++++++++++++++++++++++++++++++++++-
 6 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d3338a8..40110de 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1009,6 +1009,14 @@ config NR_CPUS
 	  This is purely to save memory: each supported CPU adds about 8KB
 	  to the kernel image.
 
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	default n
+	help
+	 Cluster scheduler support improves the CPU scheduler's decision
+	 making when dealing with machines that have clusters of CPUs
+	 sharing L2 cache. If unsure say N here.
+
 config SCHED_SMT
 	def_bool y if SMP
 
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index c0538f8..9cbc4ae 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -16,7 +16,9 @@
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_die_map);
 /* cpus sharing the last level cache: */
 DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
+DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id);
+DECLARE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id);
 DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
 
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
@@ -24,6 +26,11 @@ static inline struct cpumask *cpu_llc_shared_mask(int cpu)
 	return per_cpu(cpu_llc_shared_map, cpu);
 }
 
+static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
+{
+	return per_cpu(cpu_l2c_shared_map, cpu);
+}
+
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid);
 DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid);
diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
index 9239399..2a11ccc 100644
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -103,6 +103,7 @@ static inline void setup_node_to_cpumask_map(void) { }
 #include <asm-generic/topology.h>
 
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
+extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 
 #define topology_logical_package_id(cpu)	(cpu_data(cpu).logical_proc_id)
 #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 3ca9be4..0d03a71 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -846,6 +846,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
 		l2 = new_l2;
 #ifdef CONFIG_SMP
 		per_cpu(cpu_llc_id, cpu) = l2_id;
+		per_cpu(cpu_l2c_id, cpu) = l2_id;
 #endif
 	}
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 35ad848..fb08c73 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -78,6 +78,9 @@
 /* Last level cache ID of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_llc_id) = BAD_APICID;
 
+/* L2 cache ID of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(u16, cpu_l2c_id) = BAD_APICID;
+
 /* correctly size the local cpu masks */
 void __init setup_cpu_local_masks(void)
 {
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 02813a7..c85ffa8 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -101,6 +101,8 @@
 
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
 
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
+
 /* Per CPU bogomips and other parameters */
 DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
@@ -501,6 +503,21 @@ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 	return topology_sane(c, o, "llc");
 }
 
+static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+{
+	int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
+
+	/* Do not match if we do not have a valid APICID for cpu: */
+	if (per_cpu(cpu_l2c_id, cpu1) == BAD_APICID)
+		return false;
+
+	/* Do not match if L2 cache id does not match: */
+	if (per_cpu(cpu_l2c_id, cpu1) != per_cpu(cpu_l2c_id, cpu2))
+		return false;
+
+	return topology_sane(c, o, "l2c");
+}
+
 /*
  * Unlike the other levels, we do not enforce keeping a
  * multicore group inside a NUMA node.  If this happens, we will
@@ -522,7 +539,7 @@ static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 }
 
 
-#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
+#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_CLUSTER) || defined(CONFIG_SCHED_MC)
 static inline int x86_sched_itmt_flags(void)
 {
 	return sysctl_sched_itmt_enabled ? SD_ASYM_PACKING : 0;
@@ -540,12 +557,21 @@ static int x86_smt_flags(void)
 	return cpu_smt_flags() | x86_sched_itmt_flags();
 }
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+static int x86_cluster_flags(void)
+{
+	return cpu_cluster_flags() | x86_sched_itmt_flags();
+}
+#endif
 #endif
 
 static struct sched_domain_topology_level x86_numa_in_package_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -556,6 +582,9 @@ static int x86_smt_flags(void)
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, x86_smt_flags, SD_INIT_NAME(SMT) },
 #endif
+#ifdef CONFIG_SCHED_CLUSTER
+	{ cpu_clustergroup_mask, x86_cluster_flags, SD_INIT_NAME(CLS) },
+#endif
 #ifdef CONFIG_SCHED_MC
 	{ cpu_coregroup_mask, x86_core_flags, SD_INIT_NAME(MC) },
 #endif
@@ -583,6 +612,7 @@ void set_cpu_sibling_map(int cpu)
 	if (!has_mp) {
 		cpumask_set_cpu(cpu, topology_sibling_cpumask(cpu));
 		cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu));
+		cpumask_set_cpu(cpu, cpu_l2c_shared_mask(cpu));
 		cpumask_set_cpu(cpu, topology_core_cpumask(cpu));
 		cpumask_set_cpu(cpu, topology_die_cpumask(cpu));
 		c->booted_cores = 1;
@@ -598,6 +628,8 @@ void set_cpu_sibling_map(int cpu)
 		if ((i == cpu) || (has_mp && match_llc(c, o)))
 			link_mask(cpu_llc_shared_mask, cpu, i);
 
+		if ((i == cpu) || (has_mp && match_l2c(c, o)))
+			link_mask(cpu_l2c_shared_mask, cpu, i);
 	}
 
 	/*
@@ -649,6 +681,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	return cpu_llc_shared_mask(cpu);
 }
 
+const struct cpumask *cpu_clustergroup_mask(int cpu)
+{
+	return cpu_l2c_shared_mask(cpu);
+}
+
 static void impress_friends(void)
 {
 	int cpu;
@@ -1332,6 +1369,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 		zalloc_cpumask_var(&per_cpu(cpu_core_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_die_map, i), GFP_KERNEL);
 		zalloc_cpumask_var(&per_cpu(cpu_llc_shared_map, i), GFP_KERNEL);
+		zalloc_cpumask_var(&per_cpu(cpu_l2c_shared_map, i), GFP_KERNEL);
 	}
 
 	/*
@@ -1556,7 +1594,10 @@ static void remove_siblinginfo(int cpu)
 		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
 	for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
 		cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
+	for_each_cpu(sibling, cpu_l2c_shared_mask(cpu))
+		cpumask_clear_cpu(cpu, cpu_l2c_shared_mask(sibling));
 	cpumask_clear(cpu_llc_shared_mask(cpu));
+	cpumask_clear(cpu_l2c_shared_mask(cpu));
 	cpumask_clear(topology_sibling_cpumask(cpu));
 	cpumask_clear(topology_core_cpumask(cpu));
 	cpumask_clear(topology_die_cpumask(cpu));
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86
  2021-03-01 22:59   ` Barry Song
@ 2021-03-02 10:30     ` Peter Zijlstra
  -1 siblings, 0 replies; 28+ messages in thread
From: Peter Zijlstra @ 2021-03-02 10:30 UTC (permalink / raw)
  To: Barry Song
  Cc: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, dietmar.eggemann, rostedt, bsegall, mgorman,
	msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa

On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote:
> From: Tim Chen <tim.c.chen@linux.intel.com>
> 
> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
> is shared among a cluster of cores instead of being exclusive
> to one single core.

Isn't that most atoms one way or another? Tremont seems to have it per 4
cores, but earlier it was per 2 cores.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
  2021-03-01 22:59   ` Barry Song
@ 2021-03-02 10:43     ` Peter Zijlstra
  -1 siblings, 0 replies; 28+ messages in thread
From: Peter Zijlstra @ 2021-03-02 10:43 UTC (permalink / raw)
  To: Barry Song
  Cc: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, dietmar.eggemann, rostedt, bsegall, mgorman,
	msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa

On Tue, Mar 02, 2021 at 11:59:39AM +1300, Barry Song wrote:
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 88a2e2b..d805e59 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7797,6 +7797,16 @@ int sched_cpu_activate(unsigned int cpu)
>  	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
>  		static_branch_inc_cpuslocked(&sched_smt_present);
>  #endif
> +
> +#ifdef CONFIG_SCHED_CLUSTER
> +	/*
> +	 * When going up, increment the number of cpus with cluster
> +	 * present.
> +	 */
> +	if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> +		static_branch_inc_cpuslocked(&sched_cluster_present);
> +#endif
> +
>  	set_cpu_active(cpu, true);
>  
>  	if (sched_smp_initialized) {
> @@ -7873,6 +7883,14 @@ int sched_cpu_deactivate(unsigned int cpu)
>  		static_branch_dec_cpuslocked(&sched_smt_present);
>  #endif
>  
> +#ifdef CONFIG_SCHED_CLUSTER
> +	/*
> +	 * When going down, decrement the number of cpus with cluster present.
> +	 */
> +	if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> +		static_branch_dec_cpuslocked(&sched_cluster_present);
> +#endif
> +
>  	if (!sched_smp_initialized)
>  		return 0;

I don't think that's correct. IIUC this will mean the
sched_cluster_present thing will be enabled on anything with SMT (very
much including x86 big cores after the next patch).

I'm thinking that at the very least you should check a CLS domain
exists, but that might be hard at this point, because the sched domains
haven't been built yet.

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8a8bd7b..3db7b07 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6009,6 +6009,11 @@ static inline int __select_idle_cpu(int cpu)
>  	return -1;
>  }
>  
> +#ifdef CONFIG_SCHED_CLUSTER
> +DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
> +EXPORT_SYMBOL_GPL(sched_cluster_present);

I really rather think this shouldn't be exported

> +#endif
> +
>  #ifdef CONFIG_SCHED_SMT
>  DEFINE_STATIC_KEY_FALSE(sched_smt_present);
>  EXPORT_SYMBOL_GPL(sched_smt_present);

This is a KVM wart, it needs to know because mitigation crap.

> @@ -6116,6 +6121,26 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
>  
>  #endif /* CONFIG_SCHED_SMT */
>  
> +static inline int _select_idle_cpu(bool smt, struct task_struct *p, int target, struct cpumask *cpus, int *idle_cpu, int *nr)
> +{
> +	int cpu, i;
> +
> +	for_each_cpu_wrap(cpu, cpus, target) {
> +		if (smt) {
> +			i = select_idle_core(p, cpu, cpus, idle_cpu);
> +		} else {
> +			if (!--*nr)
> +				return -1;
> +			i = __select_idle_cpu(cpu);
> +		} 
> +
> +		if ((unsigned int)i < nr_cpumask_bits)
> +			return i;
> +	}
> +
> +	return -1;
> +}
> +
>  /*
>   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
>   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
> @@ -6124,7 +6149,7 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
>  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>  	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> -	int i, cpu, idle_cpu = -1, nr = INT_MAX;
> +	int i, idle_cpu = -1, nr = INT_MAX;
>  	bool smt = test_idle_cores(target, false);
>  	int this = smp_processor_id();
>  	struct sched_domain *this_sd;
> @@ -6134,7 +6159,12 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>  	if (!this_sd)
>  		return -1;
>  
> -	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +	if (!sched_cluster_active())
> +		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +#ifdef CONFIG_SCHED_CLUSTER
> +	if (sched_cluster_active())
> +		cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
> +#endif
>  
>  	if (sched_feat(SIS_PROP) && !smt) {
>  		u64 avg_cost, avg_idle, span_avg;
> @@ -6155,24 +6185,32 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>  		time = cpu_clock(this);
>  	}
>  
> -	for_each_cpu_wrap(cpu, cpus, target) {
> -		if (smt) {
> -			i = select_idle_core(p, cpu, cpus, &idle_cpu);
> -			if ((unsigned int)i < nr_cpumask_bits)
> -				return i;
> +	/* scan cluster before scanning the whole llc */
> +#ifdef CONFIG_SCHED_CLUSTER
> +	if (sched_cluster_active()) {
> +		i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> +		if ((unsigned int) i < nr_cpumask_bits) {
> +			idle_cpu = i;
> +			goto done;
> +		} else if (nr <= 0)
> +			return -1;
>  
> -		} else {
> -			if (!--nr)
> -				return -1;
> -			idle_cpu = __select_idle_cpu(cpu);
> -			if ((unsigned int)idle_cpu < nr_cpumask_bits)
> -				break;
> -		}
> +		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +		cpumask_andnot(cpus, cpus, cpu_cluster_mask(target));
>  	}
> +#endif
> +
> +	i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> +	if ((unsigned int) i < nr_cpumask_bits) {
> +		idle_cpu = i;
> +		goto done;
> +	} else if (nr <= 0)
> +		return -1;
>  
>  	if (smt)
>  		set_idle_cores(this, false);
>  
> +done:
>  	if (sched_feat(SIS_PROP) && !smt) {
>  		time = cpu_clock(this) - time;
>  		update_avg(&this_sd->avg_scan_cost, time);

And this is just horrific :-(

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
@ 2021-03-02 10:43     ` Peter Zijlstra
  0 siblings, 0 replies; 28+ messages in thread
From: Peter Zijlstra @ 2021-03-02 10:43 UTC (permalink / raw)
  To: Barry Song
  Cc: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, dietmar.eggemann, rostedt, bsegall, mgorman,
	msys.mizuma, valentin.schneider, gregkh, jonathan.cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei5,
	prime.zeng, guodong.xu, yangyicong, liguozhu, linuxarm, hpa

On Tue, Mar 02, 2021 at 11:59:39AM +1300, Barry Song wrote:
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 88a2e2b..d805e59 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7797,6 +7797,16 @@ int sched_cpu_activate(unsigned int cpu)
>  	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
>  		static_branch_inc_cpuslocked(&sched_smt_present);
>  #endif
> +
> +#ifdef CONFIG_SCHED_CLUSTER
> +	/*
> +	 * When going up, increment the number of cluster cpus with
> +	 * cluster present.
> +	 */
> +	if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> +		static_branch_inc_cpuslocked(&sched_cluster_present);
> +#endif
> +
>  	set_cpu_active(cpu, true);
>  
>  	if (sched_smp_initialized) {
> @@ -7873,6 +7883,14 @@ int sched_cpu_deactivate(unsigned int cpu)
>  		static_branch_dec_cpuslocked(&sched_smt_present);
>  #endif
>  
> +#ifdef CONFIG_SCHED_CLUSTER
> +	/*
> +	 * When going down, decrement the number of cpus with cluster present.
> +	 */
> +	if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> +		static_branch_dec_cpuslocked(&sched_cluster_present);
> +#endif
> +
>  	if (!sched_smp_initialized)
>  		return 0;

I don't think that's correct. IIUC this will mean the
sched_cluster_present thing will be enabled on anything with SMT (very
much including x86 big cores after the next patch).

I'm thinking that at the very least you should check a CLS domain
exists, but that might be hard at this point, because the sched domains
haven't been build yet.

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8a8bd7b..3db7b07 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6009,6 +6009,11 @@ static inline int __select_idle_cpu(int cpu)
>  	return -1;
>  }
>  
> +#ifdef CONFIG_SCHED_CLUSTER
> +DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
> +EXPORT_SYMBOL_GPL(sched_cluster_present);

I really rather think this shouldn't be exported

> +#endif
> +
>  #ifdef CONFIG_SCHED_SMT
>  DEFINE_STATIC_KEY_FALSE(sched_smt_present);
>  EXPORT_SYMBOL_GPL(sched_smt_present);

This is a KVM wart, it needs to know because mitigation crap.

> @@ -6116,6 +6121,26 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
>  
>  #endif /* CONFIG_SCHED_SMT */
>  
> +static inline int _select_idle_cpu(bool smt, struct task_struct *p, int target, struct cpumask *cpus, int *idle_cpu, int *nr)
> +{
> +	int cpu, i;
> +
> +	for_each_cpu_wrap(cpu, cpus, target) {
> +		if (smt) {
> +			i = select_idle_core(p, cpu, cpus, idle_cpu);
> +		} else {
> +			if (!--*nr)
> +				return -1;
> +			i = __select_idle_cpu(cpu);
> +		} 
> +
> +		if ((unsigned int)i < nr_cpumask_bits)
> +			return i;
> +	}
> +
> +	return -1;
> +}
> +
>  /*
>   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
>   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
> @@ -6124,7 +6149,7 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
>  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>  	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> -	int i, cpu, idle_cpu = -1, nr = INT_MAX;
> +	int i, idle_cpu = -1, nr = INT_MAX;
>  	bool smt = test_idle_cores(target, false);
>  	int this = smp_processor_id();
>  	struct sched_domain *this_sd;
> @@ -6134,7 +6159,12 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>  	if (!this_sd)
>  		return -1;
>  
> -	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +	if (!sched_cluster_active())
> +		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +#ifdef CONFIG_SCHED_CLUSTER
> +	if (sched_cluster_active())
> +		cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
> +#endif
>  
>  	if (sched_feat(SIS_PROP) && !smt) {
>  		u64 avg_cost, avg_idle, span_avg;
> @@ -6155,24 +6185,32 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>  		time = cpu_clock(this);
>  	}
>  
> -	for_each_cpu_wrap(cpu, cpus, target) {
> -		if (smt) {
> -			i = select_idle_core(p, cpu, cpus, &idle_cpu);
> -			if ((unsigned int)i < nr_cpumask_bits)
> -				return i;
> +	/* scan cluster before scanning the whole llc */
> +#ifdef CONFIG_SCHED_CLUSTER
> +	if (sched_cluster_active()) {
> +		i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> +		if ((unsigned int) i < nr_cpumask_bits) {
> +			idle_cpu = i;
> +			goto done;
> +		} else if (nr <= 0)
> +			return -1;
>  
> -		} else {
> -			if (!--nr)
> -				return -1;
> -			idle_cpu = __select_idle_cpu(cpu);
> -			if ((unsigned int)idle_cpu < nr_cpumask_bits)
> -				break;
> -		}
> +		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +		cpumask_andnot(cpus, cpus, cpu_cluster_mask(target));
>  	}
> +#endif
> +
> +	i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> +	if ((unsigned int) i < nr_cpumask_bits) {
> +		idle_cpu = i;
> +		goto done;
> +	} else if (nr <= 0)
> +		return -1;
>  
>  	if (smt)
>  		set_idle_cores(this, false);
>  
> +done:
>  	if (sched_feat(SIS_PROP) && !smt) {
>  		time = cpu_clock(this) - time;
>  		update_avg(&this_sd->avg_scan_cost, time);

And this is just horrific :-(
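
The complaint is about the #ifdef blocks and repeated
sched_cluster_active() checks sitting directly in the wakeup fast path.
A sketch of the same cluster-first scan with the conditionals pushed
into one helper; the name sched_cluster_span() is made up here for
illustration, not something proposed in the thread:

#ifdef CONFIG_SCHED_CLUSTER
static inline const struct cpumask *sched_cluster_span(int cpu)
{
	return sched_cluster_active() ? cpu_cluster_mask(cpu) : NULL;
}
#else
static inline const struct cpumask *sched_cluster_span(int cpu)
{
	return NULL;
}
#endif

	/* In select_idle_cpu(), the scan then needs no #ifdef: */
	const struct cpumask *cluster = sched_cluster_span(target);

	cpumask_and(cpus, cluster ?: sched_domain_span(sd), p->cpus_ptr);
	i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
	if ((unsigned int)i >= nr_cpumask_bits && nr > 0 && cluster) {
		/* Second pass: the rest of the LLC, cluster excluded. */
		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
		cpumask_andnot(cpus, cpus, cluster);
		i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
	}
	if ((unsigned int)i < nr_cpumask_bits)
		idle_cpu = i;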

* Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86
  2021-03-02 10:30     ` Peter Zijlstra
@ 2021-03-03 18:34       ` Tim Chen
  -1 siblings, 0 replies; 28+ messages in thread
From: Tim Chen @ 2021-03-03 18:34 UTC (permalink / raw)
  To: Peter Zijlstra, Barry Song
  Cc: catalin.marinas, will, rjw, vincent.guittot, bp, tglx, mingo,
	lenb, dietmar.eggemann, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, jonathan.cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei5, prime.zeng, guodong.xu,
	yangyicong, liguozhu, linuxarm, hpa



On 3/2/21 2:30 AM, Peter Zijlstra wrote:
> On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote:
>> From: Tim Chen <tim.c.chen@linux.intel.com>
>>
>> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
>> is shared among a cluster of cores instead of being exclusive
>> to one single core.
> 
> Isn't that most atoms one way or another? Tremont seems to have it per 4
> cores, but earlier it was per 2 cores.
> 

Yes, older Atoms have 2 cores sharing L2.  I probably should
rephrase my comments to not leave the impression that sharing
L2 among cores is new for Atoms.

Tremont-based Atom CPUs increase the possible load imbalance, with 4
cores per L2 instead of 2.  And with more cores overall on a die, the
chance increases of packing running tasks onto a few clusters while
leaving others empty on lightly/moderately loaded systems.  We did see
this effect on Jacobsville.

So load balancing between the L2 clusters is more useful on
Tremont-based Atom CPUs than on the older Atoms.

Tim

* Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
  2021-03-01 22:59   ` Barry Song
@ 2021-03-08 11:25     ` Vincent Guittot
  -1 siblings, 0 replies; 28+ messages in thread
From: Vincent Guittot @ 2021-03-08 11:25 UTC (permalink / raw)
  To: Barry Song
  Cc: Tim Chen, Catalin Marinas, Will Deacon, Rafael J. Wysocki,
	Borislav Petkov, Thomas Gleixner, Ingo Molnar, Cc: Len Brown,
	Peter Zijlstra, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Juri Lelli, Mark Rutland, Aubrey Li, H. Peter Anvin,
	Zengtao (B),
	Guodong Xu, gregkh, Sudeep Holla, linux-kernel, linuxarm,
	ACPI Devel Maling List, xuwei (O),
	Jonathan Cameron, yangyicong, x86, msys.mizuma,
	Liguozhu (Kenneth),
	Valentin Schneider, LAK

On Tue, 2 Mar 2021 at 00:08, Barry Song <song.bao.hua@hisilicon.com> wrote:
>
> ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
> cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> has a local L3 tag. On the other hand, each cluster shares some
> internal system bus. This means cache coherence overhead inside one
> cluster is much less than the overhead across clusters.
>
> This patch adds the sched_domain for clusters. On kunpeng 920, without
> this patch, domain0 of cpu0 would be MC, covering cpu0~cpu23; with this
> patch, MC becomes domain1, and a new domain0 "CLS" covers cpu0-cpu3.
>
> This will help spread unrelated tasks among clusters, thus decreasing
> contention and improving throughput; for example, the stream benchmark
> improves by around 4.3%~6.3% with this patch:
>
> w/o patch:
> numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5
> STREAM copy latency: 3.36 nanoseconds
> STREAM copy bandwidth: 57072.50 MB/sec
> STREAM scale latency: 3.40 nanoseconds
> STREAM scale bandwidth: 56542.52 MB/sec
> STREAM add latency: 5.10 nanoseconds
> STREAM add bandwidth: 56482.83 MB/sec
> STREAM triad latency: 5.14 nanoseconds
> STREAM triad bandwidth: 56069.52 MB/sec
>
> w/ patch:
> $ numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5
> STREAM copy latency: 3.22 nanoseconds
> STREAM copy bandwidth: 59660.96 MB/sec    ->  +4.5%
> STREAM scale latency: 3.25 nanoseconds
> STREAM scale bandwidth: 59002.29 MB/sec   ->  +4.3%
> STREAM add latency: 4.80 nanoseconds
> STREAM add bandwidth: 60036.62 MB/sec     ->  +6.3%
> STREAM triad latency: 4.86 nanoseconds
> STREAM triad bandwidth: 59228.30 MB/sec   ->  +5.6%
>
> On the other hand, while doing WAKE_AFFINE, this patch will try to find
> a core in the target cluster before scanning the whole llc domain. So it
> helps gather related tasks within one cluster.

Could you split this patch into 2 patches? One for adding a cluster
sched domain level and one for modifying the wakeup path?

This would ease the review, and I would be curious about the impact of
each feature on performance. In particular, I'm still not convinced
that the modification of the wakeup path is the root of the hackbench
improvement, especially with g=14, where there should not be many idle
CPUs with 14*40 tasks on at most 32 CPUs.  IIRC, there was no obvious
improvement with the changes in select_idle_cpu unless you hacked the
behavior to not fall back to the llc domain.
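
For reference, that "no fallback" hack would amount to something like
the following in select_idle_cpu(); this is a reconstruction for
illustration, not the code that was actually benchmarked:

	/*
	 * Sketch: scan only the cluster and give up, never widening
	 * the search to the rest of the LLC span.
	 */
	if (sched_cluster_active()) {
		cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
		i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
		return (unsigned int)i < nr_cpumask_bits ? i : -1;
	}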

> we run the below hackbench with different -g parameters from 2 to 14; for
> each g, we run the command 10 times and take the average time
> $ numactl -N 0 hackbench -p -T -l 20000 -g $1
>
> hackbench will report the time needed to complete a certain number
> of message transmissions between a certain number of tasks, for example:
> $ numactl -N 0 hackbench -p -T -l 20000 -g 10
> Running in threaded mode with 10 groups using 40 file descriptors each
> (== 400 tasks)
> Each sender will pass 20000 messages of 100 bytes
> Time: 8.874
>
> The below is the result of hackbench w/ and w/o the patch:
> g=    2      4     6       8      10     12      14
> w/o: 1.9596 4.0506 5.9654 8.0068 9.8147 11.4900 13.1163
> w/ : 1.9362 3.9197 5.6570 7.1376 8.5263 10.0512 11.3256
>             +3.3%  +5.2%  +10.9% +13.2%  +12.8%  +13.7%
>
> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> ---
> -v4:
>   * rebased to tip/sched/core with the latest unified code of select_idle_cpu
>   * also added benchmark data of spreading unrelated tasks
>   * avoided the iteration of sched_domain by moving to a static_key
>     (addressing Vincent's comment)
>
>  arch/arm64/Kconfig             |  7 +++++
>  include/linux/sched/cluster.h  | 19 ++++++++++++
>  include/linux/sched/sd_flags.h |  9 ++++++
>  include/linux/sched/topology.h |  7 +++++
>  include/linux/topology.h       |  7 +++++
>  kernel/sched/core.c            | 18 ++++++++++++
>  kernel/sched/fair.c            | 66 +++++++++++++++++++++++++++++++++---------
>  kernel/sched/sched.h           |  1 +
>  kernel/sched/topology.c        |  6 ++++
>  9 files changed, 126 insertions(+), 14 deletions(-)
>  create mode 100644 include/linux/sched/cluster.h
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index f39568b..158b0fa 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -971,6 +971,13 @@ config SCHED_MC
>           making when dealing with multi-core CPU chips at a cost of slightly
>           increased overhead in some places. If unsure say N here.
>
> +config SCHED_CLUSTER
> +       bool "Cluster scheduler support"
> +       help
> +         Cluster scheduler support improves the CPU scheduler's decision
> +         making when dealing with machines that have clusters (sharing an
> +         internal bus or LLC cache tags). If unsure say N here.
> +
>  config SCHED_SMT
>         bool "SMT scheduler support"
>         help
> diff --git a/include/linux/sched/cluster.h b/include/linux/sched/cluster.h
> new file mode 100644
> index 0000000..ea6c475
> --- /dev/null
> +++ b/include/linux/sched/cluster.h
> @@ -0,0 +1,19 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _LINUX_SCHED_CLUSTER_H
> +#define _LINUX_SCHED_CLUSTER_H
> +
> +#include <linux/static_key.h>
> +
> +#ifdef CONFIG_SCHED_CLUSTER
> +extern struct static_key_false sched_cluster_present;
> +
> +static __always_inline bool sched_cluster_active(void)
> +{
> +       return static_branch_likely(&sched_cluster_present);
> +}
> +#else
> +static inline bool sched_cluster_active(void) { return false; }
> +
> +#endif
> +
> +#endif
> diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
> index 34b21e9..fc3c894 100644
> --- a/include/linux/sched/sd_flags.h
> +++ b/include/linux/sched/sd_flags.h
> @@ -100,6 +100,15 @@
>  SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
>
>  /*
> + * Domain members share CPU cluster resources (i.e. llc cache tags)
> + *
> + * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
> + *               the cluster resources (such as llc tags and internal bus)
> + * NEEDS_GROUPS: Caches are shared between groups.
> + */
> +SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
> +
> +/*
>   * Domain members share CPU package resources (i.e. caches)
>   *
>   * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer share
> diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
> index 8f0f778..846fcac 100644
> --- a/include/linux/sched/topology.h
> +++ b/include/linux/sched/topology.h
> @@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
>  }
>  #endif
>
> +#ifdef CONFIG_SCHED_CLUSTER
> +static inline int cpu_cluster_flags(void)
> +{
> +       return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
> +}
> +#endif
> +
>  #ifdef CONFIG_SCHED_MC
>  static inline int cpu_core_flags(void)
>  {
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index 80d27d7..0b3704a 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int cpu)
>  }
>  #endif
>
> +#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
> +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> +{
> +       return topology_cluster_cpumask(cpu);
> +}
> +#endif
> +
>  static inline const struct cpumask *cpu_cpu_mask(int cpu)
>  {
>         return cpumask_of_node(cpu_to_node(cpu));
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 88a2e2b..d805e59 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7797,6 +7797,16 @@ int sched_cpu_activate(unsigned int cpu)
>         if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
>                 static_branch_inc_cpuslocked(&sched_smt_present);
>  #endif
> +
> +#ifdef CONFIG_SCHED_CLUSTER
> +       /*
> +        * When going up, increment the number of cluster cpus with
> +        * cluster present.
> +        */
> +       if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> +               static_branch_inc_cpuslocked(&sched_cluster_present);
> +#endif
> +
>         set_cpu_active(cpu, true);
>
>         if (sched_smp_initialized) {
> @@ -7873,6 +7883,14 @@ int sched_cpu_deactivate(unsigned int cpu)
>                 static_branch_dec_cpuslocked(&sched_smt_present);
>  #endif
>
> +#ifdef CONFIG_SCHED_CLUSTER
> +       /*
> +        * When going down, decrement the number of cpus with cluster present.
> +        */
> +       if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> +               static_branch_dec_cpuslocked(&sched_cluster_present);
> +#endif
> +
>         if (!sched_smp_initialized)
>                 return 0;
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8a8bd7b..3db7b07 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6009,6 +6009,11 @@ static inline int __select_idle_cpu(int cpu)
>         return -1;
>  }
>
> +#ifdef CONFIG_SCHED_CLUSTER
> +DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
> +EXPORT_SYMBOL_GPL(sched_cluster_present);
> +#endif
> +
>  #ifdef CONFIG_SCHED_SMT
>  DEFINE_STATIC_KEY_FALSE(sched_smt_present);
>  EXPORT_SYMBOL_GPL(sched_smt_present);
> @@ -6116,6 +6121,26 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
>
>  #endif /* CONFIG_SCHED_SMT */
>
> +static inline int _select_idle_cpu(bool smt, struct task_struct *p, int target, struct cpumask *cpus, int *idle_cpu, int *nr)
> +{
> +       int cpu, i;
> +
> +       for_each_cpu_wrap(cpu, cpus, target) {
> +               if (smt) {
> +                       i = select_idle_core(p, cpu, cpus, idle_cpu);
> +               } else {
> +                       if (!--*nr)
> +                               return -1;
> +                       i = __select_idle_cpu(cpu);
> +               }
> +
> +               if ((unsigned int)i < nr_cpumask_bits)
> +                       return i;
> +       }
> +
> +       return -1;
> +}
> +
>  /*
>   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
>   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
> @@ -6124,7 +6149,7 @@ static inline int select_idle_core(struct task_struct *p, int core, struct cpuma
>  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> -       int i, cpu, idle_cpu = -1, nr = INT_MAX;
> +       int i, idle_cpu = -1, nr = INT_MAX;
>         bool smt = test_idle_cores(target, false);
>         int this = smp_processor_id();
>         struct sched_domain *this_sd;
> @@ -6134,7 +6159,12 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>         if (!this_sd)
>                 return -1;
>
> -       cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +       if (!sched_cluster_active())
> +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +#ifdef CONFIG_SCHED_CLUSTER
> +       if (sched_cluster_active())
> +               cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
> +#endif
>
>         if (sched_feat(SIS_PROP) && !smt) {
>                 u64 avg_cost, avg_idle, span_avg;
> @@ -6155,24 +6185,32 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>                 time = cpu_clock(this);
>         }
>
> -       for_each_cpu_wrap(cpu, cpus, target) {
> -               if (smt) {
> -                       i = select_idle_core(p, cpu, cpus, &idle_cpu);
> -                       if ((unsigned int)i < nr_cpumask_bits)
> -                               return i;
> +       /* scan cluster before scanning the whole llc */
> +#ifdef CONFIG_SCHED_CLUSTER
> +       if (sched_cluster_active()) {
> +               i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> +               if ((unsigned int) i < nr_cpumask_bits) {
> +                       idle_cpu = i;
> +                       goto done;
> +               } else if (nr <= 0)
> +                       return -1;
>
> -               } else {
> -                       if (!--nr)
> -                               return -1;
> -                       idle_cpu = __select_idle_cpu(cpu);
> -                       if ((unsigned int)idle_cpu < nr_cpumask_bits)
> -                               break;
> -               }
> +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +               cpumask_andnot(cpus, cpus, cpu_cluster_mask(target));
>         }
> +#endif
> +
> +       i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> +       if ((unsigned int) i < nr_cpumask_bits) {
> +               idle_cpu = i;
> +               goto done;
> +       } else if (nr <= 0)
> +               return -1;
>
>         if (smt)
>                 set_idle_cores(this, false);
>
> +done:
>         if (sched_feat(SIS_PROP) && !smt) {
>                 time = cpu_clock(this) - time;
>                 update_avg(&this_sd->avg_scan_cost, time);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 10a1522..48a020f 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -6,6 +6,7 @@
>
>  #include <linux/sched/autogroup.h>
>  #include <linux/sched/clock.h>
> +#include <linux/sched/cluster.h>
>  #include <linux/sched/coredump.h>
>  #include <linux/sched/cpufreq.h>
>  #include <linux/sched/cputime.h>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 09d3504..d019c25 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1361,6 +1361,7 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
>   */
>  #define TOPOLOGY_SD_FLAGS              \
>         (SD_SHARE_CPUCAPACITY   |       \
> +        SD_SHARE_CLS_RESOURCES |       \
>          SD_SHARE_PKG_RESOURCES |       \
>          SD_NUMA                |       \
>          SD_ASYM_PACKING)
> @@ -1480,6 +1481,11 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
>  #ifdef CONFIG_SCHED_SMT
>         { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
>  #endif
> +
> +#ifdef CONFIG_SCHED_CLUSTER
> +       { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
> +#endif
> +
>  #ifdef CONFIG_SCHED_MC
>         { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
>  #endif
> --
> 1.8.3.1
>
>

* RE: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
  2021-03-08 11:25     ` Vincent Guittot
@ 2021-03-08 22:15       ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 28+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-03-08 22:15 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Tim Chen, Catalin Marinas, Will Deacon, Rafael J. Wysocki,
	Borislav Petkov, Thomas Gleixner, Ingo Molnar, Cc: Len Brown,
	Peter Zijlstra, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Juri Lelli, Mark Rutland, Aubrey Li, H. Peter Anvin,
	Zengtao (B),
	Guodong Xu, gregkh, Sudeep Holla, linux-kernel, linuxarm,
	ACPI Devel Maling List, xuwei (O),
	Jonathan Cameron, yangyicong, x86, msys.mizuma,
	Liguozhu (Kenneth),
	Valentin Schneider, LAK



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Tuesday, March 9, 2021 12:26 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Borislav Petkov <bp@alien8.de>; Thomas Gleixner
> <tglx@linutronix.de>; Ingo Molnar <mingo@redhat.com>; Cc: Len Brown
> <lenb@kernel.org>; Peter Zijlstra <peterz@infradead.org>; Dietmar Eggemann
> <dietmar.eggemann@arm.com>; Steven Rostedt <rostedt@goodmis.org>; Ben Segall
> <bsegall@google.com>; Mel Gorman <mgorman@suse.de>; Juri Lelli
> <juri.lelli@redhat.com>; Mark Rutland <mark.rutland@arm.com>; Aubrey Li
> <aubrey.li@linux.intel.com>; H. Peter Anvin <hpa@zytor.com>; Zengtao (B)
> <prime.zeng@hisilicon.com>; Guodong Xu <guodong.xu@linaro.org>;
> gregkh@linuxfoundation.org; Sudeep Holla <sudeep.holla@arm.com>; linux-kernel
> <linux-kernel@vger.kernel.org>; linuxarm@openeuler.org; ACPI Devel Maling
> List <linux-acpi@vger.kernel.org>; xuwei (O) <xuwei5@huawei.com>; Jonathan
> Cameron <jonathan.cameron@huawei.com>; yangyicong <yangyicong@huawei.com>;
> x86 <x86@kernel.org>; msys.mizuma@gmail.com; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; Valentin Schneider <valentin.schneider@arm.com>;
> LAK <linux-arm-kernel@lists.infradead.org>
> Subject: Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
> 
> On Tue, 2 Mar 2021 at 00:08, Barry Song <song.bao.hua@hisilicon.com> wrote:
> >
> > ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> > has a local L3 tag. On the other hand, each cluster shares some
> > internal system bus. This means cache coherence overhead inside one
> > cluster is much less than the overhead across clusters.
> >
> > This patch adds the sched_domain for clusters. On kunpeng 920, without
> > this patch, domain0 of cpu0 would be MC, covering cpu0~cpu23; with this
> > patch, MC becomes domain1, and a new domain0 "CLS" covers cpu0-cpu3.
> >
> > This will help spread unrelated tasks among clusters, thus decreasing
> > contention and improving throughput; for example, the stream benchmark
> > improves by around 4.3%~6.3% with this patch:
> >
> > w/o patch:
> > numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5
> > STREAM copy latency: 3.36 nanoseconds
> > STREAM copy bandwidth: 57072.50 MB/sec
> > STREAM scale latency: 3.40 nanoseconds
> > STREAM scale bandwidth: 56542.52 MB/sec
> > STREAM add latency: 5.10 nanoseconds
> > STREAM add bandwidth: 56482.83 MB/sec
> > STREAM triad latency: 5.14 nanoseconds
> > STREAM triad bandwidth: 56069.52 MB/sec
> >
> > w/ patch:
> > $ numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5
> > STREAM copy latency: 3.22 nanoseconds
> > STREAM copy bandwidth: 59660.96 MB/sec    ->  +4.5%
> > STREAM scale latency: 3.25 nanoseconds
> > STREAM scale bandwidth: 59002.29 MB/sec   ->  +4.3%
> > STREAM add latency: 4.80 nanoseconds
> > STREAM add bandwidth: 60036.62 MB/sec     ->  +6.3%
> > STREAM triad latency: 4.86 nanoseconds
> > STREAM triad bandwidth: 59228.30 MB/sec   ->  +5.6%
> >
> > On the other hand, while doing WAKE_AFFINE, this patch will try to find
> > a core in the target cluster before scanning the whole llc domain. So it
> > helps gather related tasks within one cluster.
> 
> Could you split this patch into 2 patches? One for adding a cluster
> sched domain level and one for modifying the wakeup path?

Yes. If this is helpful, I would like to split it into two patches.

> 
> This would ease the review, and I would be curious about the impact of
> each feature on performance. In particular, I'm still not convinced
> that the modification of the wakeup path is the root of the hackbench
> improvement, especially with g=14, where there should not be many idle
> CPUs with 14*40 tasks on at most 32 CPUs.  IIRC, there was

My understanding is that threads can be blocked on the pipes, so CPUs
still have some chance to be idle even with a large g. Also note that
the default g of hackbench is 10.

Anyway, I'd like to add some tracepoints to get the percentages of how
many CPUs are picked from within the cluster and how many are selected
from outside it.
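
A minimal sketch of that accounting, using trace_printk() in
select_idle_cpu() rather than real tracepoints (illustrative only, not
the instrumentation eventually posted):

	i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
	if ((unsigned int)i < nr_cpumask_bits) {
		/* Sketch: record whether the pick stayed in the cluster. */
		trace_printk("sis: picked cpu%d %s the cluster of cpu%d\n", i,
			     cpumask_test_cpu(i, cpu_cluster_mask(target)) ?
			     "inside" : "outside", target);
	}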

> no obvious improvement with the changes in select_idle_cpu unless you
> hacked the behavior to not fall back to the llc domain
> 

You have a good memory. In a very old version I once mentioned that. But
at that time, I didn't decrease nr after scanning the cluster, so it was
scanning at least 8 cpus (4 within the cluster, 4 outside it). I guess
that is the reason my hack to not fall back to the llc domain could bring
an actual hackbench improvement.

> > we run the below hackbench with different -g parameters from 2 to 14; for
> > each g, we run the command 10 times and take the average time
> > $ numactl -N 0 hackbench -p -T -l 20000 -g $1
> >
> > hackbench will report the time needed to complete a certain number
> > of message transmissions between a certain number of tasks, for example:
> > $ numactl -N 0 hackbench -p -T -l 20000 -g 10
> > Running in threaded mode with 10 groups using 40 file descriptors each
> > (== 400 tasks)
> > Each sender will pass 20000 messages of 100 bytes
> > Time: 8.874
> >
> > The below is the result of hackbench w/ and w/o the patch:
> > g=    2      4     6       8      10     12      14
> > w/o: 1.9596 4.0506 5.9654 8.0068 9.8147 11.4900 13.1163
> > w/ : 1.9362 3.9197 5.6570 7.1376 8.5263 10.0512 11.3256
> >             +3.3%  +5.2%  +10.9% +13.2%  +12.8%  +13.7%
> >
> > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > ---
> > -v4:
> >   * rebased to tip/sched/core with the latest unified code of select_idle_cpu
> >   * also added benchmark data of spreading unrelated tasks
> >   * avoided the iteration of sched_domain by moving to a static_key
> >     (addressing Vincent's comment)
> >
> >  arch/arm64/Kconfig             |  7 +++++
> >  include/linux/sched/cluster.h  | 19 ++++++++++++
> >  include/linux/sched/sd_flags.h |  9 ++++++
> >  include/linux/sched/topology.h |  7 +++++
> >  include/linux/topology.h       |  7 +++++
> >  kernel/sched/core.c            | 18 ++++++++++++
> >  kernel/sched/fair.c            | 66
> +++++++++++++++++++++++++++++++++---------
> >  kernel/sched/sched.h           |  1 +
> >  kernel/sched/topology.c        |  6 ++++
> >  9 files changed, 126 insertions(+), 14 deletions(-)
> >  create mode 100644 include/linux/sched/cluster.h
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index f39568b..158b0fa 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -971,6 +971,13 @@ config SCHED_MC
> >           making when dealing with multi-core CPU chips at a cost of slightly
> >           increased overhead in some places. If unsure say N here.
> >
> > +config SCHED_CLUSTER
> > +       bool "Cluster scheduler support"
> > +       help
> > +         Cluster scheduler support improves the CPU scheduler's decision
> > +         making when dealing with machines that have clusters (sharing an
> > +         internal bus or LLC cache tags). If unsure say N here.
> > +
> >  config SCHED_SMT
> >         bool "SMT scheduler support"
> >         help
> > diff --git a/include/linux/sched/cluster.h b/include/linux/sched/cluster.h
> > new file mode 100644
> > index 0000000..ea6c475
> > --- /dev/null
> > +++ b/include/linux/sched/cluster.h
> > @@ -0,0 +1,19 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _LINUX_SCHED_CLUSTER_H
> > +#define _LINUX_SCHED_CLUSTER_H
> > +
> > +#include <linux/static_key.h>
> > +
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +extern struct static_key_false sched_cluster_present;
> > +
> > +static __always_inline bool sched_cluster_active(void)
> > +{
> > +       return static_branch_likely(&sched_cluster_present);
> > +}
> > +#else
> > +static inline bool sched_cluster_active(void) { return false; }
> > +
> > +#endif
> > +
> > +#endif
> > diff --git a/include/linux/sched/sd_flags.h
> b/include/linux/sched/sd_flags.h
> > index 34b21e9..fc3c894 100644
> > --- a/include/linux/sched/sd_flags.h
> > +++ b/include/linux/sched/sd_flags.h
> > @@ -100,6 +100,15 @@
> >  SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
> >
> >  /*
> > + * Domain members share CPU cluster resources (i.e. llc cache tags)
> > + *
> > + * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer
> share
> > + *               the cluster resources (such as llc tags and internal bus)
> > + * NEEDS_GROUPS: Caches are shared between groups.
> > + */
> > +SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
> > +
> > +/*
> >   * Domain members share CPU package resources (i.e. caches)
> >   *
> >   * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer
> share
> > diff --git a/include/linux/sched/topology.h
> b/include/linux/sched/topology.h
> > index 8f0f778..846fcac 100644
> > --- a/include/linux/sched/topology.h
> > +++ b/include/linux/sched/topology.h
> > @@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
> >  }
> >  #endif
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +static inline int cpu_cluster_flags(void)
> > +{
> > +       return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
> > +}
> > +#endif
> > +
> >  #ifdef CONFIG_SCHED_MC
> >  static inline int cpu_core_flags(void)
> >  {
> > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > index 80d27d7..0b3704a 100644
> > --- a/include/linux/topology.h
> > +++ b/include/linux/topology.h
> > @@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int
> cpu)
> >  }
> >  #endif
> >
> > +#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
> > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > +{
> > +       return topology_cluster_cpumask(cpu);
> > +}
> > +#endif
> > +
> >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> >  {
> >         return cpumask_of_node(cpu_to_node(cpu));
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 88a2e2b..d805e59 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -7797,6 +7797,16 @@ int sched_cpu_activate(unsigned int cpu)
> >         if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> >                 static_branch_inc_cpuslocked(&sched_smt_present);
> >  #endif
> > +
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       /*
> > +        * When going up, increment the number of cluster cpus with
> > +        * cluster present.
> > +        */
> > +       if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> > +               static_branch_inc_cpuslocked(&sched_cluster_present);
> > +#endif
> > +
> >         set_cpu_active(cpu, true);
> >
> >         if (sched_smp_initialized) {
> > @@ -7873,6 +7883,14 @@ int sched_cpu_deactivate(unsigned int cpu)
> >                 static_branch_dec_cpuslocked(&sched_smt_present);
> >  #endif
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       /*
> > +        * When going down, decrement the number of cpus with cluster present.
> > +        */
> > +       if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> > +               static_branch_dec_cpuslocked(&sched_cluster_present);
> > +#endif
> > +
> >         if (!sched_smp_initialized)
> >                 return 0;
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8a8bd7b..3db7b07 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6009,6 +6009,11 @@ static inline int __select_idle_cpu(int cpu)
> >         return -1;
> >  }
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
> > +EXPORT_SYMBOL_GPL(sched_cluster_present);
> > +#endif
> > +
> >  #ifdef CONFIG_SCHED_SMT
> >  DEFINE_STATIC_KEY_FALSE(sched_smt_present);
> >  EXPORT_SYMBOL_GPL(sched_smt_present);
> > @@ -6116,6 +6121,26 @@ static inline int select_idle_core(struct task_struct
> *p, int core, struct cpuma
> >
> >  #endif /* CONFIG_SCHED_SMT */
> >
> > +static inline int _select_idle_cpu(bool smt, struct task_struct *p, int
> target, struct cpumask *cpus, int *idle_cpu, int *nr)
> > +{
> > +       int cpu, i;
> > +
> > +       for_each_cpu_wrap(cpu, cpus, target) {
> > +               if (smt) {
> > +                       i = select_idle_core(p, cpu, cpus, idle_cpu);
> > +               } else {
> > +                       if (!--*nr)
> > +                               return -1;
> > +                       i = __select_idle_cpu(cpu);
> > +               }
> > +
> > +               if ((unsigned int)i < nr_cpumask_bits)
> > +                       return i;
> > +       }
> > +
> > +       return -1;
> > +}
> > +
> >  /*
> >   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against
> the
> > @@ -6124,7 +6149,7 @@ static inline int select_idle_core(struct task_struct
> *p, int core, struct cpuma
> >  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd,
> int target)
> >  {
> >         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> > -       int i, cpu, idle_cpu = -1, nr = INT_MAX;
> > +       int i, idle_cpu = -1, nr = INT_MAX;
> >         bool smt = test_idle_cores(target, false);
> >         int this = smp_processor_id();
> >         struct sched_domain *this_sd;
> > @@ -6134,7 +6159,12 @@ static int select_idle_cpu(struct task_struct *p,
> struct sched_domain *sd, int t
> >         if (!this_sd)
> >                 return -1;
> >
> > -       cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +       if (!sched_cluster_active())
> > +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       if (sched_cluster_active())
> > +               cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
> > +#endif
> >
> >         if (sched_feat(SIS_PROP) && !smt) {
> >                 u64 avg_cost, avg_idle, span_avg;
> > @@ -6155,24 +6185,32 @@ static int select_idle_cpu(struct task_struct *p,
> struct sched_domain *sd, int t
> >                 time = cpu_clock(this);
> >         }
> >
> > -       for_each_cpu_wrap(cpu, cpus, target) {
> > -               if (smt) {
> > -                       i = select_idle_core(p, cpu, cpus, &idle_cpu);
> > -                       if ((unsigned int)i < nr_cpumask_bits)
> > -                               return i;
> > +       /* scan cluster before scanning the whole llc */
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       if (sched_cluster_active()) {
> > +               i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> > +               if ((unsigned int) i < nr_cpumask_bits) {
> > +                       idle_cpu = i;
> > +                       goto done;
> > +               } else if (nr <= 0)
> > +                       return -1;
> >
> > -               } else {
> > -                       if (!--nr)
> > -                               return -1;
> > -                       idle_cpu = __select_idle_cpu(cpu);
> > -                       if ((unsigned int)idle_cpu < nr_cpumask_bits)
> > -                               break;
> > -               }
> > +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +               cpumask_andnot(cpus, cpus, cpu_cluster_mask(target));
> >         }
> > +#endif
> > +
> > +       i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> > +       if ((unsigned int) i < nr_cpumask_bits) {
> > +               idle_cpu = i;
> > +               goto done;
> > +       } else if (nr <= 0)
> > +               return -1;
> >
> >         if (smt)
> >                 set_idle_cores(this, false);
> >
> > +done:
> >         if (sched_feat(SIS_PROP) && !smt) {
> >                 time = cpu_clock(this) - time;
> >                 update_avg(&this_sd->avg_scan_cost, time);
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 10a1522..48a020f 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -6,6 +6,7 @@
> >
> >  #include <linux/sched/autogroup.h>
> >  #include <linux/sched/clock.h>
> > +#include <linux/sched/cluster.h>
> >  #include <linux/sched/coredump.h>
> >  #include <linux/sched/cpufreq.h>
> >  #include <linux/sched/cputime.h>
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index 09d3504..d019c25 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -1361,6 +1361,7 @@ static void claim_allocations(int cpu, struct
> sched_domain *sd)
> >   */
> >  #define TOPOLOGY_SD_FLAGS              \
> >         (SD_SHARE_CPUCAPACITY   |       \
> > +        SD_SHARE_CLS_RESOURCES |       \
> >          SD_SHARE_PKG_RESOURCES |       \
> >          SD_NUMA                |       \
> >          SD_ASYM_PACKING)
> > @@ -1480,6 +1481,11 @@ static void claim_allocations(int cpu, struct
> sched_domain *sd)
> >  #ifdef CONFIG_SCHED_SMT
> >         { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> >  #endif
> > +
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
> > +#endif
> > +
> >  #ifdef CONFIG_SCHED_MC
> >         { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> >  #endif
> > --
> > 1.8.3.1
> >
> >
Thanks
Barry


^ permalink raw reply	[flat|nested] 28+ messages in thread

* RE: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
@ 2021-03-08 22:15       ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 28+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-03-08 22:15 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Tim Chen, Catalin Marinas, Will Deacon, Rafael J. Wysocki,
	Borislav Petkov, Thomas Gleixner, Ingo Molnar, Cc: Len Brown,
	Peter Zijlstra, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Juri Lelli, Mark Rutland, Aubrey Li, H. Peter Anvin,
	Zengtao (B),
	Guodong Xu, gregkh, Sudeep Holla, linux-kernel, linuxarm,
	ACPI Devel Maling List, xuwei (O),
	Jonathan Cameron, yangyicong, x86, msys.mizuma,
	Liguozhu (Kenneth),
	Valentin Schneider, LAK



> -----Original Message-----
> From: Vincent Guittot [mailto:vincent.guittot@linaro.org]
> Sent: Tuesday, March 9, 2021 12:26 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>; Rafael J. Wysocki
> <rjw@rjwysocki.net>; Borislav Petkov <bp@alien8.de>; Thomas Gleixner
> <tglx@linutronix.de>; Ingo Molnar <mingo@redhat.com>; Cc: Len Brown
> <lenb@kernel.org>; Peter Zijlstra <peterz@infradead.org>; Dietmar Eggemann
> <dietmar.eggemann@arm.com>; Steven Rostedt <rostedt@goodmis.org>; Ben Segall
> <bsegall@google.com>; Mel Gorman <mgorman@suse.de>; Juri Lelli
> <juri.lelli@redhat.com>; Mark Rutland <mark.rutland@arm.com>; Aubrey Li
> <aubrey.li@linux.intel.com>; H. Peter Anvin <hpa@zytor.com>; Zengtao (B)
> <prime.zeng@hisilicon.com>; Guodong Xu <guodong.xu@linaro.org>;
> gregkh@linuxfoundation.org; Sudeep Holla <sudeep.holla@arm.com>; linux-kernel
> <linux-kernel@vger.kernel.org>; linuxarm@openeuler.org; ACPI Devel Maling
> List <linux-acpi@vger.kernel.org>; xuwei (O) <xuwei5@huawei.com>; Jonathan
> Cameron <jonathan.cameron@huawei.com>; yangyicong <yangyicong@huawei.com>;
> x86 <x86@kernel.org>; msys.mizuma@gmail.com; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; Valentin Schneider <valentin.schneider@arm.com>;
> LAK <linux-arm-kernel@lists.infradead.org>
> Subject: Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
> 
> On Tue, 2 Mar 2021 at 00:08, Barry Song <song.bao.hua@hisilicon.com> wrote:
> >
> > ARM64 chip Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> > has local L3 tag. On the other hand, each clusters will share some
> > internal system bus. This means cache coherence overhead inside one
> > cluster is much less than the overhead across clusters.
> >
> > This patch adds the sched_domain for clusters. On Kunpeng 920, without
> > this patch, domain0 of cpu0 would be MC, spanning cpu0~cpu23; with this
> > patch, MC becomes domain1, and a new domain0 "CLS" spans cpu0~cpu3.
> >
> > This will help spread unrelated tasks among clusters, thus decreasing
> > contention and improving throughput. For example, the stream benchmark
> > improves by around 4.3%~6.3% with this patch:
> >
> > w/o patch:
> > numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5
> > STREAM copy latency: 3.36 nanoseconds
> > STREAM copy bandwidth: 57072.50 MB/sec
> > STREAM scale latency: 3.40 nanoseconds
> > STREAM scale bandwidth: 56542.52 MB/sec
> > STREAM add latency: 5.10 nanoseconds
> > STREAM add bandwidth: 56482.83 MB/sec
> > STREAM triad latency: 5.14 nanoseconds
> > STREAM triad bandwidth: 56069.52 MB/sec
> >
> > w/ patch:
> > $ numactl -N 0 /usr/lib/lmbench/bin/stream -P 12 -M 1024M -N 5
> > STREAM copy latency: 3.22 nanoseconds
> > STREAM copy bandwidth: 59660.96 MB/sec    ->  +4.5%
> > STREAM scale latency: 3.25 nanoseconds
> > STREAM scale bandwidth: 59002.29 MB/sec   ->  +4.3%
> > STREAM add latency: 4.80 nanoseconds
> > STREAM add bandwidth: 60036.62 MB/sec     ->  +6.3%
> > STREAM triad latency: 4.86 nanoseconds
> > STREAM triad bandwidth: 59228.30 MB/sec   ->  +5.6%
> >
> > On the other hand, while doing WAKE_AFFINE, this patch will try to find
> > a core in the target cluster before scanning the whole llc domain. So it
> > helps gather related tasks within one cluster.
> 
> Could you split this patch in 2 patches ? One for adding a cluster
> sched domain level and one for modifying the wake up path ?

Yes. If this is helpful, I would like to split it into two patches.

> 
> This would ease the review and I would be curious about the impact of
> each feature in the performance. In particular, I'm still not
> convinced that the modification of the wakeup path is the root of the
> hackbench improvement; especially with g=14 where there should not be
> many idle CPUs with 14*40 tasks on at most 32 CPUs.  IIRC, there was

My understanding is that threads can still be blocked on the pipes, so
CPUs still have some chance to be idle even for a big g. Also note the
default g of hackbench is 10. A toy illustration of the blocking is
below.
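
As a toy illustration (my own sketch, not hackbench's actual code), a
blocking write() to a pipe with no reader puts the writer to sleep once
the pipe buffer (64KB by default) fills up, leaving that CPU idle in
the meantime:

#include <unistd.h>

int main(void)
{
	int fds[2];
	char buf[100] = { 0 };

	if (pipe(fds))
		return 1;
	/* no reader ever drains fds[0], so write() eventually blocks */
	for (;;)
		write(fds[1], buf, sizeof(buf));
}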

Anyway, I'd like to add some tracepoints to get the percentages of how
many wakeup targets are picked from within the cluster and how many are
selected from cpus outside the cluster. A rough sketch of such
instrumentation follows.
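
Something along these lines, as a rough sketch against the v4 code
above (trace_printk() is just for getting quick percentages; a proper
tracepoint would come later):

	/* after the cluster-first scan in select_idle_cpu(): */
	if (sched_cluster_active()) {
		i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
		if ((unsigned int)i < nr_cpumask_bits) {
			trace_printk("wakee %d got idle cpu %d inside cluster of %d\n",
				     p->pid, i, target);
			idle_cpu = i;
			goto done;
		}
		...
	}
	/* and a second trace_printk after the llc-wide fallback scan */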

> no obvious improvement with the changes in select_idle_cpu unless you
> hack the behavior to not fall back to llc domain
> 

You have a good memory. In a very old version I once mentioned that. But
at that time, I didn't decrease nr after scanning the cluster, so it was
scanning at least 8 cpus (4 within the cluster, plus at least 4 outside
it). I guess that is why my hack to not fall back to the llc domain
could bring an actual hackbench improvement.

> > We run the below hackbench with different -g parameters from 2 to 14;
> > for each g, we run the command 10 times and take the average time.
> > $ numactl -N 0 hackbench -p -T -l 20000 -g $1
> >
> > hackbench will report the time which is needed to complete a certain number
> > of messages transmissions between a certain number of tasks, for example:
> > $ numactl -N 0 hackbench -p -T -l 20000 -g 10
> > Running in threaded mode with 10 groups using 40 file descriptors each
> > (== 400 tasks)
> > Each sender will pass 20000 messages of 100 bytes
> > Time: 8.874
> >
> > The below is the result of hackbench w/ and w/o the patch:
> > g=    2      4     6       8      10     12      14
> > w/o: 1.9596 4.0506 5.9654 8.0068 9.8147 11.4900 13.1163
> > w/ : 1.9362 3.9197 5.6570 7.1376 8.5263 10.0512 11.3256
> >             +3.3%  +5.2%  +10.9% +13.2%  +12.8%  +13.7%
> >
> > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > ---
> > -v4:
> >   * rebased to tip/sched/core with the latest unified code of select_idle_cpu
> >   * also added benchmark data of spreading unrelated tasks
> >   * avoided the iteration of sched_domain by moving to a static_key
> >     (addressing Vincent's comment)
> >
> >  arch/arm64/Kconfig             |  7 +++++
> >  include/linux/sched/cluster.h  | 19 ++++++++++++
> >  include/linux/sched/sd_flags.h |  9 ++++++
> >  include/linux/sched/topology.h |  7 +++++
> >  include/linux/topology.h       |  7 +++++
> >  kernel/sched/core.c            | 18 ++++++++++++
> >  kernel/sched/fair.c            | 66
> +++++++++++++++++++++++++++++++++---------
> >  kernel/sched/sched.h           |  1 +
> >  kernel/sched/topology.c        |  6 ++++
> >  9 files changed, 126 insertions(+), 14 deletions(-)
> >  create mode 100644 include/linux/sched/cluster.h
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index f39568b..158b0fa 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -971,6 +971,13 @@ config SCHED_MC
> >           making when dealing with multi-core CPU chips at a cost of slightly
> >           increased overhead in some places. If unsure say N here.
> >
> > +config SCHED_CLUSTER
> > +       bool "Cluster scheduler support"
> > +       help
> > +         Cluster scheduler support improves the CPU scheduler's decision
> >          making when dealing with machines that have clusters (CPUs
> >          sharing an internal bus or LLC cache tags). If unsure say N here.
> > +
> >  config SCHED_SMT
> >         bool "SMT scheduler support"
> >         help
> > diff --git a/include/linux/sched/cluster.h b/include/linux/sched/cluster.h
> > new file mode 100644
> > index 0000000..ea6c475
> > --- /dev/null
> > +++ b/include/linux/sched/cluster.h
> > @@ -0,0 +1,19 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _LINUX_SCHED_CLUSTER_H
> > +#define _LINUX_SCHED_CLUSTER_H
> > +
> > +#include <linux/static_key.h>
> > +
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +extern struct static_key_false sched_cluster_present;
> > +
> > +static __always_inline bool sched_cluster_active(void)
> > +{
> > +       return static_branch_likely(&sched_cluster_present);
> > +}
> > +#else
> > +static inline bool sched_cluster_active(void) { return false; }
> > +
> > +#endif
> > +
> > +#endif
> > diff --git a/include/linux/sched/sd_flags.h
> b/include/linux/sched/sd_flags.h
> > index 34b21e9..fc3c894 100644
> > --- a/include/linux/sched/sd_flags.h
> > +++ b/include/linux/sched/sd_flags.h
> > @@ -100,6 +100,15 @@
> >  SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
> >
> >  /*
> > + * Domain members share CPU cluster resources (i.e. llc cache tags)
> > + *
> > + * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer
> share
> > + *               the cluster resources (such as llc tags and internal bus)
> > + * NEEDS_GROUPS: Caches are shared between groups.
> > + */
> > +SD_FLAG(SD_SHARE_CLS_RESOURCES, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
> > +
> > +/*
> >   * Domain members share CPU package resources (i.e. caches)
> >   *
> >   * SHARED_CHILD: Set from the base domain up until spanned CPUs no longer
> share
> > diff --git a/include/linux/sched/topology.h
> b/include/linux/sched/topology.h
> > index 8f0f778..846fcac 100644
> > --- a/include/linux/sched/topology.h
> > +++ b/include/linux/sched/topology.h
> > @@ -42,6 +42,13 @@ static inline int cpu_smt_flags(void)
> >  }
> >  #endif
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +static inline int cpu_cluster_flags(void)
> > +{
> > +       return SD_SHARE_CLS_RESOURCES | SD_SHARE_PKG_RESOURCES;
> > +}
> > +#endif
> > +
> >  #ifdef CONFIG_SCHED_MC
> >  static inline int cpu_core_flags(void)
> >  {
> > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > index 80d27d7..0b3704a 100644
> > --- a/include/linux/topology.h
> > +++ b/include/linux/topology.h
> > @@ -212,6 +212,13 @@ static inline const struct cpumask *cpu_smt_mask(int
> cpu)
> >  }
> >  #endif
> >
> > +#if defined(CONFIG_SCHED_CLUSTER) && !defined(cpu_cluster_mask)
> > +static inline const struct cpumask *cpu_cluster_mask(int cpu)
> > +{
> > +       return topology_cluster_cpumask(cpu);
> > +}
> > +#endif
> > +
> >  static inline const struct cpumask *cpu_cpu_mask(int cpu)
> >  {
> >         return cpumask_of_node(cpu_to_node(cpu));
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 88a2e2b..d805e59 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -7797,6 +7797,16 @@ int sched_cpu_activate(unsigned int cpu)
> >         if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> >                 static_branch_inc_cpuslocked(&sched_smt_present);
> >  #endif
> > +
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       /*
> > +        * When going up, increment the number of cluster cpus with
> > +        * cluster present.
> > +        */
> > +       if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> > +               static_branch_inc_cpuslocked(&sched_cluster_present);
> > +#endif
> > +
> >         set_cpu_active(cpu, true);
> >
> >         if (sched_smp_initialized) {
> > @@ -7873,6 +7883,14 @@ int sched_cpu_deactivate(unsigned int cpu)
> >                 static_branch_dec_cpuslocked(&sched_smt_present);
> >  #endif
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       /*
> > +        * When going down, decrement the number of cpus with cluster present.
> > +        */
> > +       if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> > +               static_branch_dec_cpuslocked(&sched_cluster_present);
> > +#endif
> > +
> >         if (!sched_smp_initialized)
> >                 return 0;
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8a8bd7b..3db7b07 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6009,6 +6009,11 @@ static inline int __select_idle_cpu(int cpu)
> >         return -1;
> >  }
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
> > +EXPORT_SYMBOL_GPL(sched_cluster_present);
> > +#endif
> > +
> >  #ifdef CONFIG_SCHED_SMT
> >  DEFINE_STATIC_KEY_FALSE(sched_smt_present);
> >  EXPORT_SYMBOL_GPL(sched_smt_present);
> > @@ -6116,6 +6121,26 @@ static inline int select_idle_core(struct task_struct
> *p, int core, struct cpuma
> >
> >  #endif /* CONFIG_SCHED_SMT */
> >
> > +static inline int _select_idle_cpu(bool smt, struct task_struct *p, int
> target, struct cpumask *cpus, int *idle_cpu, int *nr)
> > +{
> > +       int cpu, i;
> > +
> > +       for_each_cpu_wrap(cpu, cpus, target) {
> > +               if (smt) {
> > +                       i = select_idle_core(p, cpu, cpus, idle_cpu);
> > +               } else {
> > +                       if (!--*nr)
> > +                               return -1;
> > +                       i = __select_idle_cpu(cpu);
> > +               }
> > +
> > +               if ((unsigned int)i < nr_cpumask_bits)
> > +                       return i;
> > +       }
> > +
> > +       return -1;
> > +}
> > +
> >  /*
> >   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against
> the
> > @@ -6124,7 +6149,7 @@ static inline int select_idle_core(struct task_struct
> *p, int core, struct cpuma
> >  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd,
> int target)
> >  {
> >         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> > -       int i, cpu, idle_cpu = -1, nr = INT_MAX;
> > +       int i, idle_cpu = -1, nr = INT_MAX;
> >         bool smt = test_idle_cores(target, false);
> >         int this = smp_processor_id();
> >         struct sched_domain *this_sd;
> > @@ -6134,7 +6159,12 @@ static int select_idle_cpu(struct task_struct *p,
> struct sched_domain *sd, int t
> >         if (!this_sd)
> >                 return -1;
> >
> > -       cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +       if (!sched_cluster_active())
> > +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       if (sched_cluster_active())
> > +               cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
> > +#endif
> >
> >         if (sched_feat(SIS_PROP) && !smt) {
> >                 u64 avg_cost, avg_idle, span_avg;
> > @@ -6155,24 +6185,32 @@ static int select_idle_cpu(struct task_struct *p,
> struct sched_domain *sd, int t
> >                 time = cpu_clock(this);
> >         }
> >
> > -       for_each_cpu_wrap(cpu, cpus, target) {
> > -               if (smt) {
> > -                       i = select_idle_core(p, cpu, cpus, &idle_cpu);
> > -                       if ((unsigned int)i < nr_cpumask_bits)
> > -                               return i;
> > +       /* scan cluster before scanning the whole llc */
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       if (sched_cluster_active()) {
> > +               i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> > +               if ((unsigned int) i < nr_cpumask_bits) {
> > +                       idle_cpu = i;
> > +                       goto done;
> > +               } else if (nr <= 0)
> > +                       return -1;
> >
> > -               } else {
> > -                       if (!--nr)
> > -                               return -1;
> > -                       idle_cpu = __select_idle_cpu(cpu);
> > -                       if ((unsigned int)idle_cpu < nr_cpumask_bits)
> > -                               break;
> > -               }
> > +               cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +               cpumask_andnot(cpus, cpus, cpu_cluster_mask(target));
> >         }
> > +#endif
> > +
> > +       i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> > +       if ((unsigned int) i < nr_cpumask_bits) {
> > +               idle_cpu = i;
> > +               goto done;
> > +       } else if (nr <= 0)
> > +               return -1;
> >
> >         if (smt)
> >                 set_idle_cores(this, false);
> >
> > +done:
> >         if (sched_feat(SIS_PROP) && !smt) {
> >                 time = cpu_clock(this) - time;
> >                 update_avg(&this_sd->avg_scan_cost, time);
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 10a1522..48a020f 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -6,6 +6,7 @@
> >
> >  #include <linux/sched/autogroup.h>
> >  #include <linux/sched/clock.h>
> > +#include <linux/sched/cluster.h>
> >  #include <linux/sched/coredump.h>
> >  #include <linux/sched/cpufreq.h>
> >  #include <linux/sched/cputime.h>
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index 09d3504..d019c25 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -1361,6 +1361,7 @@ static void claim_allocations(int cpu, struct
> sched_domain *sd)
> >   */
> >  #define TOPOLOGY_SD_FLAGS              \
> >         (SD_SHARE_CPUCAPACITY   |       \
> > +        SD_SHARE_CLS_RESOURCES |       \
> >          SD_SHARE_PKG_RESOURCES |       \
> >          SD_NUMA                |       \
> >          SD_ASYM_PACKING)
> > @@ -1480,6 +1481,11 @@ static void claim_allocations(int cpu, struct
> sched_domain *sd)
> >  #ifdef CONFIG_SCHED_SMT
> >         { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
> >  #endif
> > +
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +       { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
> > +#endif
> > +
> >  #ifdef CONFIG_SCHED_MC
> >         { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> >  #endif
> > --
> > 1.8.3.1
> >
> >
Thanks
Barry

^ permalink raw reply	[flat|nested] 28+ messages in thread

* RE: [Linuxarm]  Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86
  2021-03-03 18:34       ` Tim Chen
@ 2021-03-08 22:30         ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 28+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-03-08 22:30 UTC (permalink / raw)
  To: Tim Chen, Peter Zijlstra
  Cc: catalin.marinas, will, rjw, vincent.guittot, bp, tglx, mingo,
	lenb, dietmar.eggemann, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Tim Chen [mailto:tim.c.chen@linux.intel.com]
> Sent: Thursday, March 4, 2021 7:34 AM
> To: Peter Zijlstra <peterz@infradead.org>; Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com>
> Cc: catalin.marinas@arm.com; will@kernel.org; rjw@rjwysocki.net;
> vincent.guittot@linaro.org; bp@alien8.de; tglx@linutronix.de;
> mingo@redhat.com; lenb@kernel.org; dietmar.eggemann@arm.com;
> rostedt@goodmis.org; bsegall@google.com; mgorman@suse.de;
> msys.mizuma@gmail.com; valentin.schneider@arm.com;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> Subject: [Linuxarm] Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler
> level for x86
> 
> 
> 
> On 3/2/21 2:30 AM, Peter Zijlstra wrote:
> > On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote:
> >> From: Tim Chen <tim.c.chen@linux.intel.com>
> >>
> >> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
> >> is shared among a cluster of cores instead of being exclusive
> >> to one single core.
> >
> > Isn't that most atoms one way or another? Tremont seems to have it per 4
> > cores, but earlier it was per 2 cores.
> >
> 
> Yes, older Atoms have 2 cores sharing L2.  I probably should
> rephrase my comments to not leave the impression that sharing
> L2 among cores is new for Atoms.
> 
> Tremont based Atom CPUs increase the possible load imbalance more
> with 4 cores per L2 instead of 2.  And with more overall cores on a
> die, the chance increases of packing running tasks onto a few clusters
> while leaving others empty on light/medium loaded systems.  We did see
> this effect on Jacobsville.
> 
> So load balancing between the L2 clusters is more
> useful on Tremont based Atom CPUs compared to the older Atoms.

It seems sensible that the more CPUs we have in a cluster, the more
we need the kernel to be aware of its existence.

Tim, is it possible for you to bring up cpu_cluster_mask and
cluster_sibling for x86 so that the topology can be represented
in sysfs and used by the scheduler? It seems your patch lacks this
part; a hypothetical sketch of the missing glue is below.
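
Something like the below, purely as a hypothetical sketch (none of
these names are from your patch, and the per-cpu mask would still need
to be filled in during CPU bringup):

/* arch/x86/include/asm/topology.h -- all names here are assumptions */
DECLARE_PER_CPU(cpumask_var_t, cpu_cluster_map);

#define topology_cluster_id(cpu)	(cpu_data(cpu).cluster_id)
#define topology_cluster_cpumask(cpu)	(per_cpu(cpu_cluster_map, cpu))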

BTW, I wonder if x86 could improve KMP_AFFINITY by leveraging the
cluster topology level:
https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/optimization-and-programming-guide/openmp-support/openmp-library-support/thread-affinity-interface-linux-and-windows.html

KMP_AFFINITY has thread affinity modes like compact and scatter; it
seems "compact" and "scatter" could also use the cluster information,
since we are struggling with the same "compact" vs. "scatter" trade-off
here in this patchset :-)

Thanks
Barry

^ permalink raw reply	[flat|nested] 28+ messages in thread

* RE: [RFC PATCH v4 1/3] topology: Represent clusters of CPUs within a die.
  2021-03-01 22:59   ` Barry Song
@ 2021-03-15  3:11     ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 28+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-03-15  3:11 UTC (permalink / raw)
  To: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman, Jonathan Cameron
  Cc: msys.mizuma, valentin.schneider, gregkh, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Song Bao Hua (Barry Song)
> Sent: Tuesday, March 2, 2021 12:00 PM
> To: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de
> Cc: msys.mizuma@gmail.com; valentin.schneider@arm.com;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com; Jonathan
> Cameron <jonathan.cameron@huawei.com>; Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com>
> Subject: [RFC PATCH v4 1/3] topology: Represent clusters of CPUs within a die.
> 
> From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> 
> Both ACPI and DT provide the ability to describe additional layers of
> topology between that of individual cores and higher level constructs
> such as the level at which the last level cache is shared.
> In ACPI this can be represented in PPTT as a Processor Hierarchy
> Node Structure [1] that is the parent of the CPU cores and in turn
> has a parent Processor Hierarchy Nodes Structure representing
> a higher level of topology.
> 
> For example Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
> cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> has local L3 tag. On the other hand, each cluster will share some
> internal system bus.
> 
> +-----------------------------------+                          +---------+
> |  +------+    +------+            +---------------------------+         |
> |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> |  +------+    +------+             |    |           |         |         |
> |                                   +----+    L3     |         |         |
> |  +------+    +------+   cluster   |    |    tag    |         |         |
> |  | CPU2 |    | CPU3 |             |    |           |         |         |
> |  +------+    +------+             |    +-----------+         |         |
> |                                   |                          |         |
> +-----------------------------------+                          |         |
> +-----------------------------------+                          |         |
> |  +------+    +------+             +--------------------------+         |
> |  |      |    |      |             |    +-----------+         |         |
> |  +------+    +------+             |    |           |         |         |
> |                                   |    |    L3     |         |         |
> |  +------+    +------+             +----+    tag    |         |         |
> |  |      |    |      |             |    |           |         |         |
> |  +------+    +------+             |    +-----------+         |         |
> |                                   |                          |         |
> +-----------------------------------+                          |   L3    |
>                                                                |   data  |
> +-----------------------------------+                          |         |
> |  +------+    +------+             |    +-----------+         |         |
> |  |      |    |      |             |    |           |         |         |
> |  +------+    +------+             +----+    L3     |         |         |
> |                                   |    |    tag    |         |         |
> |  +------+    +------+             |    |           |         |         |
> |  |      |    |      |            ++    +-----------+         |         |
> |  +------+    +------+            |---------------------------+         |
> +-----------------------------------|                          |         |
> +-----------------------------------|                          |         |
> |  +------+    +------+            +---------------------------+         |
> |  |      |    |      |             |    +-----------+         |         |
> |  +------+    +------+             |    |           |         |         |
> |                                   +----+    L3     |         |         |
> |  +------+    +------+             |    |    tag    |         |         |
> |  |      |    |      |             |    |           |         |         |
> |  +------+    +------+             |    +-----------+         |         |
> |                                   |                          |         |
> +-----------------------------------+                          |         |
> +-----------------------------------+                          |         |
> |  +------+    +------+             +--------------------------+         |
> |  |      |    |      |             |   +-----------+          |         |
> |  +------+    +------+             |   |           |          |         |
> |                                   |   |    L3     |          |         |
> |  +------+    +------+             +---+    tag    |          |         |
> |  |      |    |      |             |   |           |          |         |
> |  +------+    +------+             |   +-----------+          |         |
> |                                   |                          |         |
> +-----------------------------------+                          |         |
> +-----------------------------------+                         ++         |
> |  +------+    +------+             +--------------------------+         |
> |  |      |    |      |             |  +-----------+           |         |
> |  +------+    +------+             |  |           |           |         |
> |                                   |  |    L3     |           |         |
> |  +------+    +------+             +--+    tag    |           |         |
> |  |      |    |      |             |  |           |           |         |
> |  +------+    +------+             |  +-----------+           |         |
> |                                   |                          +---------+
> +-----------------------------------+
> 
> That means the cost to transfer ownership of a cacheline between CPUs
> within a cluster is lower than between CPUs in different clusters on
> the same die. Hence, it can make sense to tell the scheduler to use
> the cache affinity of the cluster to make better decision on thread
> migration.
> 
> This patch simply exposes this information to userspace libraries
> like hwloc by providing cluster_cpus and related sysfs attributes.
> PoC of HWLOC support at [2].
> 
> Note this patch only handles the ACPI case.
> 
> Special consideration is needed for SMT processors, where it is
> necessary to move 2 levels up the hierarchy from the leaf nodes
> (thus skipping the processor core level).
> 
> Currently the ID provided is the offset of the Processor
> Hierarchy Nodes Structure within PPTT.  Whilst this is unique
> it is not terribly elegant so alternative suggestions welcome.
> 
> Note that arm64 / ACPI does not provide any means of identifying
> a die level in the topology but that may be unrelated to the cluster
> level.
> 
> [1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
>     structure (Type 0)
> [2] https://github.com/hisilicon/hwloc/tree/linux-cluster
> 
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> ---
>   -v4:
>   * used acpi_cpu_id for acpi_find_processor_node(addressing Masa's comment)
> 
>  Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
>  arch/arm64/kernel/topology.c              |  2 +
>  drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++++
>  drivers/base/arch_topology.c              | 14 +++++++
>  drivers/base/topology.c                   | 10 +++++
>  include/linux/acpi.h                      |  5 +++
>  include/linux/arch_topology.h             |  5 +++
>  include/linux/topology.h                  |  6 +++
>  8 files changed, 127 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/admin-guide/cputopology.rst
> b/Documentation/admin-guide/cputopology.rst
> index b90dafc..f9d3745 100644
> --- a/Documentation/admin-guide/cputopology.rst
> +++ b/Documentation/admin-guide/cputopology.rst
> @@ -24,6 +24,12 @@ core_id:
>  	identifier (rather than the kernel's).  The actual value is
>  	architecture and platform dependent.
> 
> +cluster_id:
> +
> +	the Cluster ID of cpuX.  Typically it is the hardware platform's
> +	identifier (rather than the kernel's).  The actual value is
> +	architecture and platform dependent.
> +
>  book_id:
> 
>  	the book ID of cpuX. Typically it is the hardware platform's
> @@ -56,6 +62,14 @@ package_cpus_list:
>  	human-readable list of CPUs sharing the same physical_package_id.
>  	(deprecated name: "core_siblings_list")
> 
> +cluster_cpus:
> +
> +	internal kernel map of CPUs within the same cluster.
> +
> +cluster_cpus_list:
> +
> +	human-readable list of CPUs within the same cluster.
> +
>  die_cpus:
> 
>  	internal kernel map of CPUs within the same die.
> @@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
> 
>  	#define topology_physical_package_id(cpu)
>  	#define topology_die_id(cpu)
> +	#define topology_cluster_id(cpu)
>  	#define topology_core_id(cpu)
>  	#define topology_book_id(cpu)
>  	#define topology_drawer_id(cpu)
>  	#define topology_sibling_cpumask(cpu)
>  	#define topology_core_cpumask(cpu)
> +	#define topology_cluster_cpumask(cpu)
>  	#define topology_die_cpumask(cpu)
>  	#define topology_book_cpumask(cpu)
>  	#define topology_drawer_cpumask(cpu)
> @@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
> 
>  1) topology_physical_package_id: -1
>  2) topology_die_id: -1
> -3) topology_core_id: 0
> -4) topology_sibling_cpumask: just the given CPU
> -5) topology_core_cpumask: just the given CPU
> -6) topology_die_cpumask: just the given CPU
> +3) topology_cluster_id: -1
> +4) topology_core_id: 0
> +5) topology_sibling_cpumask: just the given CPU
> +6) topology_core_cpumask: just the given CPU
> +7) topology_cluster_cpumask: just the given CPU
> +8) topology_die_cpumask: just the given CPU
> 
>  For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
>  default definitions for topology_book_id() and topology_book_cpumask().
> diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> index f6faa69..fe076b3 100644
> --- a/arch/arm64/kernel/topology.c
> +++ b/arch/arm64/kernel/topology.c
> @@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
>  			cpu_topology[cpu].thread_id  = -1;
>  			cpu_topology[cpu].core_id    = topology_id;
>  		}
> +		topology_id = find_acpi_cpu_topology_cluster(cpu);
> +		cpu_topology[cpu].cluster_id = topology_id;
>  		topology_id = find_acpi_cpu_topology_package(cpu);
>  		cpu_topology[cpu].package_id = topology_id;
> 
> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
> index 4ae9335..11f8b02 100644
> --- a/drivers/acpi/pptt.c
> +++ b/drivers/acpi/pptt.c
> @@ -737,6 +737,69 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
>  }
> 
>  /**
> + * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
> + * @cpu: Kernel logical CPU number
> + *
> + * Determine a topology unique cluster ID for the given CPU/thread.
> + * This ID can then be used to group peers, which will have matching ids.
> + *
> + * The cluster, if present, is the level of topology above CPUs. In a
> + * multi-thread CPU, it will be the level above the CPU, not the thread.
> + * It may not exist in single CPU systems. In simple multi-CPU systems,
> + * it may be equal to the package topology level.
> + *
> + * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found,
> + * or there is no topology level above the CPU.
> + * Otherwise returns a value which represents the cluster for this CPU.
> + */
> +
> +int find_acpi_cpu_topology_cluster(unsigned int cpu)
> +{
> +	struct acpi_table_header *table;
> +	acpi_status status;
> +	struct acpi_pptt_processor *cpu_node, *cluster_node;
> +	u32 acpi_cpu_id;
> +	int retval;
> +	int is_thread;
> +
> +	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
> +	if (ACPI_FAILURE(status)) {
> +		acpi_pptt_warn_missing();
> +		return -ENOENT;
> +	}
> +
> +	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
> +	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
> +	if (cpu_node == NULL || !cpu_node->parent) {
> +		retval = -ENOENT;
> +		goto put_table;
> +	}
> +
> +	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
> +	cluster_node = fetch_pptt_node(table, cpu_node->parent);
> +	if (cluster_node == NULL) {
> +		retval = -ENOENT;
> +		goto put_table;
> +	}
> +	if (is_thread) {
> +		if (!cluster_node->parent) {
> +			retval = -ENOENT;
> +			goto put_table;
> +		}
> +		cluster_node = fetch_pptt_node(table, cluster_node->parent);
> +		if (cluster_node == NULL) {
> +			retval = -ENOENT;
> +			goto put_table;
> +		}
> +	}
> +	retval = ACPI_PTR_DIFF(cluster_node, table);
> +put_table:
> +	acpi_put_table(table);
> +
> +	return retval;
> +}
> +
> +/**
>   * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
>   * @cpu: Kernel logical CPU number
>   *
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index de8587c..3079232 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
>  	return core_mask;
>  }
> 
> +const struct cpumask *cpu_clustergroup_mask(int cpu)
> +{
> +	return &cpu_topology[cpu].cluster_sibling;
> +}
> +
>  void update_siblings_masks(unsigned int cpuid)
>  {
>  	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
> @@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
>  		if (cpuid_topo->package_id != cpu_topo->package_id)
>  			continue;
> 
> +		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
> +			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
> +			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
> +		}
> +

I am seeing a machine without clusters still getting a cluster
sibling mask, so I guess we need the fix below:

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 3079232ed8ed..ccd4b3b5cc6f 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -528,7 +528,8 @@ void update_siblings_masks(unsigned int cpuid)
                if (cpuid_topo->package_id != cpu_topo->package_id)
                        continue;

-               if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
+               if (cpuid_topo->cluster_id == cpu_topo->cluster_id &&
+                   cpu_topo->cluster_id != -1) {
                        cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
                        cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
                }
@@ -568,6 +569,7 @@ void __init reset_cpu_topology(void)
                struct cpu_topology *cpu_topo = &cpu_topology[cpu];

                cpu_topo->thread_id = -1;
+               cpu_topo->cluster_id = -1;
                cpu_topo->core_id = -1;
                cpu_topo->package_id = -1;
                cpu_topo->llc_id = -1;
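
With the fix above, a quick userspace check (a minimal sketch; the
sysfs path follows the cputopology.rst addition in this patch) shows
what cluster_cpus_list reports on such a machine:

#include <stdio.h>

int main(void)
{
	char buf[64];
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/"
			"cluster_cpus_list", "r");

	if (!f)
		return 1;
	if (fgets(buf, sizeof(buf), f))
		printf("cpu0 cluster: %s", buf);
	fclose(f);
	return 0;
}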

Hi Jonathan, thoughts?

Thanks
Barry


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [RFC PATCH v4 1/3] topology: Represent clusters of CPUs within a die.
  2021-03-15  3:11     ` Song Bao Hua (Barry Song)
@ 2021-03-15 10:52       ` Jonathan Cameron
  -1 siblings, 0 replies; 28+ messages in thread
From: Jonathan Cameron @ 2021-03-15 10:52 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, peterz, dietmar.eggemann, rostedt, bsegall,
	mgorman, msys.mizuma, valentin.schneider, gregkh, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa

On Mon, 15 Mar 2021 03:11:06 +0000
"Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com> wrote:

> > -----Original Message-----
> > From: Song Bao Hua (Barry Song)
> > Sent: Tuesday, March 2, 2021 12:00 PM
> > To: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> > tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> > dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> > mgorman@suse.de
> > Cc: msys.mizuma@gmail.com; valentin.schneider@arm.com;
> > gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> > juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> > aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> > linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> > xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> > guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> > <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com; Jonathan
> > Cameron <jonathan.cameron@huawei.com>; Song Bao Hua (Barry Song)
> > <song.bao.hua@hisilicon.com>
> > Subject: [RFC PATCH v4 1/3] topology: Represent clusters of CPUs within a die.
> > 
> > From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > 
> > Both ACPI and DT provide the ability to describe additional layers of
> > topology between that of individual cores and higher level constructs
> > such as the level at which the last level cache is shared.
> > In ACPI this can be represented in PPTT as a Processor Hierarchy
> > Node Structure [1] that is the parent of the CPU cores and in turn
> > has a parent Processor Hierarchy Nodes Structure representing
> > a higher level of topology.
> > 
> > For example Kunpeng 920 has 6 or 8 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data, but each cluster
> > has local L3 tag. On the other hand, each cluster will share some
> > internal system bus.
> > 
> > +-----------------------------------+                          +---------+
> > |  +------+    +------+            +---------------------------+         |
> > |  | CPU0 |    | cpu1 |             |    +-----------+         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |                                   +----+    L3     |         |         |
> > |  +------+    +------+   cluster   |    |    tag    |         |         |
> > |  | CPU2 |    | CPU3 |             |    |           |         |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |         |
> > +-----------------------------------+                          |         |
> > |  +------+    +------+             +--------------------------+         |
> > |  |      |    |      |             |    +-----------+         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |                                   |    |    L3     |         |         |
> > |  +------+    +------+             +----+    tag    |         |         |
> > |  |      |    |      |             |    |           |         |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |   L3    |
> >                                                                |   data  |
> > +-----------------------------------+                          |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |  |      |    |      |             |    |           |         |         |
> > |  +------+    +------+             +----+    L3     |         |         |
> > |                                   |    |    tag    |         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |  |      |    |      |            ++    +-----------+         |         |
> > |  +------+    +------+            |---------------------------+         |
> > +-----------------------------------|                          |         |
> > +-----------------------------------|                          |         |
> > |  +------+    +------+            +---------------------------+         |
> > |  |      |    |      |             |    +-----------+         |         |
> > |  +------+    +------+             |    |           |         |         |
> > |                                   +----+    L3     |         |         |
> > |  +------+    +------+             |    |    tag    |         |         |
> > |  |      |    |      |             |    |           |         |         |
> > |  +------+    +------+             |    +-----------+         |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |         |
> > +-----------------------------------+                          |         |
> > |  +------+    +------+             +--------------------------+         |
> > |  |      |    |      |             |   +-----------+          |         |
> > |  +------+    +------+             |   |           |          |         |
> > |                                   |   |    L3     |          |         |
> > |  +------+    +------+             +---+    tag    |          |         |
> > |  |      |    |      |             |   |           |          |         |
> > |  +------+    +------+             |   +-----------+          |         |
> > |                                   |                          |         |
> > +-----------------------------------+                          |         |
> > +-----------------------------------+                         ++         |
> > |  +------+    +------+             +--------------------------+         |
> > |  |      |    |      |             |  +-----------+           |         |
> > |  +------+    +------+             |  |           |           |         |
> > |                                   |  |    L3     |           |         |
> > |  +------+    +------+             +--+    tag    |           |         |
> > |  |      |    |      |             |  |           |           |         |
> > |  +------+    +------+             |  +-----------+           |         |
> > |                                   |                          +---------+
> > +-----------------------------------+
> > 
> > That means the cost to transfer ownership of a cacheline between CPUs
> > within a cluster is lower than between CPUs in different clusters on
> > the same die. Hence, it can make sense to tell the scheduler to use
> > the cache affinity of the cluster to make better decisions on thread
> > migration.
> > 
> > This patch simply exposes this information to userspace libraries
> > like hwloc by providing cluster_cpus and related sysfs attributes.
> > PoC of HWLOC support at [2].
> > 
> > Note this patch only handles the ACPI case.
> > 
> > Special consideration is needed for SMT processors, where it is
> > necessary to move 2 levels up the hierarchy from the leaf nodes
> > (thus skipping the processor core level).
> > 
> > Currently the ID provided is the offset of the Processor
> > Hierarchy Nodes Structure within PPTT.  Whilst this is unique
> > it is not terribly elegant so alternative suggestions welcome.
> > 
> > Note that arm64 / ACPI does not provide any means of identifying
> > a die level in the topology but that may be unrelated to the cluster
> > level.
> > 
> > [1] ACPI Specification 6.3 - section 5.2.29.1 processor hierarchy node
> >     structure (Type 0)
> > [2] https://github.com/hisilicon/hwloc/tree/linux-cluster
> > 
> > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
> > ---
> >   -v4:
> >   * used acpi_cpu_id for acpi_find_processor_node(addressing Masa's comment)
> > 
> >  Documentation/admin-guide/cputopology.rst | 26 +++++++++++--
> >  arch/arm64/kernel/topology.c              |  2 +
> >  drivers/acpi/pptt.c                       | 63 +++++++++++++++++++++++++++++++
> >  drivers/base/arch_topology.c              | 14 +++++++
> >  drivers/base/topology.c                   | 10 +++++
> >  include/linux/acpi.h                      |  5 +++
> >  include/linux/arch_topology.h             |  5 +++
> >  include/linux/topology.h                  |  6 +++
> >  8 files changed, 127 insertions(+), 4 deletions(-)
> > 
> > diff --git a/Documentation/admin-guide/cputopology.rst
> > b/Documentation/admin-guide/cputopology.rst
> > index b90dafc..f9d3745 100644
> > --- a/Documentation/admin-guide/cputopology.rst
> > +++ b/Documentation/admin-guide/cputopology.rst
> > @@ -24,6 +24,12 @@ core_id:
> >  	identifier (rather than the kernel's).  The actual value is
> >  	architecture and platform dependent.
> > 
> > +cluster_id:
> > +
> > +	the Cluster ID of cpuX.  Typically it is the hardware platform's
> > +	identifier (rather than the kernel's).  The actual value is
> > +	architecture and platform dependent.
> > +
> >  book_id:
> > 
> >  	the book ID of cpuX. Typically it is the hardware platform's
> > @@ -56,6 +62,14 @@ package_cpus_list:
> >  	human-readable list of CPUs sharing the same physical_package_id.
> >  	(deprecated name: "core_siblings_list")
> > 
> > +cluster_cpus:
> > +
> > +	internal kernel map of CPUs within the same cluster.
> > +
> > +cluster_cpus_list:
> > +
> > +	human-readable list of CPUs within the same cluster.
> > +
> >  die_cpus:
> > 
> >  	internal kernel map of CPUs within the same die.
> > @@ -96,11 +110,13 @@ these macros in include/asm-XXX/topology.h::
> > 
> >  	#define topology_physical_package_id(cpu)
> >  	#define topology_die_id(cpu)
> > +	#define topology_cluster_id(cpu)
> >  	#define topology_core_id(cpu)
> >  	#define topology_book_id(cpu)
> >  	#define topology_drawer_id(cpu)
> >  	#define topology_sibling_cpumask(cpu)
> >  	#define topology_core_cpumask(cpu)
> > +	#define topology_cluster_cpumask(cpu)
> >  	#define topology_die_cpumask(cpu)
> >  	#define topology_book_cpumask(cpu)
> >  	#define topology_drawer_cpumask(cpu)
> > @@ -116,10 +132,12 @@ not defined by include/asm-XXX/topology.h:
> > 
> >  1) topology_physical_package_id: -1
> >  2) topology_die_id: -1
> > -3) topology_core_id: 0
> > -4) topology_sibling_cpumask: just the given CPU
> > -5) topology_core_cpumask: just the given CPU
> > -6) topology_die_cpumask: just the given CPU
> > +3) topology_cluster_id: -1
> > +4) topology_core_id: 0
> > +5) topology_sibling_cpumask: just the given CPU
> > +6) topology_core_cpumask: just the given CPU
> > +7) topology_cluster_cpumask: just the given CPU
> > +8) topology_die_cpumask: just the given CPU
> > 
> >  For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
> >  default definitions for topology_book_id() and topology_book_cpumask().
> > diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> > index f6faa69..fe076b3 100644
> > --- a/arch/arm64/kernel/topology.c
> > +++ b/arch/arm64/kernel/topology.c
> > @@ -103,6 +103,8 @@ int __init parse_acpi_topology(void)
> >  			cpu_topology[cpu].thread_id  = -1;
> >  			cpu_topology[cpu].core_id    = topology_id;
> >  		}
> > +		topology_id = find_acpi_cpu_topology_cluster(cpu);
> > +		cpu_topology[cpu].cluster_id = topology_id;
> >  		topology_id = find_acpi_cpu_topology_package(cpu);
> >  		cpu_topology[cpu].package_id = topology_id;
> > 
> > diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
> > index 4ae9335..11f8b02 100644
> > --- a/drivers/acpi/pptt.c
> > +++ b/drivers/acpi/pptt.c
> > @@ -737,6 +737,69 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
> >  }
> > 
> >  /**
> > + * find_acpi_cpu_topology_cluster() - Determine a unique CPU cluster value
> > + * @cpu: Kernel logical CPU number
> > + *
> > + * Determine a topology unique cluster ID for the given CPU/thread.
> > + * This ID can then be used to group peers, which will have matching ids.
> > + *
> > + * The cluster, if present, is the level of topology above CPUs. In a
> > + * multi-thread CPU, it will be the level above the CPU, not the thread.
> > + * It may not exist in single CPU systems. In simple multi-CPU systems,
> > + * it may be equal to the package topology level.
> > + *
> > + * Return: -ENOENT if the PPTT doesn't exist, the CPU cannot be found
> > + * or there is no topology level above the CPU.
> > + * Otherwise returns a value which represents the cluster for this CPU.
> > + */
> > +
> > +int find_acpi_cpu_topology_cluster(unsigned int cpu)
> > +{
> > +	struct acpi_table_header *table;
> > +	acpi_status status;
> > +	struct acpi_pptt_processor *cpu_node, *cluster_node;
> > +	u32 acpi_cpu_id;
> > +	int retval;
> > +	int is_thread;
> > +
> > +	status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
> > +	if (ACPI_FAILURE(status)) {
> > +		acpi_pptt_warn_missing();
> > +		return -ENOENT;
> > +	}
> > +
> > +	acpi_cpu_id = get_acpi_id_for_cpu(cpu);
> > +	cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
> > +	if (cpu_node == NULL || !cpu_node->parent) {
> > +		retval = -ENOENT;
> > +		goto put_table;
> > +	}
> > +
> > +	is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
> > +	cluster_node = fetch_pptt_node(table, cpu_node->parent);
> > +	if (cluster_node == NULL) {
> > +		retval = -ENOENT;
> > +		goto put_table;
> > +	}
> > +	if (is_thread) {
> > +		if (!cluster_node->parent) {
> > +			retval = -ENOENT;
> > +			goto put_table;
> > +		}
> > +		cluster_node = fetch_pptt_node(table, cluster_node->parent);
> > +		if (cluster_node == NULL) {
> > +			retval = -ENOENT;
> > +			goto put_table;
> > +		}
> > +	}
> > +	retval = ACPI_PTR_DIFF(cluster_node, table);
> > +put_table:
> > +	acpi_put_table(table);
> > +
> > +	return retval;
> > +}
> > +
> > +/**
> >   * find_acpi_cpu_topology_hetero_id() - Get a core architecture tag
> >   * @cpu: Kernel logical CPU number
> >   *
> > diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> > index de8587c..3079232 100644
> > --- a/drivers/base/arch_topology.c
> > +++ b/drivers/base/arch_topology.c
> > @@ -506,6 +506,11 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
> >  	return core_mask;
> >  }
> > 
> > +const struct cpumask *cpu_clustergroup_mask(int cpu)
> > +{
> > +	return &cpu_topology[cpu].cluster_sibling;
> > +}
> > +
> >  void update_siblings_masks(unsigned int cpuid)
> >  {
> >  	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
> > @@ -523,6 +528,11 @@ void update_siblings_masks(unsigned int cpuid)
> >  		if (cpuid_topo->package_id != cpu_topo->package_id)
> >  			continue;
> > 
> > +		if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
> > +			cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
> > +			cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
> > +		}
> > +  
> 
> I am seeing a machine without clusters still getting cluster siblings
> populated, so I guess we need the below:

Makes sense and is in line with the docs above.

Thanks,

Jonathan

> 
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index 3079232ed8ed..ccd4b3b5cc6f 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -528,7 +528,8 @@ void update_siblings_masks(unsigned int cpuid)
>                 if (cpuid_topo->package_id != cpu_topo->package_id)
>                         continue;
> 
> -               if (cpuid_topo->cluster_id == cpu_topo->cluster_id) {
> +               if (cpuid_topo->cluster_id == cpu_topo->cluster_id &&
> +                   cpu_topo->cluster_id != -1) {
>                         cpumask_set_cpu(cpu, &cpuid_topo->cluster_sibling);
>                         cpumask_set_cpu(cpuid, &cpu_topo->cluster_sibling);
>                 }
> @@ -568,6 +569,7 @@ void __init reset_cpu_topology(void)
>                 struct cpu_topology *cpu_topo = &cpu_topology[cpu];
> 
>                 cpu_topo->thread_id = -1;
> +               cpu_topo->cluster_id = -1;
>                 cpu_topo->core_id = -1;
>                 cpu_topo->package_id = -1;
>                 cpu_topo->llc_id = -1;
> 
> Hi Jonathan, thoughts?
> 
> Thanks
> Barry
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread


* Re: [Linuxarm] Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86
  2021-03-08 22:30         ` Song Bao Hua (Barry Song)
@ 2021-03-15 20:53           ` Tim Chen
  -1 siblings, 0 replies; 28+ messages in thread
From: Tim Chen @ 2021-03-15 20:53 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song), Peter Zijlstra
  Cc: catalin.marinas, will, rjw, vincent.guittot, bp, tglx, mingo,
	lenb, dietmar.eggemann, rostedt, bsegall, mgorman, msys.mizuma,
	valentin.schneider, gregkh, Jonathan Cameron, juri.lelli,
	mark.rutland, sudeep.holla, aubrey.li, linux-arm-kernel,
	linux-kernel, linux-acpi, x86, xuwei (O), Zengtao (B),
	guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa


> It seems sensible that the more CPUs we get in a cluster, the more
> we need the kernel to be aware of its existence.
> 
> Tim, is it possible for you to bring up the cpu_cluster_mask and
> cluster_sibling for x86, so that the topology can be represented
> in sysfs and be used by the scheduler? It seems your patch lacks this
> part.

You mean having something under /sys/devices/system/cpu/cpu0/topology
with cluster information, so that an external program can affinitize to
a cluster if it prefers to do so?
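
(For illustration, a minimal user-space sketch of such a program,
assuming the cluster_cpus_list attribute from patch 1/3 lands as
proposed; the range parsing is deliberately simplified and only handles
a plain "first-last" list:)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	int first, last;
	cpu_set_t set;
	FILE *f;

	/* e.g. "0-3\n" on a machine whose CPU0 sits in a 4-CPU cluster */
	f = fopen("/sys/devices/system/cpu/cpu0/topology/cluster_cpus_list", "r");
	if (!f || fscanf(f, "%d-%d", &first, &last) != 2)
		return 1;
	fclose(f);

	CPU_ZERO(&set);
	while (first <= last)
		CPU_SET(first++, &set);

	/* Pin the calling thread to CPU0's cluster. */
	return sched_setaffinity(0, sizeof(set), &set);
}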

Tim  

> 
> BTW, I wonder if x86 could improve your KMP_AFFINITY support
> by leveraging the cluster topology level.
> https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/optimization-and-programming-guide/openmp-support/openmp-library-support/thread-affinity-interface-linux-and-windows.html
> 
> KMP_AFFINITY has thread affinity modes like compact and scatter;
> it seems these "compact" and "scatter" modes could also use the cluster
> information, as you can see we are also struggling with "compact"
> versus "scatter" placement here in this patchset :-)
> 
> Thanks
> Barry
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread


* RE: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
  2021-03-02 10:43     ` Peter Zijlstra
@ 2021-03-16  7:33       ` Song Bao Hua (Barry Song)
  -1 siblings, 0 replies; 28+ messages in thread
From: Song Bao Hua (Barry Song) @ 2021-03-16  7:33 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: tim.c.chen, catalin.marinas, will, rjw, vincent.guittot, bp,
	tglx, mingo, lenb, dietmar.eggemann, rostedt, bsegall, mgorman,
	msys.mizuma, valentin.schneider, gregkh, Jonathan Cameron,
	juri.lelli, mark.rutland, sudeep.holla, aubrey.li,
	linux-arm-kernel, linux-kernel, linux-acpi, x86, xuwei (O),
	Zengtao (B), guodong.xu, yangyicong, Liguozhu (Kenneth),
	linuxarm, hpa



> -----Original Message-----
> From: Peter Zijlstra [mailto:peterz@infradead.org]
> Sent: Tuesday, March 2, 2021 11:43 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> juri.lelli@redhat.com; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org; x86@kernel.org;
> xuwei (O) <xuwei5@huawei.com>; Zengtao (B) <prime.zeng@hisilicon.com>;
> guodong.xu@linaro.org; yangyicong <yangyicong@huawei.com>; Liguozhu (Kenneth)
> <liguozhu@hisilicon.com>; linuxarm@openeuler.org; hpa@zytor.com
> Subject: Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
> 
> On Tue, Mar 02, 2021 at 11:59:39AM +1300, Barry Song wrote:
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 88a2e2b..d805e59 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -7797,6 +7797,16 @@ int sched_cpu_activate(unsigned int cpu)
> >  	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> >  		static_branch_inc_cpuslocked(&sched_smt_present);
> >  #endif
> > +
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +	/*
> > +	 * When going up, increment the number of cluster cpus with
> > +	 * cluster present.
> > +	 */
> > +	if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> > +		static_branch_inc_cpuslocked(&sched_cluster_present);
> > +#endif
> > +
> >  	set_cpu_active(cpu, true);
> >
> >  	if (sched_smp_initialized) {
> > @@ -7873,6 +7883,14 @@ int sched_cpu_deactivate(unsigned int cpu)
> >  		static_branch_dec_cpuslocked(&sched_smt_present);
> >  #endif
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +	/*
> > +	 * When going down, decrement the number of cpus with cluster present.
> > +	 */
> > +	if (cpumask_weight(cpu_cluster_mask(cpu)) > 1)
> > +		static_branch_dec_cpuslocked(&sched_cluster_present);
> > +#endif
> > +
> >  	if (!sched_smp_initialized)
> >  		return 0;
> 
> I don't think that's correct. IIUC this will mean the
> sched_cluster_present thing will be enabled on anything with SMT (very
> much including x86 big cores after the next patch).
> 
> I'm thinking that at the very least you should check a CLS domain
> exists, but that might be hard at this point, because the sched domains
> haven't been build yet.

We might be able to achieve the same goal with something like:

int cls_wt = cpumask_weight(cpu_cluster_mask(cpu));
if (cls_wt > cpumask_weight(cpu_smt_mask(cpu)) &&
    cls_wt < cpumask_weight(cpu_coregroup_mask(cpu)))
   sched_cluster_present...
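
(The idea being that the cluster level only counts as present when its
span is strictly larger than the SMT mask and strictly smaller than the
LLC mask, so degenerate cases such as an x86 big core, where the
"cluster" would just be the SMT siblings, drop out.)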

> 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8a8bd7b..3db7b07 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6009,6 +6009,11 @@ static inline int __select_idle_cpu(int cpu)
> >  	return -1;
> >  }
> >
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +DEFINE_STATIC_KEY_FALSE(sched_cluster_present);
> > +EXPORT_SYMBOL_GPL(sched_cluster_present);
> 
> I really rather think this shouldn't be exported

Ok. Makes sense.

> 
> > +#endif
> > +
> >  #ifdef CONFIG_SCHED_SMT
> >  DEFINE_STATIC_KEY_FALSE(sched_smt_present);
> >  EXPORT_SYMBOL_GPL(sched_smt_present);
> 
> This is a KVM wart, it needs to know because mitigation crap.
> 

Ok.

> > @@ -6116,6 +6121,26 @@ static inline int select_idle_core(struct task_struct
> *p, int core, struct cpuma
> >
> >  #endif /* CONFIG_SCHED_SMT */
> >
> > +static inline int _select_idle_cpu(bool smt, struct task_struct *p, int
> target, struct cpumask *cpus, int *idle_cpu, int *nr)
> > +{
> > +	int cpu, i;
> > +
> > +	for_each_cpu_wrap(cpu, cpus, target) {
> > +		if (smt) {
> > +			i = select_idle_core(p, cpu, cpus, idle_cpu);
> > +		} else {
> > +			if (!--*nr)
> > +				return -1;
> > +			i = __select_idle_cpu(cpu);
> > +		}
> > +
> > +		if ((unsigned int)i < nr_cpumask_bits)
> > +			return i;
> > +	}
> > +
> > +	return -1;
> > +}
> > +
> >  /*
> >   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
> >   * comparing the average scan cost (tracked in sd->avg_scan_cost) against
> the
> > @@ -6124,7 +6149,7 @@ static inline int select_idle_core(struct task_struct
> *p, int core, struct cpuma
> >  static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd,
> int target)
> >  {
> >  	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> > -	int i, cpu, idle_cpu = -1, nr = INT_MAX;
> > +	int i, idle_cpu = -1, nr = INT_MAX;
> >  	bool smt = test_idle_cores(target, false);
> >  	int this = smp_processor_id();
> >  	struct sched_domain *this_sd;
> > @@ -6134,7 +6159,12 @@ static int select_idle_cpu(struct task_struct *p,
> struct sched_domain *sd, int t
> >  	if (!this_sd)
> >  		return -1;
> >
> > -	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +	if (!sched_cluster_active())
> > +		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +	if (sched_cluster_active())
> > +		cpumask_and(cpus, cpu_cluster_mask(target), p->cpus_ptr);
> > +#endif
> >
> >  	if (sched_feat(SIS_PROP) && !smt) {
> >  		u64 avg_cost, avg_idle, span_avg;
> > @@ -6155,24 +6185,32 @@ static int select_idle_cpu(struct task_struct *p,
> struct sched_domain *sd, int t
> >  		time = cpu_clock(this);
> >  	}
> >
> > -	for_each_cpu_wrap(cpu, cpus, target) {
> > -		if (smt) {
> > -			i = select_idle_core(p, cpu, cpus, &idle_cpu);
> > -			if ((unsigned int)i < nr_cpumask_bits)
> > -				return i;
> > +	/* scan cluster before scanning the whole llc */
> > +#ifdef CONFIG_SCHED_CLUSTER
> > +	if (sched_cluster_active()) {
> > +		i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> > +		if ((unsigned int) i < nr_cpumask_bits) {
> > +			idle_cpu = i;
> > +			goto done;
> > +		} else if (nr <= 0)
> > +			return -1;
> >
> > -		} else {
> > -			if (!--nr)
> > -				return -1;
> > -			idle_cpu = __select_idle_cpu(cpu);
> > -			if ((unsigned int)idle_cpu < nr_cpumask_bits)
> > -				break;
> > -		}
> > +		cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> > +		cpumask_andnot(cpus, cpus, cpu_cluster_mask(target));
> >  	}
> > +#endif
> > +
> > +	i = _select_idle_cpu(smt, p, target, cpus, &idle_cpu, &nr);
> > +	if ((unsigned int) i < nr_cpumask_bits) {
> > +		idle_cpu = i;
> > +		goto done;
> > +	} else if (nr <= 0)
> > +		return -1;
> >
> >  	if (smt)
> >  		set_idle_cores(this, false);
> >
> > +done:
> >  	if (sched_feat(SIS_PROP) && !smt) {
> >  		time = cpu_clock(this) - time;
> >  		update_avg(&this_sd->avg_scan_cost, time);
> 
> And this is just horrific :-(

I was actually struggling quite a bit with this part. I had tried a
couple of approaches before sending this one, and the version I sent
was still quite ugly.
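
(For readers following the discussion, the intended scan order itself is
simple; the ugliness is in threading the SIS_PROP nr budget and the SMT
core scan through both phases. A minimal user-space sketch of just the
ordering, with hypothetical 16-bit masks standing in for cpumasks:)

#include <stdio.h>
#include <stdint.h>

/* Return the lowest CPU set in both masks, or -1 if none. */
static int scan_for_idle(uint16_t mask, uint16_t idle)
{
	for (int cpu = 0; cpu < 16; cpu++)
		if (mask & idle & (1u << cpu))
			return cpu;
	return -1;
}

int main(void)
{
	uint16_t llc = 0xffff;                 /* 16 CPUs share one LLC      */
	uint16_t cluster = 0x00f0;             /* target's cluster: CPUs 4-7 */
	uint16_t idle = (1u << 5) | (1u << 8); /* CPUs 5 and 8 are idle      */

	/* Phase 1: prefer an idle CPU inside the target's cluster. */
	int cpu = scan_for_idle(cluster, idle);
	/* Phase 2: only then fall back to the rest of the LLC. */
	if (cpu < 0)
		cpu = scan_for_idle(llc & ~cluster, idle);

	printf("picked CPU %d\n", cpu);  /* prints 5: cluster-local wins */
	return 0;
}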

Thanks
Barry


^ permalink raw reply	[flat|nested] 28+ messages in thread


end of thread, other threads:[~2021-03-16  7:35 UTC | newest]

Thread overview: 28+ messages
2021-03-01 22:59 [RFC PATCH v4 0/3] scheduler: expose the topology of clusters and add cluster scheduler Barry Song
2021-03-01 22:59 ` Barry Song
2021-03-01 22:59 ` [RFC PATCH v4 1/3] topology: Represent clusters of CPUs within a die Barry Song
2021-03-01 22:59   ` Barry Song
2021-03-15  3:11   ` Song Bao Hua (Barry Song)
2021-03-15  3:11     ` Song Bao Hua (Barry Song)
2021-03-15 10:52     ` Jonathan Cameron
2021-03-15 10:52       ` Jonathan Cameron
2021-03-01 22:59 ` [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters Barry Song
2021-03-01 22:59   ` Barry Song
2021-03-02 10:43   ` Peter Zijlstra
2021-03-02 10:43     ` Peter Zijlstra
2021-03-16  7:33     ` Song Bao Hua (Barry Song)
2021-03-16  7:33       ` Song Bao Hua (Barry Song)
2021-03-08 11:25   ` Vincent Guittot
2021-03-08 11:25     ` Vincent Guittot
2021-03-08 22:15     ` Song Bao Hua (Barry Song)
2021-03-08 22:15       ` Song Bao Hua (Barry Song)
2021-03-01 22:59 ` [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86 Barry Song
2021-03-01 22:59   ` Barry Song
2021-03-02 10:30   ` Peter Zijlstra
2021-03-02 10:30     ` Peter Zijlstra
2021-03-03 18:34     ` Tim Chen
2021-03-03 18:34       ` Tim Chen
2021-03-08 22:30       ` [Linuxarm] " Song Bao Hua (Barry Song)
2021-03-08 22:30         ` Song Bao Hua (Barry Song)
2021-03-15 20:53         ` Tim Chen
2021-03-15 20:53           ` Tim Chen
