linux-kernel.vger.kernel.org archive mirror
* [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore
@ 2018-09-20 17:22 Gautham R. Shenoy
  2018-09-20 17:22 ` [PATCH v8 1/3] powerpc: Detect the presence of big-cores via "ibm,thread-groups" Gautham R. Shenoy
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Gautham R. Shenoy @ 2018-09-20 17:22 UTC (permalink / raw)
  To: Aneesh Kumar K.V, Srikar Dronamraju, Michael Ellerman,
	Benjamin Herrenschmidt, Michael Neuling, Vaidyanathan Srinivasan,
	Akshay Adiga, Shilpasri G Bhat, Oliver O'Halloran,
	Nicholas Piggin, Murilo Opsfelder Araujo, Anton Blanchard
  Cc: linuxppc-dev, linux-kernel, Gautham R. Shenoy

From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>

Hi,

This is the eighth iteration of the patchset to add support for
big-cores on POWER9. The patchset also optimizes task placement on
such big-core systems.

The previous versions can be found here:

v7: https://lkml.org/lkml/2018/8/20/52
v6: https://lkml.org/lkml/2018/8/9/119
v5: https://lkml.org/lkml/2018/8/6/587
v4: https://lkml.org/lkml/2018/7/24/79
v3: https://lkml.org/lkml/2018/7/6/255
v2: https://lkml.org/lkml/2018/7/3/401
v1: https://lkml.org/lkml/2018/5/11/245

Changes:

v7 --> v8:
   - Reorganized the patch series into three patches:

     	- The first patch discovers the big-cores and initializes a
     	  per-cpu cpumask with the small-core siblings of each CPU.

        - The second patch uses the small-core siblings at the SMT
     	  level sched-domain on big-core systems and also activates
     	  the CACHE domain that corresponds to the big-core, where
     	  all the threads share the L2 cache.

	- The third patch creates a pair of sysfs attributes named
	  /sys/devices/system/cpu/cpuN/topology/smallcore_thread_siblings
	  and
	  /sys/devices/system/cpu/cpuN/topology/smallcore_thread_siblings_list

   - The third patch addresses Michael Neuling's review comment for
     the previous iteration.

Description:
~~~~~~~~~~~~~~~~~~~~
A pair of IBM POWER9 SMT4 cores can be fused together to form a
big-core with 8 SMT threads. This can be discovered via the
"ibm,thread-groups" CPU property in the device tree, which indicates
which groups of threads share the L1 cache, the translation cache and
the instruction data flow. If there are multiple such groups of
threads, then the core is a big-core. Furthermore, on POWER9 the
thread-ids of such a big-core are obtained by interleaving the
thread-ids of the component SMT4 cores.

Eg: Threads in the pair of component SMT4 cores of an interleaved
big-core are numbered {0,2,4,6} and {1,3,5,7} respectively.

 	   -------------------------
	   |  	    L1 Cache       |
       ----------------------------------
       |L2|     |     |     |      |
       |  |  0  |  2  |  4  |  6   |Small Core0
       |C |     |     |     |      |
Big    |a --------------------------
Core   |c |     |     |     |      |
       |h |  1  |  3  |  5  |  7   | Small Core1
       |e |     |     |     |      |
       -----------------------------
	  |  	    L1 Cache       |
	  --------------------------
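
For the interleaved big-core above, the relevant device-tree fragment
could look roughly as follows (a sketch for illustration only; the
node layout and the interrupt-server numbers are assumptions, and the
encoding follows the description of the property in Patch 1 below):

	cpu@0 {
		...
		ibm,ppc-interrupt-server#s = <0 1 2 3 4 5 6 7>;
		/* [0] = 1 : grouping property (shared L1/translation cache)
		 * [1] = 2 : number of thread groups
		 * [2] = 4 : threads per group
		 * [3..10] : the two thread lists {0,2,4,6} and {1,3,5,7}
		 */
		ibm,thread-groups = <1 2 4 0 2 4 6 1 3 5 7>;
	};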

On such a big-core system, when multiple tasks are scheduled to run on
the big-core, we get the best performance when the tasks are spread
across the pair of SMT4 cores.

Eg: Suppose 4 tasks {p1, p2, p3, p4} are run on a big-core. Then:

An Example of Optimal Task placement:
	   --------------------------
           |     |     |     |      |
           |  0  |  2  |  4  |  6   |   Small Core0
           | (p1)| (p2)|     |      |
Big Core   --------------------------
           |     |     |     |      |
           |  1  |  3  |  5  |  7   |   Small Core1
           |     | (p3)|     | (p4) |
           --------------------------

An example of Suboptimal Task placement:
	   --------------------------
           |     |     |     |      |
           |  0  |  2  |  4  |  6   |   Small Core0
           | (p1)| (p2)|     |  (p4)|
Big Core   --------------------------
           |     |     |     |      |
           |  1  |  3  |  5  |  7   |   Small Core1
           |     | (p3)|     |      |
           --------------------------

In order to achieve optimal task placement on big-core systems, we
define the SMT level sched-domain to consist of the threads belonging
to a small core. The CACHE level sched-domain will consist of all the
threads belonging to the big-core. With this, the Linux kernel
load-balancer will ensure that the tasks are spread across all the
component small cores in the system, thereby yielding optimum
performance.

Furthermore, this solution works correctly across all SMT modes
(8,4,2), as the interleaved thread-ids ensure that when we go to the
lower SMT modes (4,2) the threads are offlined in descending order,
thereby leaving an equal number of threads from each component small
core online, as illustrated below.
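
For instance, switching to SMT=4 offlines the upper half of the
interleaved thread-ids, leaving two threads of each small core online
(a sketch, assuming a single big-core numbered as above):

	# ppc64_cpu --smt=4
	# cat /sys/devices/system/cpu/online
	0-3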

With Patches: (ppc64_cpu --smt=on) : SMT domain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 CPU0 attaching sched-domain(s):
  domain-0: span=0,2,4,6 level=SMT
   groups: 0:{ span=0 cap=294 }, 2:{ span=2 cap=294 },
           4:{ span=4 cap=294 }, 6:{ span=6 cap=294 }
 CPU1 attaching sched-domain(s):
  domain-0: span=1,3,5,7 level=SMT
   groups: 1:{ span=1 cap=294 }, 3:{ span=3 cap=294 },
           5:{ span=5 cap=294 }, 7:{ span=7 cap=294 }

            Optimal Task placement (SMT 8)
	   --------------------------
           |     |     |     |      |
           |  0  |  2  |  4  |  6   |   Small Core0
           | (p1)| (p2)|     |      |
Big Core   --------------------------
           |     |     |     |      |
           |  1  |  3  |  5  |  7   |   Small Core1
           |     | (p3)|     | (p4) |
           --------------------------

With Patches : (ppc64_cpu --smt=4) : SMT domain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 CPU0 attaching sched-domain(s):
  domain-0: span=0,2 level=SMT
   groups: 0:{ span=0 cap=589 }, 2:{ span=2 cap=589 }
 CPU1 attaching sched-domain(s):
  domain-0: span=1,3 level=SMT
   groups: 1:{ span=1 cap=589 }, 3:{ span=3 cap=589 }

            Optimal Task placement (SMT 4)
	   --------------------------
           |     |     |     |      |
           |  0  |  2  |  4  |  6   |   Small Core0
           | (p1)| (p2)| Off | Off  |
Big Core   --------------------------
           |     |     |     |      |
           |  1  |  3  |  5  |  7   |   Small Core1
           | (p4)| (p3)| Off | Off  |
           --------------------------

With Patches : (ppc64_cpu --smt=2) : SMT domain ceases to exist.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Optimal Task placement (SMT 2)
	   --------------------------
           | (p2)|     |     |      |
           |  0  |  2  |  4  |  6   |   Small Core0
           | (p1)| Off | Off | Off  |
Big Core   --------------------------
           | (p3)|     |     |      |
           |  1  |  3  |  5  |  7   |   Small Core1
           | (p4)| Off | Off | Off  |
           --------------------------

Thus, as an added advantage in SMT=2 mode, we will only have 3 levels
in the sched-domain topology (CACHE, DIE and NUMA).

The SMT level sched-domains without the patches are as follows.

Without Patches: (ppc64_cpu --smt=on) : SMT domain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 CPU0 attaching sched-domain(s):
  domain-0: span=0-7 level=SMT
   groups: 0:{ span=0 cap=147 }, 1:{ span=1 cap=147 },
           2:{ span=2 cap=147 }, 3:{ span=3 cap=147 },
           4:{ span=4 cap=147 }, 5:{ span=5 cap=147 },
	   6:{ span=6 cap=147 }, 7:{ span=7 cap=147 }
 CPU1 attaching sched-domain(s):
  domain-0: span=0-7 level=SMT
   groups: 1:{ span=1 cap=147 }, 2:{ span=2 cap=147 },
           3:{ span=3 cap=147 }, 4:{ span=4 cap=147 },
	   5:{ span=5 cap=147 }, 6:{ span=6 cap=147 },
	   7:{ span=7 cap=147 }, 0:{ span=0 cap=147 }

Without Patches: (ppc64_cpu --smt=4) : SMT domain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 CPU0 attaching sched-domain(s):
  domain-0: span=0-3 level=SMT
   groups: 0:{ span=0 cap=294 }, 1:{ span=1 cap=294 },
           2:{ span=2 cap=294 }, 3:{ span=3 cap=294 },
 CPU1 attaching sched-domain(s):
  domain-0: span=0-3 level=SMT
   groups: 1:{ span=1 cap=294 }, 2:{ span=2 cap=294 },
           3:{ span=3 cap=294 }, 0:{ span=0 cap=294 }

Without Patches: (ppc64_cpu --smt=2) : SMT domain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 CPU0 attaching sched-domain(s):
  domain-0: span=0-1 level=SMT
   groups: 0:{ span=0 cap=589 }, 1:{ span=1 cap=589 },

 CPU1 attaching sched-domain(s):
  domain-0: span=0-1 level=SMT
   groups: 1:{ span=1 cap=589 }, 0:{ span=0 cap=589 },

This patchset contains three patches which, on detecting the presence
of big-cores, define the SMT level sched-domain to correspond to the
threads of the small cores.

Patch 1: Adds support to detect the presence of big-cores via
"ibm,thread-groups" and records the small-core siblings of each CPU.

Patch 2: Defines the SMT level sched-domain to correspond to the
threads of the small cores.

Patch 3: Adds the sysfs attributes
/sys/devices/system/cpu/cpuN/topology/smallcore_thread_siblings and
/sys/devices/system/cpu/cpuN/topology/smallcore_thread_siblings_list
that report the small-core siblings of each CPU N.

Results:
~~~~~~~~~~~~~~~~~
1) 2 thread ebizzy
~~~~~~~~~~~~~~~~~~~~~~
Experimental results for ebizzy with 2 threads, bound to a single
big-core, show a marked improvement with this patchset over the
4.19-rc4 vanilla kernel.
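
Each run was invoked along the following lines (a sketch; the CPU
list assumes the interleaved numbering above, and the thread count
and duration options are illustrative, not the exact invocation used
for the numbers below):

	$ taskset -c 0-7 ebizzy -t 2 -S 60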

The results of 100 such runs for the 4.19-rc4 kernel and for
4.19-rc4 + big-core-smt-patches are as follows:

4.19.0-rc4 vanilla
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        records/s    :  # samples  : Histogram
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[0000000 - 1000000]  :      0      : #
[1000000 - 2000000]  :      1      : #
[2000000 - 3000000]  :      2      : #
[3000000 - 4000000]  :      17     : ####
[4000000 - 5000000]  :      9      : ##
[5000000 - 6000000]  :      5      : ##
[6000000 - 7000000]  :      66     : ##############
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

4.19-rc4 + big-core-patches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        records/s    :  # samples  : Histogram
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[0000000 - 1000000]  :      0      : #
[1000000 - 2000000]  :      0      : #
[2000000 - 3000000]  :      5      : ##
[3000000 - 4000000]  :      9      : ##
[4000000 - 5000000]  :      0      : #
[5000000 - 6000000]  :      2      : #
[6000000 - 7000000]  :      84     : #################
=================================================

2) Hackbench (perf bench sched pipe)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
500 iterations of hackbench were run on both the 4.19-rc4 vanilla
kernel and 4.19-rc4 + big-core-smt-patches. There isn't a significant
difference between the two.
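
Each iteration corresponds to an invocation along these lines (a
sketch; the loop count shown is perf's default, not necessarily the
value used for the numbers below):

	$ perf bench sched pipe -l 1000000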

The values for Min, Max, Median and Avg below are in seconds. Lower
is better.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
			4.19-rc4 vanilla
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    N           Min           Max        Median           Avg        Stddev
  500         4.603         9.438         6.165      5.921446    0.47448034
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
			4.19-rc4 + big-core-patches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    N           Min           Max        Median           Avg        Stddev
  500         4.532         6.476         6.224      5.982098    0.43021891
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Gautham R. Shenoy (3):
  powerpc: Detect the presence of big-cores via "ibm,thread-groups"
  powerpc: Use cpu_smallcore_sibling_mask at SMT level on bigcores
  powerpc/sysfs: Add topology/smallcore_thread_siblings[_list]

 Documentation/ABI/testing/sysfs-devices-system-cpu |  14 ++
 arch/powerpc/include/asm/cputhreads.h              |   2 +
 arch/powerpc/include/asm/smp.h                     |   6 +
 arch/powerpc/kernel/smp.c                          | 240 ++++++++++++++++++++-
 arch/powerpc/kernel/sysfs.c                        |  88 ++++++++
 5 files changed, 349 insertions(+), 1 deletion(-)

-- 
1.9.4



* [PATCH v8 1/3] powerpc: Detect the presence of big-cores via "ibm,thread-groups"
  2018-09-20 17:22 [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore Gautham R. Shenoy
@ 2018-09-20 17:22 ` Gautham R. Shenoy
  2018-09-21  3:02   ` Michael Neuling
  2018-09-20 17:22 ` [PATCH v8 2/3] powerpc: Use cpu_smallcore_sibling_mask at SMT level on bigcores Gautham R. Shenoy
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Gautham R. Shenoy @ 2018-09-20 17:22 UTC (permalink / raw)
  To: Aneesh Kumar K.V, Srikar Dronamraju, Michael Ellerman,
	Benjamin Herrenschmidt, Michael Neuling, Vaidyanathan Srinivasan,
	Akshay Adiga, Shilpasri G Bhat, Oliver O'Halloran,
	Nicholas Piggin, Murilo Opsfelder Araujo, Anton Blanchard
  Cc: linuxppc-dev, linux-kernel, Gautham R. Shenoy

From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>

On IBM POWER9, the device tree exposes a property array identified by
"ibm,thread-groups" which indicates which groups of threads share a
particular set of resources.

As of today we only have one form of grouping, identifying the group
of threads in the core that share the L1 cache, the translation cache
and the instruction data flow.

This patch

     1) Defines the helper function to parse the contents of
     "ibm,thread-groups".

     2) On boot, it parses the "ibm,thread-groups" property and caches
     the CPU-threads sharing the L1 cache in a per-cpu variable named
     cpu_l1_cache_map.

     3) Initializes a global variable named "has_big_cores" on
     big-core systems.

     4) Each time a CPU is onlined, it initializes the
     cpu_smallcore_mask which contains the online siblings of the
     CPU that share the L1 cache with this CPU.

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/cputhreads.h |   2 +
 arch/powerpc/include/asm/smp.h        |   6 +
 arch/powerpc/kernel/smp.c             | 221 ++++++++++++++++++++++++++++++++++
 3 files changed, 229 insertions(+)

diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
index d71a909..deb99fd 100644
--- a/arch/powerpc/include/asm/cputhreads.h
+++ b/arch/powerpc/include/asm/cputhreads.h
@@ -23,11 +23,13 @@
 extern int threads_per_core;
 extern int threads_per_subcore;
 extern int threads_shift;
+extern bool has_big_cores;
 extern cpumask_t threads_core_mask;
 #else
 #define threads_per_core	1
 #define threads_per_subcore	1
 #define threads_shift		0
+#define has_big_cores		0
 #define threads_core_mask	(*get_cpu_mask(0))
 #endif
 
diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
index 95b66a0..4439893 100644
--- a/arch/powerpc/include/asm/smp.h
+++ b/arch/powerpc/include/asm/smp.h
@@ -100,6 +100,7 @@ static inline void set_hard_smp_processor_id(int cpu, int phys)
 DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_map);
 DECLARE_PER_CPU(cpumask_var_t, cpu_l2_cache_map);
 DECLARE_PER_CPU(cpumask_var_t, cpu_core_map);
+DECLARE_PER_CPU(cpumask_var_t, cpu_smallcore_map);
 
 static inline struct cpumask *cpu_sibling_mask(int cpu)
 {
@@ -116,6 +117,11 @@ static inline struct cpumask *cpu_l2_cache_mask(int cpu)
 	return per_cpu(cpu_l2_cache_map, cpu);
 }
 
+static inline struct cpumask *cpu_smallcore_mask(int cpu)
+{
+	return per_cpu(cpu_smallcore_map, cpu);
+}
+
 extern int cpu_to_core_id(int cpu);
 
 /* Since OpenPIC has only 4 IPIs, we use slightly different message numbers.
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 61c1fad..15095110 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -74,14 +74,32 @@
 #endif
 
 struct thread_info *secondary_ti;
+bool has_big_cores;
 
 DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map);
+DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_map);
 DEFINE_PER_CPU(cpumask_var_t, cpu_l2_cache_map);
 DEFINE_PER_CPU(cpumask_var_t, cpu_core_map);
 
 EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
 EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map);
 EXPORT_PER_CPU_SYMBOL(cpu_core_map);
+EXPORT_SYMBOL_GPL(has_big_cores);
+
+#define MAX_THREAD_LIST_SIZE	8
+#define THREAD_GROUP_SHARE_L1   1
+struct thread_groups {
+	unsigned int property;
+	unsigned int nr_groups;
+	unsigned int threads_per_group;
+	unsigned int thread_list[MAX_THREAD_LIST_SIZE];
+};
+
+/*
+ * On big-core systems, the cpu_l1_cache_map for each CPU corresponds
+ * to the set of its siblings that share the L1-cache.
+ */
+DEFINE_PER_CPU(cpumask_var_t, cpu_l1_cache_map);
 
 /* SMP operations for this machine */
 struct smp_ops_t *smp_ops;
@@ -674,6 +692,184 @@ static void set_cpus_unrelated(int i, int j,
 }
 #endif
 
+/*
+ * parse_thread_groups: Parses the "ibm,thread-groups" device tree
+ *                      property for the CPU device node @dn and stores
+ *                      the parsed output in the thread_groups
+ *                      structure @tg if the ibm,thread-groups[0]
+ *                      matches @property.
+ *
+ * @dn: The device node of the CPU device.
+ * @tg: Pointer to a thread group structure into which the parsed
+ *      output of "ibm,thread-groups" is stored.
+ * @property: The property of the thread-group that the caller is
+ *            interested in.
+ *
+ * ibm,thread-groups[0..N-1] array defines which group of threads in
+ * the CPU-device node can be grouped together based on the property.
+ *
+ * ibm,thread-groups[0] tells us the property based on which the
+ * threads are being grouped together. If this value is 1, it implies
+ * that the threads in the same group share the L1 and translation cache.
+ *
+ * ibm,thread-groups[1] tells us how many such thread groups exist.
+ *
+ * ibm,thread-groups[2] tells us the number of threads in each such
+ * group.
+ *
+ * ibm,thread-groups[3..N-1] is the list of threads identified by
+ * "ibm,ppc-interrupt-server#s" arranged as per their membership in
+ * the grouping.
+ *
+ * Example: If ibm,thread-groups = [1,2,4,5,6,7,8,9,10,11,12] it
+ * implies that there are 2 groups of 4 threads each, where each group
+ * of threads share L1, translation cache.
+ *
+ * The "ibm,ppc-interrupt-server#s" of the first group is {5,6,7,8}
+ * and the "ibm,ppc-interrupt-server#s" of the second group is
+ * {9, 10, 11, 12}.
+ *
+ * Returns 0 on success, -EINVAL if the property does not exist,
+ * -ENODATA if property does not have a value, and -EOVERFLOW if the
+ * property data isn't large enough.
+ */
+static int parse_thread_groups(struct device_node *dn,
+			       struct thread_groups *tg,
+			       unsigned int property)
+{
+	int i;
+	u32 thread_group_array[3 + MAX_THREAD_LIST_SIZE];
+	u32 *thread_list;
+	size_t total_threads;
+	int ret;
+
+	ret = of_property_read_u32_array(dn, "ibm,thread-groups",
+					 thread_group_array, 3);
+	if (ret)
+		return ret;
+
+	tg->property = thread_group_array[0];
+	tg->nr_groups = thread_group_array[1];
+	tg->threads_per_group = thread_group_array[2];
+	if (tg->property != property ||
+	    tg->nr_groups < 1 ||
+	    tg->threads_per_group < 1)
+		return -ENODATA;
+
+	total_threads = tg->nr_groups * tg->threads_per_group;
+
+	ret = of_property_read_u32_array(dn, "ibm,thread-groups",
+					 thread_group_array,
+					 3 + total_threads);
+	if (ret)
+		return ret;
+
+	thread_list = &thread_group_array[3];
+
+	for (i = 0 ; i < total_threads; i++)
+		tg->thread_list[i] = thread_list[i];
+
+	return 0;
+}
+
+/*
+ * get_cpu_thread_group_start : Searches the thread group in tg->thread_list
+ *                              that @cpu belongs to.
+ *
+ * @cpu : The logical CPU whose thread group is being searched.
+ * @tg : The thread-group structure of the CPU node which @cpu belongs
+ *       to.
+ *
+ * Returns the index to tg->thread_list that points to the start
+ * of the thread_group that @cpu belongs to.
+ *
+ * Returns -1 if cpu doesn't belong to any of the groups pointed to by
+ * tg->thread_list.
+ */
+static int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
+{
+	int hw_cpu_id = get_hard_smp_processor_id(cpu);
+	int i, j;
+
+	for (i = 0; i < tg->nr_groups; i++) {
+		int group_start = i * tg->threads_per_group;
+
+		for (j = 0; j < tg->threads_per_group; j++) {
+			int idx = group_start + j;
+
+			if (tg->thread_list[idx] == hw_cpu_id)
+				return group_start;
+		}
+	}
+
+	return -1;
+}
+
+static int init_cpu_l1_cache_map(int cpu)
+
+{
+	struct device_node *dn = of_get_cpu_node(cpu, NULL);
+	struct thread_groups tg;
+
+	int first_thread = cpu_first_thread_sibling(cpu);
+	int i, cpu_group_start = -1, err = 0;
+
+	if (!dn)
+		return -ENODATA;
+
+	err = parse_thread_groups(dn, &tg, THREAD_GROUP_SHARE_L1);
+	if (err)
+		goto out;
+
+	zalloc_cpumask_var_node(&per_cpu(cpu_l1_cache_map, cpu),
+				GFP_KERNEL,
+				cpu_to_node(cpu));
+
+	cpu_group_start = get_cpu_thread_group_start(cpu, &tg);
+
+	if (unlikely(cpu_group_start == -1)) {
+		WARN_ON_ONCE(1);
+		err = -ENODATA;
+		goto out;
+	}
+
+	for (i = first_thread; i < first_thread + threads_per_core; i++) {
+		int i_group_start = get_cpu_thread_group_start(i, &tg);
+
+		if (unlikely(i_group_start == -1)) {
+			WARN_ON_ONCE(1);
+			err = -ENODATA;
+			goto out;
+		}
+
+		if (i_group_start == cpu_group_start)
+			cpumask_set_cpu(i, per_cpu(cpu_l1_cache_map, cpu));
+	}
+
+out:
+	of_node_put(dn);
+	return err;
+}
+
+static int init_big_cores(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		int err = init_cpu_l1_cache_map(cpu);
+
+		if (err)
+			return err;
+
+		zalloc_cpumask_var_node(&per_cpu(cpu_smallcore_map, cpu),
+					GFP_KERNEL,
+					cpu_to_node(cpu));
+	}
+
+	has_big_cores = true;
+	return 0;
+}
+
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
 	unsigned int cpu;
@@ -712,6 +908,12 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	cpumask_set_cpu(boot_cpuid, cpu_l2_cache_mask(boot_cpuid));
 	cpumask_set_cpu(boot_cpuid, cpu_core_mask(boot_cpuid));
 
+	init_big_cores();
+	if (has_big_cores) {
+		cpumask_set_cpu(boot_cpuid,
+				cpu_smallcore_mask(boot_cpuid));
+	}
+
 	if (smp_ops && smp_ops->probe)
 		smp_ops->probe();
 }
@@ -995,10 +1197,28 @@ static void remove_cpu_from_masks(int cpu)
 		set_cpus_unrelated(cpu, i, cpu_core_mask);
 		set_cpus_unrelated(cpu, i, cpu_l2_cache_mask);
 		set_cpus_unrelated(cpu, i, cpu_sibling_mask);
+		if (has_big_cores)
+			set_cpus_unrelated(cpu, i, cpu_smallcore_mask);
 	}
 }
 #endif
 
+static inline void add_cpu_to_smallcore_masks(int cpu)
+{
+	struct cpumask *this_l1_cache_map = per_cpu(cpu_l1_cache_map, cpu);
+	int i, first_thread = cpu_first_thread_sibling(cpu);
+
+	if (!has_big_cores)
+		return;
+
+	cpumask_set_cpu(cpu, cpu_smallcore_mask(cpu));
+
+	for (i = first_thread; i < first_thread + threads_per_core; i++) {
+		if (cpu_online(i) && cpumask_test_cpu(i, this_l1_cache_map))
+			set_cpus_related(i, cpu, cpu_smallcore_mask);
+	}
+}
+
 static void add_cpu_to_masks(int cpu)
 {
 	int first_thread = cpu_first_thread_sibling(cpu);
@@ -1015,6 +1235,7 @@ static void add_cpu_to_masks(int cpu)
 		if (cpu_online(i))
 			set_cpus_related(i, cpu, cpu_sibling_mask);
 
+	add_cpu_to_smallcore_masks(cpu);
 	/*
 	 * Copy the thread sibling mask into the cache sibling mask
 	 * and mark any CPUs that share an L2 with this CPU.
-- 
1.9.4



* [PATCH v8 2/3] powerpc: Use cpu_smallcore_sibling_mask at SMT level on bigcores
  2018-09-20 17:22 [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore Gautham R. Shenoy
  2018-09-20 17:22 ` [PATCH v8 1/3] powerpc: Detect the presence of big-cores via "ibm,thread-groups" Gautham R. Shenoy
@ 2018-09-20 17:22 ` Gautham R. Shenoy
  2018-09-20 17:22 ` [PATCH v8 3/3] powerpc/sysfs: Add topology/smallcore_thread_siblings[_list] Gautham R. Shenoy
  2018-09-20 18:04 ` [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore Dave Hansen
  3 siblings, 0 replies; 13+ messages in thread
From: Gautham R. Shenoy @ 2018-09-20 17:22 UTC (permalink / raw)
  To: Aneesh Kumar K.V, Srikar Dronamraju, Michael Ellerman,
	Benjamin Herrenschmidt, Michael Neuling, Vaidyanathan Srinivasan,
	Akshay Adiga, Shilpasri G Bhat, Oliver O'Halloran,
	Nicholas Piggin, Murilo Opsfelder Araujo, Anton Blanchard
  Cc: linuxppc-dev, linux-kernel, Gautham R. Shenoy

From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>

Each of the SMT4 cores forming a big-core is a more or less
independent unit. Thus, when multiple tasks are scheduled to run on
the big-core, we get the best performance when the tasks are spread
across the pair of SMT4 cores.

This patch achieves this by setting the SMT level mask to correspond
to the smallcore sibling mask on big-core systems. This patch also
ensures that the CACHE level sched-domain corresponding to the
big-core is created on big-core systems.

With this patch, the SMT sched-domains with SMT=8,4,2 on big-core
systems are as follows:

1) ppc64_cpu --smt=8

 CPU0 attaching sched-domain(s):
  domain-0: span=0,2,4,6 level=SMT
   groups: 0:{ span=0 cap=294 }, 2:{ span=2 cap=294 },
           4:{ span=4 cap=294 }, 6:{ span=6 cap=294 }
 CPU1 attaching sched-domain(s):
  domain-0: span=1,3,5,7 level=SMT
   groups: 1:{ span=1 cap=294 }, 3:{ span=3 cap=294 },
           5:{ span=5 cap=294 }, 7:{ span=7 cap=294 }

2) ppc64_cpu --smt=4

 CPU0 attaching sched-domain(s):
  domain-0: span=0,2 level=SMT
   groups: 0:{ span=0 cap=589 }, 2:{ span=2 cap=589 }
 CPU1 attaching sched-domain(s):
  domain-0: span=1,3 level=SMT
   groups: 1:{ span=1 cap=589 }, 3:{ span=3 cap=589 }

3) ppc64_cpu --smt=2
   SMT domain is a trivial domain consisting of just
   1 CPU. Hence this domain gets collapsed leaving only CACHE, DIE and
   NUMA domains.
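
The resulting hierarchy can be inspected at runtime (a sketch; this
interface requires CONFIG_SCHED_DEBUG, and the output shown assumes
the SMT=8 case above):

	$ cat /proc/sys/kernel/sched_domain/cpu0/domain0/name
	SMT
	$ cat /proc/sys/kernel/sched_domain/cpu0/domain1/name
	CACHE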

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/smp.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 15095110..5cdcf44 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1265,6 +1265,7 @@ static void add_cpu_to_masks(int cpu)
 void start_secondary(void *unused)
 {
 	unsigned int cpu = smp_processor_id();
+	struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask;
 
 	mmgrab(&init_mm);
 	current->active_mm = &init_mm;
@@ -1290,11 +1291,13 @@ void start_secondary(void *unused)
 	/* Update topology CPU masks */
 	add_cpu_to_masks(cpu);
 
+	if (has_big_cores)
+		sibling_mask = cpu_smallcore_mask;
 	/*
 	 * Check for any shared caches. Note that this must be done on a
 	 * per-core basis because one core in the pair might be disabled.
 	 */
-	if (!cpumask_equal(cpu_l2_cache_mask(cpu), cpu_sibling_mask(cpu)))
+	if (!cpumask_equal(cpu_l2_cache_mask(cpu), sibling_mask(cpu)))
 		shared_caches = true;
 
 	set_numa_node(numa_cpu_lookup_table[cpu]);
@@ -1361,6 +1364,13 @@ static const struct cpumask *shared_cache_mask(int cpu)
 	return cpu_l2_cache_mask(cpu);
 }
 
+#ifdef CONFIG_SCHED_SMT
+static const struct cpumask *smallcore_smt_mask(int cpu)
+{
+	return cpu_smallcore_mask(cpu);
+}
+#endif
+
 static struct sched_domain_topology_level power9_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
@@ -1388,6 +1398,13 @@ void __init smp_cpus_done(unsigned int max_cpus)
 	shared_proc_topology_init();
 	dump_numa_cpu_topology();
 
+#ifdef CONFIG_SCHED_SMT
+	if (has_big_cores) {
+		pr_info("Using small cores at SMT level\n");
+		power9_topology[0].mask = smallcore_smt_mask;
+		powerpc_topology[0].mask = smallcore_smt_mask;
+	}
+#endif
 	/*
 	 * If any CPU detects that it's sharing a cache with another CPU then
 	 * use the deeper topology that is aware of this sharing.
-- 
1.9.4



* [PATCH v8 3/3] powerpc/sysfs: Add topology/smallcore_thread_siblings[_list]
  2018-09-20 17:22 [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore Gautham R. Shenoy
  2018-09-20 17:22 ` [PATCH v8 1/3] powerpc: Detect the presence of big-cores via "ibm,thread-groups" Gautham R. Shenoy
  2018-09-20 17:22 ` [PATCH v8 2/3] powerpc: Use cpu_smallcore_sibling_mask at SMT level on bigcores Gautham R. Shenoy
@ 2018-09-20 17:22 ` Gautham R. Shenoy
  2018-09-21  6:20   ` kbuild test robot
  2018-09-20 18:04 ` [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore Dave Hansen
  3 siblings, 1 reply; 13+ messages in thread
From: Gautham R. Shenoy @ 2018-09-20 17:22 UTC (permalink / raw)
  To: Aneesh Kumar K.V, Srikar Dronamraju, Michael Ellerman,
	Benjamin Herrenschmidt, Michael Neuling, Vaidyanathan Srinivasan,
	Akshay Adiga, Shilpasri G Bhat, Oliver O'Halloran,
	Nicholas Piggin, Murilo Opsfelder Araujo, Anton Blanchard
  Cc: linuxppc-dev, linux-kernel, Gautham R. Shenoy

From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>

This patch adds two sysfs attributes named smallcore_thread_siblings
and smallcore_thread_siblings_list to the "topology" attribute group
for each CPU device.

The read-only attributes
/sys/devices/system/cpu/cpuN/topology/smallcore_thread_siblings and
/sys/devices/system/cpu/cpuN/topology/smallcore_thread_siblings_list
will show the online siblings of CPU N that share the L1 cache with
it on big-core configurations, in cpumask format and cpu-list format
respectively.
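
With the interleaved thread numbering from the cover letter, reading
these attributes for CPU0 would look roughly as follows (a sketch;
the printed mask width depends on the number of possible CPUs):

	$ cat /sys/devices/system/cpu/cpu0/topology/smallcore_thread_siblings_list
	0,2,4,6
	$ cat /sys/devices/system/cpu/cpu0/topology/smallcore_thread_siblings
	55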

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
---
 Documentation/ABI/testing/sysfs-devices-system-cpu | 14 ++++
 arch/powerpc/kernel/sysfs.c                        | 88 ++++++++++++++++++++++
 2 files changed, 102 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 7331822..2a80dc2 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -511,3 +511,17 @@ Description:	Control Symetric Multi Threading (SMT)
 
 			 If control status is "forceoff" or "notsupported" writes
 			 are rejected.
+
+What:		/sys/devices/system/cpu/cpu#/topology/smallcore_thread_siblings
+		/sys/devices/system/cpu/cpu#/topology/smallcore_thread_siblings_list
+Date:		Sept 2018
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Description: 	CPU topology files that describe the thread siblings of a
+		logical CPU that share the L1-cache with it on POWER9
+		big-core configurations.
+
+		smallcore_thread_siblings: internal kernel map of
+		cpu#'s hardware threads that share L1-cache with cpu#.
+
+		smallcore_thread_siblings_list: human-readable list of
+		cpu#'s hardware threads that share L1-cache with cpu#.
diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index 755dc98..f9c7d96 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -18,6 +18,7 @@
 #include <asm/smp.h>
 #include <asm/pmc.h>
 #include <asm/firmware.h>
+#include <asm/cputhreads.h>
 
 #include "cacheinfo.h"
 #include "setup.h"
@@ -714,6 +715,62 @@ static void sysfs_create_dscr_default(void)
 #endif /* HAS_PPC_PMC_PA6T */
 #endif /* HAS_PPC_PMC_CLASSIC */
 
+static ssize_t smallcore_thread_siblings_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	int cpu = dev->id;
+
+	return cpumap_print_to_pagebuf(false, buf, cpu_smallcore_mask(cpu));
+}
+static DEVICE_ATTR_RO(smallcore_thread_siblings);
+
+static ssize_t smallcore_thread_siblings_list_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	int cpu = dev->id;
+
+	return cpumap_print_to_pagebuf(true, buf, cpu_smallcore_mask(cpu));
+}
+static DEVICE_ATTR_RO(smallcore_thread_siblings_list);
+
+static struct attribute *smallcore_attrs[] = {
+	&dev_attr_smallcore_thread_siblings.attr,
+	&dev_attr_smallcore_thread_siblings_list.attr,
+	NULL
+};
+
+static const struct attribute_group smallcore_attr_group = {
+	.name = "topology",
+	.attrs = smallcore_attrs
+};
+
+static int smallcore_register_cpu_online(unsigned int cpu)
+{
+	int err;
+	struct device *cpu_dev = get_cpu_device(cpu);
+
+	if (!has_big_cores)
+		return 0;
+
+	err = sysfs_merge_group(&cpu_dev->kobj, &smallcore_attr_group);
+
+	return err;
+}
+
+static int smallcore_unregister_cpu_online(unsigned int cpu)
+{
+	struct device *cpu_dev = get_cpu_device(cpu);
+
+	if (!has_big_cores)
+		return 0;
+
+	sysfs_unmerge_group(&cpu_dev->kobj, &smallcore_attr_group);
+
+	return 0;
+}
+
 static int register_cpu_online(unsigned int cpu)
 {
 	struct cpu *c = &per_cpu(cpu_devices, cpu);
@@ -1060,3 +1117,34 @@ static int __init topology_init(void)
 	return 0;
 }
 subsys_initcall(topology_init);
+
+/*
+ * NOTE: The smallcore_register_cpu_online
+ *       (resp. smallcore_unregister_cpu_online) callback will merge
+ *       (resp. unmerge) a couple of additional attributes to the
+ *       "topology" attribute group of a CPU device when the CPU comes
+ *       online (resp. goes offline).
+ *
+ *       Hence, the registration of these callbacks must happen after
+ *       topology_sysfs_init() is called so that the topology
+ *       attribute group is created before these additional attributes
+ *       can be merged/unmerged. We cannot register these callbacks in
+ *       topology_init() since this function is called before
+ *       topology_sysfs_init(). Hence we define the following
+ *       late_initcall for this purpose.
+ */
+static int __init smallcore_topology_init(void)
+{
+	int r;
+
+	if (!has_big_cores)
+		return 0;
+
+	r = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
+			      "powerpc/topology/smallcore:online",
+			      smallcore_register_cpu_online,
+			      smallcore_unregister_cpu_online);
+	WARN_ON(r < 0);
+	return 0;
+}
+late_initcall(smallcore_topology_init);
-- 
1.9.4



* Re: [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore
  2018-09-20 17:22 [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore Gautham R. Shenoy
                   ` (2 preceding siblings ...)
  2018-09-20 17:22 ` [PATCH v8 3/3] powerpc/sysfs: Add topology/smallcore_thread_siblings[_list] Gautham R. Shenoy
@ 2018-09-20 18:04 ` Dave Hansen
  2018-09-22 11:03   ` Gautham R Shenoy
  3 siblings, 1 reply; 13+ messages in thread
From: Dave Hansen @ 2018-09-20 18:04 UTC (permalink / raw)
  To: Gautham R. Shenoy, Aneesh Kumar K.V, Srikar Dronamraju,
	Michael Ellerman, Benjamin Herrenschmidt, Michael Neuling,
	Vaidyanathan Srinivasan, Akshay Adiga, Shilpasri G Bhat,
	Oliver O'Halloran, Nicholas Piggin, Murilo Opsfelder Araujo,
	Anton Blanchard
  Cc: linuxppc-dev, linux-kernel

On 09/20/2018 10:22 AM, Gautham R. Shenoy wrote:
>  	   -------------------------
> 	   |  	    L1 Cache       |
>        ----------------------------------
>        |L2|     |     |     |      |
>        |  |  0  |  2  |  4  |  6   |Small Core0
>        |C |     |     |     |      |
> Big    |a --------------------------
> Core   |c |     |     |     |      |
>        |h |  1  |  3  |  5  |  7   | Small Core1
>        |e |     |     |     |      |
>        -----------------------------
> 	  |  	    L1 Cache       |
> 	  --------------------------

The scheduler already knows about shared caches.  Could you elaborate on
how this is different from the situation today where we have multiple
cores sharing an L2/L3?

Adding the new sysfs stuff seems like overkill if that's all that you
are trying to do.


* Re: [PATCH v8 1/3] powerpc: Detect the presence of big-cores via "ibm,thread-groups"
  2018-09-20 17:22 ` [PATCH v8 1/3] powerpc: Detect the presence of big-cores via "ibm,thread-groups" Gautham R. Shenoy
@ 2018-09-21  3:02   ` Michael Neuling
  2018-09-21 17:17     ` Gautham R Shenoy
  0 siblings, 1 reply; 13+ messages in thread
From: Michael Neuling @ 2018-09-21  3:02 UTC (permalink / raw)
  To: Gautham R. Shenoy, Aneesh Kumar K.V, Srikar Dronamraju,
	Michael Ellerman, Benjamin Herrenschmidt,
	Vaidyanathan Srinivasan, Akshay Adiga, Shilpasri G Bhat,
	Oliver O'Halloran, Nicholas Piggin, Murilo Opsfelder Araujo,
	Anton Blanchard
  Cc: linuxppc-dev, linux-kernel

This doesn't compile for me with:

arch/powerpc/kernel/smp.c: In function ‘smp_prepare_cpus’:
arch/powerpc/kernel/smp.c:812:23: error: ‘tg.threads_per_group’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
  struct thread_groups tg;
                       ^
arch/powerpc/kernel/smp.c:812:23: error: ‘tg.nr_groups’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
cc1: all warnings being treated as errors
/home/mikey/src/linux-ozlabs/scripts/Makefile.build:305: recipe for target 'arch/powerpc/kernel/smp.o' failed


On Thu, 2018-09-20 at 22:52 +0530, Gautham R. Shenoy wrote:
> From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>
> 
> [snip: full patch quoted above]

* Re: [PATCH v8 3/3] powerpc/sysfs: Add topology/smallcore_thread_siblings[_list]
  2018-09-20 17:22 ` [PATCH v8 3/3] powerpc/sysfs: Add topology/smallcore_thread_siblings[_list] Gautham R. Shenoy
@ 2018-09-21  6:20   ` kbuild test robot
  2018-09-21 17:20     ` Gautham R Shenoy
  0 siblings, 1 reply; 13+ messages in thread
From: kbuild test robot @ 2018-09-21  6:20 UTC (permalink / raw)
  To: Gautham R. Shenoy
  Cc: kbuild-all, Aneesh Kumar K.V, Srikar Dronamraju,
	Michael Ellerman, Benjamin Herrenschmidt, Michael Neuling,
	Vaidyanathan Srinivasan, Akshay Adiga, Shilpasri G Bhat,
	Oliver O'Halloran, Nicholas Piggin, Murilo Opsfelder Araujo,
	Anton Blanchard, linuxppc-dev, linux-kernel, Gautham R. Shenoy


Hi Gautham,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.19-rc4 next-20180919]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Gautham-R-Shenoy/powerpc-Detection-and-scheduler-optimization-for-POWER9-bigcore/20180921-085812
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-mpc837x_mds_defconfig (attached as .config)
compiler: powerpc-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.2.0 make.cross ARCH=powerpc 

All errors (new ones prefixed by >>):

   arch/powerpc/kernel/sysfs.c: In function 'smallcore_thread_siblings_show':
>> arch/powerpc/kernel/sysfs.c:724:45: error: implicit declaration of function 'cpu_smallcore_mask'; did you mean 'cpu_all_mask'? [-Werror=implicit-function-declaration]
     return cpumap_print_to_pagebuf(false, buf, cpu_smallcore_mask(cpu));
                                                ^~~~~~~~~~~~~~~~~~
                                                cpu_all_mask
>> arch/powerpc/kernel/sysfs.c:724:45: error: passing argument 3 of 'cpumap_print_to_pagebuf' makes pointer from integer without a cast [-Werror=int-conversion]
   In file included from include/linux/rcupdate.h:44:0,
                    from include/linux/radix-tree.h:28,
                    from include/linux/idr.h:15,
                    from include/linux/kernfs.h:14,
                    from include/linux/sysfs.h:16,
                    from include/linux/kobject.h:20,
                    from include/linux/device.h:16,
                    from arch/powerpc/kernel/sysfs.c:1:
   include/linux/cpumask.h:892:1: note: expected 'const struct cpumask *' but argument is of type 'int'
    cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask)
    ^~~~~~~~~~~~~~~~~~~~~~~
   arch/powerpc/kernel/sysfs.c: In function 'smallcore_thread_siblings_list_show':
   arch/powerpc/kernel/sysfs.c:734:44: error: passing argument 3 of 'cpumap_print_to_pagebuf' makes pointer from integer without a cast [-Werror=int-conversion]
     return cpumap_print_to_pagebuf(true, buf, cpu_smallcore_mask(cpu));
                                               ^~~~~~~~~~~~~~~~~~
   In file included from include/linux/rcupdate.h:44:0,
                    from include/linux/radix-tree.h:28,
                    from include/linux/idr.h:15,
                    from include/linux/kernfs.h:14,
                    from include/linux/sysfs.h:16,
                    from include/linux/kobject.h:20,
                    from include/linux/device.h:16,
                    from arch/powerpc/kernel/sysfs.c:1:
   include/linux/cpumask.h:892:1: note: expected 'const struct cpumask *' but argument is of type 'int'
    cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask)
    ^~~~~~~~~~~~~~~~~~~~~~~
   cc1: all warnings being treated as errors

vim +724 arch/powerpc/kernel/sysfs.c

   717	
   718	static ssize_t smallcore_thread_siblings_show(struct device *dev,
   719						struct device_attribute *attr,
   720						char *buf)
   721	{
   722		int cpu = dev->id;
   723	
 > 724		return cpumap_print_to_pagebuf(false, buf, cpu_smallcore_mask(cpu));
   725	}
   726	static DEVICE_ATTR_RO(smallcore_thread_siblings);
   727	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 15057 bytes --]


* Re: [PATCH v8 1/3] powerpc: Detect the presence of big-cores via "ibm,thread-groups"
  2018-09-21  3:02   ` Michael Neuling
@ 2018-09-21 17:17     ` Gautham R Shenoy
  2018-09-23 23:48       ` Michael Neuling
  0 siblings, 1 reply; 13+ messages in thread
From: Gautham R Shenoy @ 2018-09-21 17:17 UTC (permalink / raw)
  To: Michael Neuling
  Cc: Gautham R. Shenoy, Aneesh Kumar K.V, Srikar Dronamraju,
	Michael Ellerman, Benjamin Herrenschmidt,
	Vaidyanathan Srinivasan, Akshay Adiga, Shilpasri G Bhat,
	Oliver O'Halloran, Nicholas Piggin, Murilo Opsfelder Araujo,
	Anton Blanchard, linuxppc-dev, linux-kernel

Hello Michael,

On Fri, Sep 21, 2018 at 01:02:45PM +1000, Michael Neuling wrote:
> This doesn't compile for me with:
> 
> arch/powerpc/kernel/smp.c: In function ‘smp_prepare_cpus’:
> arch/powerpc/kernel/smp.c:812:23: error: ‘tg.threads_per_group’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
>   struct thread_groups tg;
>                        ^
> arch/powerpc/kernel/smp.c:812:23: error: ‘tg.nr_groups’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
> cc1: all warnings being treated as errors
> /home/mikey/src/linux-ozlabs/scripts/Makefile.build:305: recipe for target 'arch/powerpc/kernel/smp.o' failed
>

I couldn't reproduce this error with gcc 4.8.5 or 8.1.1 with
pseries_defconfig and powernv_defconfig with CONFIG_PPC_WERROR=y.

Does the following delta patch make it work?

-----------------------------X8----------------------------------

From 6699ce20573dddd0d3d45ea79015751880740e9b Mon Sep 17 00:00:00 2001
From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>
Date: Fri, 21 Sep 2018 22:43:05 +0530
Subject: [PATCH] powerpc/smp: Initialize thread_groups local variable

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/smp.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 5cdcf44..356751e 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -809,8 +809,9 @@ static int init_cpu_l1_cache_map(int cpu)
 
 {
 	struct device_node *dn = of_get_cpu_node(cpu, NULL);
-	struct thread_groups tg;
-
+	struct thread_groups tg = {.property = 0,
+				   .nr_groups = 0,
+				   .threads_per_group = 0};
 	int first_thread = cpu_first_thread_sibling(cpu);
 	int i, cpu_group_start = -1, err = 0;
 
-- 
1.9.4

* Re: [PATCH v8 3/3] powerpc/sysfs: Add topology/smallcore_thread_siblings[_list]
  2018-09-21  6:20   ` kbuild test robot
@ 2018-09-21 17:20     ` Gautham R Shenoy
  0 siblings, 0 replies; 13+ messages in thread
From: Gautham R Shenoy @ 2018-09-21 17:20 UTC (permalink / raw)
  To: kbuild test robot
  Cc: Gautham R. Shenoy, kbuild-all, Aneesh Kumar K.V,
	Srikar Dronamraju, Michael Ellerman, Benjamin Herrenschmidt,
	Michael Neuling, Vaidyanathan Srinivasan, Akshay Adiga,
	Shilpasri G Bhat, Oliver O'Halloran, Nicholas Piggin,
	Murilo Opsfelder Araujo, Anton Blanchard, linuxppc-dev,
	linux-kernel


On Fri, Sep 21, 2018 at 02:20:15PM +0800, kbuild test robot wrote:
> Hi Gautham,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on powerpc/next]
> [also build test ERROR on v4.19-rc4 next-20180919]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Gautham-R-Shenoy/powerpc-Detection-and-scheduler-optimization-for-POWER9-bigcore/20180921-085812
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
> config: powerpc-mpc837x_mds_defconfig (attached as .config)
> compiler: powerpc-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
> reproduce:
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         GCC_VERSION=7.2.0 make.cross ARCH=powerpc 
> 
> All errors (new ones prefixed by >>):
> 
>    arch/powerpc/kernel/sysfs.c: In function 'smallcore_thread_siblings_show':
> >> arch/powerpc/kernel/sysfs.c:724:45: error: implicit declaration of function 'cpu_smallcore_mask'; did you mean 'cpu_all_mask'? [-Werror=implicit-function-declaration]
>      return cpumap_print_to_pagebuf(false, buf, cpu_smallcore_mask(cpu));
>                                                 ^~~~~~~~~~~~~~~~~~


No, the fix is not to use cpu_all_mask. smallcore_thread_siblings_show()
and the other small-core attribute functions should only be compiled
when CONFIG_SMP is enabled, since cpu_smallcore_mask() is not defined
on !SMP configs.

Will add this guard. Thanks, bot!
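
To illustrate, a minimal sketch of the intended fix (the function body
is taken from the kbuild report above; the exact placement of the guard
in arch/powerpc/kernel/sysfs.c is my assumption):

#ifdef CONFIG_SMP
/*
 * cpu_smallcore_mask() only exists on SMP configs, so the small-core
 * attributes must be compiled out on !SMP builds.
 */
static ssize_t smallcore_thread_siblings_show(struct device *dev,
					struct device_attribute *attr,
					char *buf)
{
	int cpu = dev->id;

	return cpumap_print_to_pagebuf(false, buf, cpu_smallcore_mask(cpu));
}
static DEVICE_ATTR_RO(smallcore_thread_siblings);
#endif /* CONFIG_SMP */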

>                                                 cpu_all_mask
> >> arch/powerpc/kernel/sysfs.c:724:45: error: passing argument 3 of 'cpumap_print_to_pagebuf' makes pointer from integer without a cast [-Werror=int-conversion]
>    In file included from include/linux/rcupdate.h:44:0,
>                     from include/linux/radix-tree.h:28,
>                     from include/linux/idr.h:15,
>                     from include/linux/kernfs.h:14,
>                     from include/linux/sysfs.h:16,
>                     from include/linux/kobject.h:20,
>                     from include/linux/device.h:16,
>                     from arch/powerpc/kernel/sysfs.c:1:
>    include/linux/cpumask.h:892:1: note: expected 'const struct cpumask *' but argument is of type 'int'
>     cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask)
>     ^~~~~~~~~~~~~~~~~~~~~~~
>    arch/powerpc/kernel/sysfs.c: In function 'smallcore_thread_siblings_list_show':
>    arch/powerpc/kernel/sysfs.c:734:44: error: passing argument 3 of 'cpumap_print_to_pagebuf' makes pointer from integer without a cast [-Werror=int-conversion]
>      return cpumap_print_to_pagebuf(true, buf, cpu_smallcore_mask(cpu));
>                                                ^~~~~~~~~~~~~~~~~~
>    In file included from include/linux/rcupdate.h:44:0,
>                     from include/linux/radix-tree.h:28,
>                     from include/linux/idr.h:15,
>                     from include/linux/kernfs.h:14,
>                     from include/linux/sysfs.h:16,
>                     from include/linux/kobject.h:20,
>                     from include/linux/device.h:16,
>                     from arch/powerpc/kernel/sysfs.c:1:
>    include/linux/cpumask.h:892:1: note: expected 'const struct cpumask *' but argument is of type 'int'
>     cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask)
>     ^~~~~~~~~~~~~~~~~~~~~~~
>    cc1: all warnings being treated as errors
> 
> vim +724 arch/powerpc/kernel/sysfs.c
> 
>    717	
>    718	static ssize_t smallcore_thread_siblings_show(struct device *dev,
>    719						struct device_attribute *attr,
>    720						char *buf)
>    721	{
>    722		int cpu = dev->id;
>    723	
>  > 724		return cpumap_print_to_pagebuf(false, buf, cpu_smallcore_mask(cpu));
>    725	}
>    726	static DEVICE_ATTR_RO(smallcore_thread_siblings);
>    727	
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore
  2018-09-20 18:04 ` [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore Dave Hansen
@ 2018-09-22 11:03   ` Gautham R Shenoy
  2018-09-25 22:16     ` Dave Hansen
  0 siblings, 1 reply; 13+ messages in thread
From: Gautham R Shenoy @ 2018-09-22 11:03 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Gautham R. Shenoy, Aneesh Kumar K.V, Srikar Dronamraju,
	Michael Ellerman, Benjamin Herrenschmidt, Michael Neuling,
	Vaidyanathan Srinivasan, Akshay Adiga, Shilpasri G Bhat,
	Oliver O'Halloran, Nicholas Piggin, Murilo Opsfelder Araujo,
	Anton Blanchard, linuxppc-dev, linux-kernel

Hi Dave,

On Thu, Sep 20, 2018 at 11:04:54AM -0700, Dave Hansen wrote:
> On 09/20/2018 10:22 AM, Gautham R. Shenoy wrote:
> >  	   -------------------------
> > 	   |  	    L1 Cache       |
> >        ----------------------------------
> >        |L2|     |     |     |      |
> >        |  |  0  |  2  |  4  |  6   |Small Core0
> >        |C |     |     |     |      |
> > Big    |a --------------------------
> > Core   |c |     |     |     |      |
> >        |h |  1  |  3  |  5  |  7   | Small Core1
> >        |e |     |     |     |      |
> >        -----------------------------
> > 	  |  	    L1 Cache       |
> > 	  --------------------------
> 
> The scheduler already knows about shared caches.  Could you elaborate on
> how this is different from the situation today where we have multiple
> cores sharing an L2/L3?

The issue is not so much that the threads in the core share the L2
cache, but that there are two groups of threads in the core, each of
which has its own L1 cache. This patchset (the second patch in the
series) informs the scheduler of this distinction by defining the SMT
sched-domain so that its groups correspond to the threads that share
an L1 cache. With this, the scheduler will treat a pair of threads
{1,3} differently from {1,2}, since threads 1 and 3 share the L1
cache while threads 1 and 2 don't.

The next sched-domain (the CACHE domain) is defined as the group of
threads that share the L2 cache, which happens to be the entire big
core.

Without this patchset, the SMT domain would be defined as the group
of threads that share the L2 cache. The scheduler would then treat
any two threads in the big-core identically, resulting in run-to-run
variance depending on whether the software threads land on a pair of
threads within the same L1-cache group or on threads from different
groups.
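
To make that concrete, here is a minimal sketch of the SMT-level mask
that the second patch wires up, assuming the cpu_smallcore_mask()
helper introduced in the first patch (the hook into the topology table
is elided here):

#ifdef CONFIG_SCHED_SMT
static const struct cpumask *smallcore_smt_mask(int cpu)
{
	/*
	 * Threads that share an L1 cache with @cpu, i.e. the threads
	 * of its component small core, rather than all 8 SMT siblings.
	 */
	return cpu_smallcore_mask(cpu);
}
#endif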

> 
> Adding the new sysfs stuff seems like overkill if that's all that you
> are trying to do.
>

The sysfs attributes are there to inform users that they have a
big-core configuration comprising two small cores, thereby allowing
them to make informed choices should they want to pin tasks to CPUs.
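
As an illustration of that use case, here is a hypothetical userspace
sketch that reads the new attribute and pins the calling task to the
small-core siblings of a given CPU. The sysfs path is from the cover
letter; the simple comma-separated parsing is an assumption that holds
for masks such as "0,2,4,6":

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
	char path[128], buf[256];
	cpu_set_t set;
	FILE *f;
	int cpu = argc > 1 ? atoi(argv[1]) : 0;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/smallcore_thread_siblings_list",
		 cpu);
	f = fopen(path, "r");
	if (!f || !fgets(buf, sizeof(buf), f)) {
		perror(path);
		return 1;
	}
	fclose(f);

	/* Assumes a flat comma-separated cpulist such as "0,2,4,6". */
	CPU_ZERO(&set);
	for (char *tok = strtok(buf, ",\n"); tok; tok = strtok(NULL, ",\n"))
		CPU_SET(atoi(tok), &set);

	/* Pin the calling task to the small-core siblings. */
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}
	return 0;
}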

--
Thanks and Regards
gautham.


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v8 1/3] powerpc: Detect the presence of big-cores via "ibm,thread-groups"
  2018-09-21 17:17     ` Gautham R Shenoy
@ 2018-09-23 23:48       ` Michael Neuling
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Neuling @ 2018-09-23 23:48 UTC (permalink / raw)
  To: ego
  Cc: Aneesh Kumar K.V, Srikar Dronamraju, Michael Ellerman,
	Benjamin Herrenschmidt, Vaidyanathan Srinivasan, Akshay Adiga,
	Shilpasri G Bhat, Oliver O'Halloran, Nicholas Piggin,
	Murilo Opsfelder Araujo, Anton Blanchard, linuxppc-dev,
	linux-kernel

On Fri, 2018-09-21 at 22:47 +0530, Gautham R Shenoy wrote:
> Hello Michael,
> 
> On Fri, Sep 21, 2018 at 01:02:45PM +1000, Michael Neuling wrote:
> > This doesn't compile for me with:
> > 
> > arch/powerpc/kernel/smp.c: In function ‘smp_prepare_cpus’:
> > arch/powerpc/kernel/smp.c:812:23: error: ‘tg.threads_per_group’ may be used
> > uninitialized in this function [-Werror=maybe-uninitialized]
> >   struct thread_groups tg;
> >                        ^
> > arch/powerpc/kernel/smp.c:812:23: error: ‘tg.nr_groups’ may be used
> > uninitialized in this function [-Werror=maybe-uninitialized]
> > cc1: all warnings being treated as errors
> > /home/mikey/src/linux-ozlabs/scripts/Makefile.build:305: recipe for target
> > 'arch/powerpc/kernel/smp.o' failed
> > 
> 
> I couldn't reproduce this error with gcc 4.8.5 or 8.1.1 using
> pseries_defconfig or powernv_defconfig with CONFIG_PPC_WERROR=y.

I'm using gcc 5.4.0, which ships in Ubuntu 16.04 (LTS).

> Does the following delta patch make it work?

Yep that fixes it, thanks.

Mikey


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore
  2018-09-22 11:03   ` Gautham R Shenoy
@ 2018-09-25 22:16     ` Dave Hansen
  2018-09-26  6:06       ` Gautham R Shenoy
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Hansen @ 2018-09-25 22:16 UTC (permalink / raw)
  To: ego
  Cc: Aneesh Kumar K.V, Srikar Dronamraju, Michael Ellerman,
	Benjamin Herrenschmidt, Michael Neuling, Vaidyanathan Srinivasan,
	Akshay Adiga, Shilpasri G Bhat, Oliver O'Halloran,
	Nicholas Piggin, Murilo Opsfelder Araujo, Anton Blanchard,
	linuxppc-dev, linux-kernel

On 09/22/2018 04:03 AM, Gautham R Shenoy wrote:
> Without this patchset, the SMT domain would be defined as the group of
> threads that share L2 cache.

Could you try to make a more clear, concise statement about the current
state of the art vs. what you want it to be?  Right now, the sched
domains do something like this in terms of ordering:

1. SMT siblings
2. Caches
3. NUMA

It sounds like you don't want SMT siblings to be the things that we use,
right?  Because some siblings share caches and some do not.  Right?  You
want something like this:

1. SMT siblings (sharing L1)
2. SMT siblings (sharing L2)
3. Other caches
4. NUMA

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore
  2018-09-25 22:16     ` Dave Hansen
@ 2018-09-26  6:06       ` Gautham R Shenoy
  0 siblings, 0 replies; 13+ messages in thread
From: Gautham R Shenoy @ 2018-09-26  6:06 UTC (permalink / raw)
  To: Dave Hansen
  Cc: ego, Aneesh Kumar K.V, Srikar Dronamraju, Michael Ellerman,
	Benjamin Herrenschmidt, Michael Neuling, Vaidyanathan Srinivasan,
	Akshay Adiga, Shilpasri G Bhat, Oliver O'Halloran,
	Nicholas Piggin, Murilo Opsfelder Araujo, Anton Blanchard,
	linuxppc-dev, linux-kernel

Hello Dave,

On Tue, Sep 25, 2018 at 03:16:30PM -0700, Dave Hansen wrote:
> On 09/22/2018 04:03 AM, Gautham R Shenoy wrote:
> > Without this patchset, the SMT domain would be defined as the group of
> > threads that share L2 cache.
> 
> Could you try to make a more clear, concise statement about the current
> state of the art vs. what you want it to be?  Right now, the sched
> domains do something like this in terms of ordering:
> 
> 1. SMT siblings
> 2. Caches
> 3. NUMA

Yes, you are right. The state of the art on POWER9 machines with SMT8
cores is as you described above, with:

1. SMT siblings sharing the L2 cache, called the SMT domain
2. Cores on the same die, called the DIE domain
3. NUMA

> 
> It sounds like you don't want SMT siblings to be the things that we use,
> right?  Because some siblings share caches and some do not.  Right?  You
> want something like this:
> 
> 1. SMT siblings (sharing L1)
> 2. SMT siblings (sharing L2)
> 3. Other caches
> 4. NUMA
>

Yes, with this patchset the sched-domain hierarchy on POWER9 machines
with SMT8 cores will be as follows (a sketch of the topology table
follows the list):

1. SMT siblings sharing L1 cache, called the SMT domain
2. SMT siblings sharing L2 cache, called the CACHE domain (introduced in
   commit 96d91431d691 "powerpc/smp: Add Power9 scheduler topology")
3. Cores on the same die, called the DIE domain.
4. NUMA
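
A sketch of the resulting topology table, modelled on the existing
power9_topology in arch/powerpc/kernel/smp.c; the small-core SMT mask
at the first level reflects this series, and the helper names are my
assumptions:

static struct sched_domain_topology_level power9_topology[] = {
#ifdef CONFIG_SCHED_SMT
	/* SMT: threads sharing an L1 cache (one small core). */
	{ smallcore_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
#endif
	/* CACHE: threads sharing the L2 cache (the whole big core). */
	{ shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
	/* DIE: all cores on the same die. */
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};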

--
Thanks and Regards
gautham.


^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2018-09-26  6:06 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-20 17:22 [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore Gautham R. Shenoy
2018-09-20 17:22 ` [PATCH v8 1/3] powerpc: Detect the presence of big-cores via "ibm,thread-groups" Gautham R. Shenoy
2018-09-21  3:02   ` Michael Neuling
2018-09-21 17:17     ` Gautham R Shenoy
2018-09-23 23:48       ` Michael Neuling
2018-09-20 17:22 ` [PATCH v8 2/3] powerpc: Use cpu_smallcore_sibling_mask at SMT level on bigcores Gautham R. Shenoy
2018-09-20 17:22 ` [PATCH v8 3/3] powerpc/sysfs: Add topology/smallcore_thread_siblings[_list] Gautham R. Shenoy
2018-09-21  6:20   ` kbuild test robot
2018-09-21 17:20     ` Gautham R Shenoy
2018-09-20 18:04 ` [PATCH v8 0/3] powerpc: Detection and scheduler optimization for POWER9 bigcore Dave Hansen
2018-09-22 11:03   ` Gautham R Shenoy
2018-09-25 22:16     ` Dave Hansen
2018-09-26  6:06       ` Gautham R Shenoy
