* [PATCH 0/3] Reintroduce cpu_core_mask
@ 2021-04-15 12:09 Srikar Dronamraju
  2021-04-15 12:09 ` [PATCH 1/3] powerpc/smp: " Srikar Dronamraju
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Srikar Dronamraju @ 2021-04-15 12:09 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: linuxppc-dev, Srikar Dronamraju

Daniel reported that QEMU is no longer able to see requested
topologies in multi-socket, single NUMA node configurations such as:
 -smp 8,maxcpus=8,cores=2,threads=2,sockets=2

This patchset reintroduces cpu_core_mask so that users can see requested
topologies while still maintaining the boot time of very large system
configurations.

It also caches the chip_id, as suggested by Michael Ellerman.

4 threads/core, 4 cores/socket, 4 sockets/node, 2 nodes in the system:
  -numa node,nodeid=0,memdev=m0 \
  -numa node,nodeid=1,memdev=m1 \
  -smp 128,sockets=8,threads=4,maxcpus=128  \
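
For reference, a minimal sketch of the complete invocation these flags
come from (the machine type, memory size and the m0/m1 backend
definitions are assumptions, not taken from the actual test):

  qemu-system-ppc64 -M pseries -m 64G \
    -object memory-backend-ram,id=m0,size=32G \
    -object memory-backend-ram,id=m1,size=32G \
    -numa node,nodeid=0,memdev=m0 \
    -numa node,nodeid=1,memdev=m1 \
    -smp 128,sockets=8,threads=4,maxcpus=128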

5.12.0-rc5 (or any kernel with commit 4ca234a9cbd7)
---------------------------------------------------
srikar@cloudy:~$ lscpu
Architecture:                    ppc64le
Byte Order:                      Little Endian
CPU(s):                          128
On-line CPU(s) list:             0-127
Thread(s) per core:              4
Core(s) per socket:              16
Socket(s):                       2                 <<<<<-----
NUMA node(s):                    2
Model:                           2.3 (pvr 004e 1203)
Model name:                      POWER9 (architected), altivec supported
Hypervisor vendor:               KVM
Virtualization type:             para
L1d cache:                       1 MiB
L1i cache:                       1 MiB
NUMA node0 CPU(s):               0-15,32-47,64-79,96-111
NUMA node1 CPU(s):               16-31,48-63,80-95,112-127
--
srikar@cloudy:~$ dmesg |grep smp
[    0.010658] smp: Bringing up secondary CPUs ...
[    0.424681] smp: Brought up 2 nodes, 128 CPUs
--

5.12.0-rc5 + 3 patches
----------------------
srikar@cloudy:~$ lscpu
Architecture:                    ppc64le
Byte Order:                      Little Endian
CPU(s):                          128
On-line CPU(s) list:             0-127
Thread(s) per core:              4
Core(s) per socket:              4
Socket(s):                       8    <<<<-----
NUMA node(s):                    2
Model:                           2.3 (pvr 004e 1203)
Model name:                      POWER9 (architected), altivec supported
Hypervisor vendor:               KVM
Virtualization type:             para
L1d cache:                       1 MiB
L1i cache:                       1 MiB
NUMA node0 CPU(s):               0-15,32-47,64-79,96-111
NUMA node1 CPU(s):               16-31,48-63,80-95,112-127
--
srikar@cloudy:~$ dmesg |grep smp
[    0.010372] smp: Bringing up secondary CPUs ...
[    0.417892] smp: Brought up 2 nodes, 128 CPUs
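
The socket count can also be cross-checked against the mask this series
reintroduces: core_siblings_list is the sysfs view of
topology_core_cpumask. A hypothetical session on the patched kernel
(the 0-15 range assumes chip-ids are handed out in contiguous blocks of
16 CPUs):

srikar@cloudy:~$ cat /sys/devices/system/cpu/cpu0/topology/core_siblings_list
0-15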

5.12.0-rc5
----------
srikar@cloudy:~$  lscpu
Architecture:                    ppc64le
Byte Order:                      Little Endian
CPU(s):                          1024
On-line CPU(s) list:             0-1023
Thread(s) per core:              8
Core(s) per socket:              128
Socket(s):                       1
NUMA node(s):                    1
Model:                           2.3 (pvr 004e 1203)
Model name:                      POWER9 (architected), altivec supported
Hypervisor vendor:               KVM
Virtualization type:             para
L1d cache:                       4 MiB
L1i cache:                       4 MiB
NUMA node0 CPU(s):               0-1023
srikar@cloudy:~$ dmesg | grep smp
[    0.027753] smp: Bringing up secondary CPUs ...
[    2.315193] smp: Brought up 1 node, 1024 CPUs

5.12.0-rc5 + 3 patches
----------------------
srikar@cloudy:~$ dmesg | grep smp
[    0.027659] smp: Bringing up secondary CPUs ...
[    2.532739] smp: Brought up 1 node, 1024 CPUs

I have also booted and tested the kernels on PowerVM and PowerNV;
there too, the increase in the time to bring up secondary CPUs is
negligible.

Srikar Dronamraju (3):
  powerpc/smp: Reintroduce cpu_core_mask
  Revert "powerpc/topology: Update topology_core_cpumask"
  powerpc/smp: Cache CPU to chip lookup

 arch/powerpc/include/asm/smp.h      |  6 ++++
 arch/powerpc/include/asm/topology.h |  2 +-
 arch/powerpc/kernel/prom.c          | 19 +++++++---
 arch/powerpc/kernel/smp.c           | 56 +++++++++++++++++++++++++----
 4 files changed, 71 insertions(+), 12 deletions(-)

-- 
2.25.1



* [PATCH 1/3] powerpc/smp: Reintroduce cpu_core_mask
  2021-04-15 12:09 [PATCH 0/3] Reintroduce cpu_core_mask Srikar Dronamraju
@ 2021-04-15 12:09 ` Srikar Dronamraju
  2021-04-15 17:11   ` Gautham R Shenoy
  2021-04-16  3:21   ` David Gibson
  2021-04-15 12:09 ` [PATCH 2/3] Revert "powerpc/topology: Update topology_core_cpumask" Srikar Dronamraju
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 16+ messages in thread
From: Srikar Dronamraju @ 2021-04-15 12:09 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Nathan Lynch, Gautham R Shenoy, Srikar Dronamraju,
	Peter Zijlstra, Daniel Henrique Barboza, Valentin Schneider,
	qemu-ppc, Cedric Le Goater, linuxppc-dev, Ingo Molnar,
	David Gibson

Daniel reported that with commit 4ca234a9cbd7 ("powerpc/smp: Stop
updating cpu_core_mask") QEMU was unable to set single NUMA node SMP
topologies such as:
 -smp 8,maxcpus=8,cores=2,threads=2,sockets=2
 i.e. he expected 2 sockets in one NUMA node.

The above commit helped reduce boot time on large systems, for example
a 4096 vCPU single-socket QEMU instance. PAPR is silent on having more
than one socket within a NUMA node.

cpu_core_mask and cpu_cpu_mask for any CPU would be the same unless
the number of sockets differs from the number of NUMA nodes.
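
For example, with the 8-socket/2-node configuration from the cover
letter, cpu_cpu_mask(0) spans the 64 CPUs of node 0 while
cpu_core_mask(0) spans only the 16 CPUs of socket 0; with 2 sockets
and 2 nodes the two masks would be identical.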

One option is to reintroduce cpu_core_mask but use a slightly
different method to arrive at it. Previously, each CPU's chip-id was
compared with every other CPU's chip-id to verify whether the two
CPUs were related at the chip level. Now, once a CPU 'A' is found to
be related (or unrelated) to another CPU 'B', all thread siblings of
'A' and all thread siblings of 'B' are automatically marked as
related (or unrelated) as well.

Also, if a platform doesn't support the ibm,chip-id property, i.e. its
cpu_to_chip_id() returns -1, cpu_core_map holds a copy of
cpu_cpu_mask().
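
To make the saving concrete: in a 1024-CPU, 8-threads-per-core guest,
the old scheme compared the chip-id of each onlining CPU against every
other online CPU (roughly 1024 device-tree lookups per CPU), whereas
marking whole sibling groups at once needs roughly one comparison per
core (about 128).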

Fixes: 4ca234a9cbd7 ("powerpc/smp: Stop updating cpu_core_mask")
Cc: linuxppc-dev@lists.ozlabs.org
Cc: qemu-ppc@nongnu.org
Cc: Cedric Le Goater <clg@kaod.org>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Nathan Lynch <nathanl@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
Reported-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/smp.h |  5 +++++
 arch/powerpc/kernel/smp.c      | 39 ++++++++++++++++++++++++++++------
 2 files changed, 37 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
index 7a13bc20f0a0..47081a9e13ca 100644
--- a/arch/powerpc/include/asm/smp.h
+++ b/arch/powerpc/include/asm/smp.h
@@ -121,6 +121,11 @@ static inline struct cpumask *cpu_sibling_mask(int cpu)
 	return per_cpu(cpu_sibling_map, cpu);
 }
 
+static inline struct cpumask *cpu_core_mask(int cpu)
+{
+	return per_cpu(cpu_core_map, cpu);
+}
+
 static inline struct cpumask *cpu_l2_cache_mask(int cpu)
 {
 	return per_cpu(cpu_l2_cache_map, cpu);
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 5a4d59a1070d..5c7ce1d50631 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1057,17 +1057,12 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 				local_memory_node(numa_cpu_lookup_table[cpu]));
 		}
 #endif
-		/*
-		 * cpu_core_map is now more updated and exists only since
-		 * its been exported for long. It only will have a snapshot
-		 * of cpu_cpu_mask.
-		 */
-		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
 	}
 
 	/* Init the cpumasks so the boot CPU is related to itself */
 	cpumask_set_cpu(boot_cpuid, cpu_sibling_mask(boot_cpuid));
 	cpumask_set_cpu(boot_cpuid, cpu_l2_cache_mask(boot_cpuid));
+	cpumask_set_cpu(boot_cpuid, cpu_core_mask(boot_cpuid));
 
 	if (has_coregroup_support())
 		cpumask_set_cpu(boot_cpuid, cpu_coregroup_mask(boot_cpuid));
@@ -1408,6 +1403,9 @@ static void remove_cpu_from_masks(int cpu)
 			set_cpus_unrelated(cpu, i, cpu_smallcore_mask);
 	}
 
+	for_each_cpu(i, cpu_core_mask(cpu))
+		set_cpus_unrelated(cpu, i, cpu_core_mask);
+
 	if (has_coregroup_support()) {
 		for_each_cpu(i, cpu_coregroup_mask(cpu))
 			set_cpus_unrelated(cpu, i, cpu_coregroup_mask);
@@ -1468,8 +1466,11 @@ static void update_coregroup_mask(int cpu, cpumask_var_t *mask)
 
 static void add_cpu_to_masks(int cpu)
 {
+	struct cpumask *(*submask_fn)(int) = cpu_sibling_mask;
 	int first_thread = cpu_first_thread_sibling(cpu);
+	int chip_id = cpu_to_chip_id(cpu);
 	cpumask_var_t mask;
+	bool ret;
 	int i;
 
 	/*
@@ -1485,12 +1486,36 @@ static void add_cpu_to_masks(int cpu)
 	add_cpu_to_smallcore_masks(cpu);
 
 	/* In CPU-hotplug path, hence use GFP_ATOMIC */
-	alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
+	ret = alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
 	update_mask_by_l2(cpu, &mask);
 
 	if (has_coregroup_support())
 		update_coregroup_mask(cpu, &mask);
 
+	if (chip_id == -1 || !ret) {
+		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
+		goto out;
+	}
+
+	if (shared_caches)
+		submask_fn = cpu_l2_cache_mask;
+
+	/* Update core_mask with all the CPUs that are part of submask */
+	or_cpumasks_related(cpu, cpu, submask_fn, cpu_core_mask);
+
+	/* Skip all CPUs already part of current CPU core mask */
+	cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu));
+
+	for_each_cpu(i, mask) {
+		if (chip_id == cpu_to_chip_id(i)) {
+			or_cpumasks_related(cpu, i, submask_fn, cpu_core_mask);
+			cpumask_andnot(mask, mask, submask_fn(i));
+		} else {
+			cpumask_andnot(mask, mask, cpu_core_mask(i));
+		}
+	}
+
+out:
 	free_cpumask_var(mask);
 }
 
-- 
2.25.1



* [PATCH 2/3] Revert "powerpc/topology: Update topology_core_cpumask"
  2021-04-15 12:09 [PATCH 0/3] Reintroduce cpu_core_mask Srikar Dronamraju
  2021-04-15 12:09 ` [PATCH 1/3] powerpc/smp: " Srikar Dronamraju
@ 2021-04-15 12:09 ` Srikar Dronamraju
  2021-04-15 12:09 ` [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup Srikar Dronamraju
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Srikar Dronamraju @ 2021-04-15 12:09 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Nathan Lynch, Gautham R Shenoy, Srikar Dronamraju,
	Peter Zijlstra, Daniel Henrique Barboza, Valentin Schneider,
	qemu-ppc, Cedric Le Goater, linuxppc-dev, Ingo Molnar,
	David Gibson

Now that cpu_core_mask has been reintroduced, let's revert
commit 4bce545903fa ("powerpc/topology: Update topology_core_cpumask").

After this commit, lscpu should reflect topologies as requested by the
user when a QEMU instance is launched with a NUMA node spanning
multiple sockets.
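
For context, topology_core_cpumask backs the core_siblings files under
/sys/devices/system/cpu/cpuN/topology/, which tools such as lscpu use
to compute the socket count.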

Cc: linuxppc-dev@lists.ozlabs.org
Cc: qemu-ppc@nongnu.org
Cc: Cedric Le Goater <clg@kaod.org>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Nathan Lynch <nathanl@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
Reported-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/topology.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
index 3beeb030cd78..e4db64c0e184 100644
--- a/arch/powerpc/include/asm/topology.h
+++ b/arch/powerpc/include/asm/topology.h
@@ -126,7 +126,7 @@ static inline int cpu_to_coregroup_id(int cpu)
 #define topology_physical_package_id(cpu)	(cpu_to_chip_id(cpu))
 
 #define topology_sibling_cpumask(cpu)	(per_cpu(cpu_sibling_map, cpu))
-#define topology_core_cpumask(cpu)	(cpu_cpu_mask(cpu))
+#define topology_core_cpumask(cpu)	(per_cpu(cpu_core_map, cpu))
 #define topology_core_id(cpu)		(cpu_to_core_id(cpu))
 
 #endif
-- 
2.25.1



* [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup
  2021-04-15 12:09 [PATCH 0/3] Reintroduce cpu_core_mask Srikar Dronamraju
  2021-04-15 12:09 ` [PATCH 1/3] powerpc/smp: " Srikar Dronamraju
  2021-04-15 12:09 ` [PATCH 2/3] Revert "powerpc/topology: Update topology_core_cpumask" Srikar Dronamraju
@ 2021-04-15 12:09 ` Srikar Dronamraju
  2021-04-15 17:19   ` Gautham R Shenoy
  2021-04-15 12:17 ` [PATCH 0/3] Reintroduce cpu_core_mask Daniel Henrique Barboza
  2021-04-19  4:00 ` Michael Ellerman
  4 siblings, 1 reply; 16+ messages in thread
From: Srikar Dronamraju @ 2021-04-15 12:09 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Nathan Lynch, Gautham R Shenoy, Srikar Dronamraju,
	Peter Zijlstra, Daniel Henrique Barboza, Valentin Schneider,
	qemu-ppc, Cedric Le Goater, linuxppc-dev, Ingo Molnar,
	David Gibson

On systems with many CPUs per node, even with the filtered matching of
related CPUs, there can be a large number of calls to cpu_to_chip_id()
for the same CPU. For example, in a 4096 vCPU, 1 node QEMU
configuration with 4 threads per core, the system could see up to 1024
calls to cpu_to_chip_id() for the same CPU. On a given system,
cpu_to_chip_id() for a given CPU always returns the same value. Hence
cache the result in a lookup table for use in subsequent calls.

Since all CPUs sharing the same core belong to the same chip, the
lookup table has one entry per core. chip_id_lookup_table is not
freed; it is reused when a CPU comes back online after having been
offlined.
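
As a back-of-the-envelope check (assuming 4-byte ints): the 4096 vCPU,
4-threads-per-core configuration above needs 4096 / 4 = 1024 table
entries, i.e. a 4 KiB allocation, in exchange for avoiding up to ~1024
repeated device-tree walks per CPU.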

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: qemu-ppc@nongnu.org
Cc: Cedric Le Goater <clg@kaod.org>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Nathan Lynch <nathanl@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
Reported-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/smp.h |  1 +
 arch/powerpc/kernel/prom.c     | 19 +++++++++++++++----
 arch/powerpc/kernel/smp.c      | 21 +++++++++++++++++++--
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
index 47081a9e13ca..03b3d010cbab 100644
--- a/arch/powerpc/include/asm/smp.h
+++ b/arch/powerpc/include/asm/smp.h
@@ -31,6 +31,7 @@ extern u32 *cpu_to_phys_id;
 extern bool coregroup_enabled;
 
 extern int cpu_to_chip_id(int cpu);
+extern int *chip_id_lookup_table;
 
 #ifdef CONFIG_SMP
 
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 9a4797d1d40d..6d2e4a5bc471 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -65,6 +65,8 @@
 #define DBG(fmt...)
 #endif
 
+int *chip_id_lookup_table;
+
 #ifdef CONFIG_PPC64
 int __initdata iommu_is_off;
 int __initdata iommu_force_on;
@@ -914,13 +916,22 @@ EXPORT_SYMBOL(of_get_ibm_chip_id);
 int cpu_to_chip_id(int cpu)
 {
 	struct device_node *np;
+	int ret = -1, idx;
+
+	idx = cpu / threads_per_core;
+	if (chip_id_lookup_table && chip_id_lookup_table[idx] != -1)
+		return chip_id_lookup_table[idx];
 
 	np = of_get_cpu_node(cpu, NULL);
-	if (!np)
-		return -1;
+	if (np) {
+		ret = of_get_ibm_chip_id(np);
+		of_node_put(np);
+
+		if (chip_id_lookup_table)
+			chip_id_lookup_table[idx] = ret;
+	}
 
-	of_node_put(np);
-	return of_get_ibm_chip_id(np);
+	return ret;
 }
 EXPORT_SYMBOL(cpu_to_chip_id);
 
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 5c7ce1d50631..50520fbea424 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1073,6 +1073,20 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 				cpu_smallcore_mask(boot_cpuid));
 	}
 
+	if (cpu_to_chip_id(boot_cpuid) != -1) {
+		int idx = num_possible_cpus() / threads_per_core;
+
+		/*
+		 * All threads of a core belong to the same chip, so
+		 * chip_id_lookup_table has one entry per core.
+		 * Assumption: if boot_cpuid doesn't have a chip-id, then
+		 * no other CPU will have one either.
+		 */
+		chip_id_lookup_table = kcalloc(idx, sizeof(int), GFP_KERNEL);
+		if (chip_id_lookup_table)
+			memset(chip_id_lookup_table, -1, sizeof(int) * idx);
+	}
+
 	if (smp_ops && smp_ops->probe)
 		smp_ops->probe();
 }
@@ -1468,8 +1482,8 @@ static void add_cpu_to_masks(int cpu)
 {
 	struct cpumask *(*submask_fn)(int) = cpu_sibling_mask;
 	int first_thread = cpu_first_thread_sibling(cpu);
-	int chip_id = cpu_to_chip_id(cpu);
 	cpumask_var_t mask;
+	int chip_id = -1;
 	bool ret;
 	int i;
 
@@ -1492,7 +1506,10 @@ static void add_cpu_to_masks(int cpu)
 	if (has_coregroup_support())
 		update_coregroup_mask(cpu, &mask);
 
-	if (chip_id == -1 || !ret) {
+	if (chip_id_lookup_table && ret)
+		chip_id = cpu_to_chip_id(cpu);
+
+	if (chip_id == -1) {
 		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
 		goto out;
 	}
-- 
2.25.1



* Re: [PATCH 0/3] Reintroduce cpu_core_mask
  2021-04-15 12:09 [PATCH 0/3] Reintroduce cpu_core_mask Srikar Dronamraju
                   ` (2 preceding siblings ...)
  2021-04-15 12:09 ` [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup Srikar Dronamraju
@ 2021-04-15 12:17 ` Daniel Henrique Barboza
  2021-04-19  4:00 ` Michael Ellerman
  4 siblings, 0 replies; 16+ messages in thread
From: Daniel Henrique Barboza @ 2021-04-15 12:17 UTC (permalink / raw)
  To: Srikar Dronamraju, Michael Ellerman; +Cc: linuxppc-dev

Hi,


Using a QEMU pseries guest with this follwing SMP topology, with a
single NUMA node:


(...) -smp 32,threads=4,cores=4,sockets=2, (...)

This is the output of lscpu with a guest running v5.12-rc5:

[root@localhost ~]# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              32
On-line CPU(s) list: 0-31
Thread(s) per core:  4
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        1
Model:               2.2 (pvr 004e 1202)
Model name:          POWER9 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           32K
L1i cache:           32K
NUMA node0 CPU(s):   0-31
[root@localhost ~]#


The changes with cpu_core_mask made the topology sockets matching NUMA nodes.
In this case, given that we have a single NUMA node, the SMP topology got
adjusted to have 8 cores instead of 4 so we can have a single socket as well.

Although sockets equal to NUMA nodes is true for Power hardware, QEMU doesn't
have this constraint and users expect sockets and NUMA nodes to be kind of
independent, regardless of how unpractical that would be with real hardware.


The same guest running a kernel with this series applied:


[root@localhost ~]# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              32
On-line CPU(s) list: 0-31
Thread(s) per core:  4
Core(s) per socket:  4
Socket(s):           2
NUMA node(s):        1
Model:               2.2 (pvr 004e 1202)
Model name:          POWER9 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           32K
L1i cache:           32K
NUMA node0 CPU(s):   0-31


The sockets and NUMA nodes are being represented separately, as intended via
the QEMU command line.


Thanks for the looking this up, Srikar. For all patches:


Tested-by: Daniel Henrique Barboza <danielhb413@gmail.com>





* Re: [PATCH 1/3] powerpc/smp: Reintroduce cpu_core_mask
  2021-04-15 12:09 ` [PATCH 1/3] powerpc/smp: " Srikar Dronamraju
@ 2021-04-15 17:11   ` Gautham R Shenoy
  2021-04-15 17:36     ` Srikar Dronamraju
  2021-04-16  3:21   ` David Gibson
  1 sibling, 1 reply; 16+ messages in thread
From: Gautham R Shenoy @ 2021-04-15 17:11 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Nathan Lynch, Gautham R Shenoy, Peter Zijlstra,
	Daniel Henrique Barboza, Valentin Schneider, qemu-ppc,
	Cedric Le Goater, linuxppc-dev, Ingo Molnar, David Gibson

Hi Srikar,

On Thu, Apr 15, 2021 at 05:39:32PM +0530, Srikar Dronamraju wrote:
 [..snip..]



> @@ -1485,12 +1486,36 @@ static void add_cpu_to_masks(int cpu)
>  	add_cpu_to_smallcore_masks(cpu);
> 
>  	/* In CPU-hotplug path, hence use GFP_ATOMIC */
> -	alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
> +	ret = alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
>  	update_mask_by_l2(cpu, &mask);
> 
>  	if (has_coregroup_support())
>  		update_coregroup_mask(cpu, &mask);
> 
> +	if (chip_id == -1 || !ret) {
> +		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
> +		goto out;
> +	}
> +
> +	if (shared_caches)
> +		submask_fn = cpu_l2_cache_mask;
> +
> +	/* Update core_mask with all the CPUs that are part of submask */
> +	or_cpumasks_related(cpu, cpu, submask_fn, cpu_core_mask);
>

If coregroups exist, we can add the CPUs of the coregroup to
cpu_core_mask, thereby reducing the scope of the for_each_cpu() search
below. This will still cut down the time on baremetal systems
supporting coregroups.
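
Something along these lines, perhaps (an untested sketch; it assumes
every CPU in a coregroup always shares the chip):

	/* Seed core_mask with the coregroup to shrink the search space */
	if (has_coregroup_support())
		or_cpumasks_related(cpu, cpu, cpu_coregroup_mask,
				    cpu_core_mask);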


> +	/* Skip all CPUs already part of current CPU core mask */
> +	cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu));
> +
> +	for_each_cpu(i, mask) {
> +		if (chip_id == cpu_to_chip_id(i)) {
> +			or_cpumasks_related(cpu, i, submask_fn, cpu_core_mask);
> +			cpumask_andnot(mask, mask, submask_fn(i));
> +		} else {
> +			cpumask_andnot(mask, mask, cpu_core_mask(i));
> +		}
> +	}
> +
> +out:
>  	free_cpumask_var(mask);
>  }
> 
> -- 
> 2.25.1
> 


* Re: [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup
  2021-04-15 12:09 ` [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup Srikar Dronamraju
@ 2021-04-15 17:19   ` Gautham R Shenoy
  2021-04-15 17:51     ` Srikar Dronamraju
  0 siblings, 1 reply; 16+ messages in thread
From: Gautham R Shenoy @ 2021-04-15 17:19 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Nathan Lynch, Gautham R Shenoy, Peter Zijlstra,
	Daniel Henrique Barboza, Valentin Schneider, qemu-ppc,
	Cedric Le Goater, linuxppc-dev, Ingo Molnar, David Gibson

On Thu, Apr 15, 2021 at 05:39:34PM +0530, Srikar Dronamraju wrote:
> On systems with many CPUs per node, even with the filtered matching of
> related CPUs, there can be a large number of calls to cpu_to_chip_id()
> for the same CPU. For example, in a 4096 vCPU, 1 node QEMU
> configuration with 4 threads per core, the system could see up to 1024
> calls to cpu_to_chip_id() for the same CPU. On a given system,
> cpu_to_chip_id() for a given CPU always returns the same value. Hence
> cache the result in a lookup table for use in subsequent calls.
> 
> Since all CPUs sharing the same core belong to the same chip, the
> lookup table has one entry per core. chip_id_lookup_table is not
> freed; it is reused when a CPU comes back online after having been
> offlined.
> 
> Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: qemu-ppc@nongnu.org
> Cc: Cedric Le Goater <clg@kaod.org>
> Cc: David Gibson <david@gibson.dropbear.id.au>
> Cc: Nathan Lynch <nathanl@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Valentin Schneider <valentin.schneider@arm.com>
> Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
> Reported-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/smp.h |  1 +
>  arch/powerpc/kernel/prom.c     | 19 +++++++++++++++----
>  arch/powerpc/kernel/smp.c      | 21 +++++++++++++++++++--
>  3 files changed, 35 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
> index 47081a9e13ca..03b3d010cbab 100644
> --- a/arch/powerpc/include/asm/smp.h
> +++ b/arch/powerpc/include/asm/smp.h
> @@ -31,6 +31,7 @@ extern u32 *cpu_to_phys_id;
>  extern bool coregroup_enabled;
> 
>  extern int cpu_to_chip_id(int cpu);
> +extern int *chip_id_lookup_table;
> 
>  #ifdef CONFIG_SMP
> 
> diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
> index 9a4797d1d40d..6d2e4a5bc471 100644
> --- a/arch/powerpc/kernel/prom.c
> +++ b/arch/powerpc/kernel/prom.c
> @@ -65,6 +65,8 @@
>  #define DBG(fmt...)
>  #endif
> 
> +int *chip_id_lookup_table;
> +
>  #ifdef CONFIG_PPC64
>  int __initdata iommu_is_off;
>  int __initdata iommu_force_on;
> @@ -914,13 +916,22 @@ EXPORT_SYMBOL(of_get_ibm_chip_id);
>  int cpu_to_chip_id(int cpu)
>  {
>  	struct device_node *np;
> +	int ret = -1, idx;
> +
> +	idx = cpu / threads_per_core;
> +	if (chip_id_lookup_table && chip_id_lookup_table[idx] != -1)

The value -1 is ambiguous, since we won't be able to determine whether
it is because we haven't yet made an of_get_ibm_chip_id() call, or
because the call was made and returned -1.

Thus, perhaps we can initialize chip_id_lookup_table[idx] with a
different, unique negative value. How about S32_MIN, checking here
that chip_id_lookup_table[idx] differs from it?
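
Roughly like this (illustrative only; CHIP_ID_UNSET is a stand-in name
for whatever sentinel we pick):

	/* "not looked up yet", distinct from a genuine -1 result */
	#define CHIP_ID_UNSET	S32_MIN

	idx = cpu / threads_per_core;
	if (chip_id_lookup_table && chip_id_lookup_table[idx] != CHIP_ID_UNSET)
		return chip_id_lookup_table[idx];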


> +		return chip_id_lookup_table[idx];
> 
>  	np = of_get_cpu_node(cpu, NULL);
> -	if (!np)
> -		return -1;
> +	if (np) {
> +		ret = of_get_ibm_chip_id(np);
> +		of_node_put(np);
> +
> +		if (chip_id_lookup_table)
> +			chip_id_lookup_table[idx] = ret;
> +	}
> 
> -	of_node_put(np);
> -	return of_get_ibm_chip_id(np);
> +	return ret;
>  }
>  EXPORT_SYMBOL(cpu_to_chip_id);
> 
> diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> index 5c7ce1d50631..50520fbea424 100644
> --- a/arch/powerpc/kernel/smp.c
> +++ b/arch/powerpc/kernel/smp.c
> @@ -1073,6 +1073,20 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
>  				cpu_smallcore_mask(boot_cpuid));
>  	}
> 
> +	if (cpu_to_chip_id(boot_cpuid) != -1) {
> +		int idx = num_possible_cpus() / threads_per_core;
> +
> +		/*
> +		 * All threads of a core belong to the same chip, so
> +		 * chip_id_lookup_table has one entry per core.
> +		 * Assumption: if boot_cpuid doesn't have a chip-id, then
> +		 * no other CPU will have one either.
> +		 */
> +		chip_id_lookup_table = kcalloc(idx, sizeof(int), GFP_KERNEL);
> +		if (chip_id_lookup_table)
> +			memset(chip_id_lookup_table, -1, sizeof(int) * idx);
> +	}
> +
>  	if (smp_ops && smp_ops->probe)
>  		smp_ops->probe();
>  }
> @@ -1468,8 +1482,8 @@ static void add_cpu_to_masks(int cpu)
>  {
>  	struct cpumask *(*submask_fn)(int) = cpu_sibling_mask;
>  	int first_thread = cpu_first_thread_sibling(cpu);
> -	int chip_id = cpu_to_chip_id(cpu);
>  	cpumask_var_t mask;
> +	int chip_id = -1;
>  	bool ret;
>  	int i;
> 
> @@ -1492,7 +1506,10 @@ static void add_cpu_to_masks(int cpu)
>  	if (has_coregroup_support())
>  		update_coregroup_mask(cpu, &mask);
> 
> -	if (chip_id == -1 || !ret) {
> +	if (chip_id_lookup_table && ret)
> +		chip_id = cpu_to_chip_id(cpu);
> +
> +	if (chip_id == -1) {
>  		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
>  		goto out;
>  	}
> -- 
> 2.25.1
> 


* Re: [PATCH 1/3] powerpc/smp: Reintroduce cpu_core_mask
  2021-04-15 17:11   ` Gautham R Shenoy
@ 2021-04-15 17:36     ` Srikar Dronamraju
  0 siblings, 0 replies; 16+ messages in thread
From: Srikar Dronamraju @ 2021-04-15 17:36 UTC (permalink / raw)
  To: Gautham R Shenoy
  Cc: Nathan Lynch, Peter Zijlstra, Daniel Henrique Barboza,
	Valentin Schneider, qemu-ppc, Cedric Le Goater, linuxppc-dev,
	Ingo Molnar, David Gibson

* Gautham R Shenoy <ego@linux.vnet.ibm.com> [2021-04-15 22:41:34]:

> Hi Srikar,
> 
> 

Thanks for taking a look.

> > @@ -1485,12 +1486,36 @@ static void add_cpu_to_masks(int cpu)
> >  	add_cpu_to_smallcore_masks(cpu);
> > 
> >  	/* In CPU-hotplug path, hence use GFP_ATOMIC */
> > -	alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
> > +	ret = alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
> >  	update_mask_by_l2(cpu, &mask);
> > 
> >  	if (has_coregroup_support())
> >  		update_coregroup_mask(cpu, &mask);
> > 
> > +	if (chip_id == -1 || !ret) {
> > +		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
> > +		goto out;
> > +	}
> > +
> > +	if (shared_caches)
> > +		submask_fn = cpu_l2_cache_mask;
> > +
> > +	/* Update core_mask with all the CPUs that are part of submask */
> > +	or_cpumasks_related(cpu, cpu, submask_fn, cpu_core_mask);
> >
> 
> If coregroups exist, we can add the cpus of the coregroup to the
> cpu_core_mask thereby reducing the scope of the for_each_cpu() search
> below. This will still cut down the time on Baremetal systems
> supporting coregroups.
> 

Yes, once we upstream coregroup support for baremetal, we should look
at adding it. Also note that the number of CPUs we support on
baremetal is comparatively lower than on PowerVM + QEMU, and more
importantly the number of cores per coregroup is also very low, so the
optimization may not yield much of a benefit.

It's only in the QEMU case, where we end up with many cores in the
same chip, that we see a drastic increase in the boot-up time.

-- 
Thanks and Regards
Srikar Dronamraju


* Re: [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup
  2021-04-15 17:19   ` Gautham R Shenoy
@ 2021-04-15 17:51     ` Srikar Dronamraju
  2021-04-16 15:57       ` Gautham R Shenoy
  0 siblings, 1 reply; 16+ messages in thread
From: Srikar Dronamraju @ 2021-04-15 17:51 UTC (permalink / raw)
  To: Gautham R Shenoy
  Cc: Nathan Lynch, Peter Zijlstra, Daniel Henrique Barboza,
	Valentin Schneider, qemu-ppc, Cedric Le Goater, linuxppc-dev,
	Ingo Molnar, David Gibson

* Gautham R Shenoy <ego@linux.vnet.ibm.com> [2021-04-15 22:49:21]:

> > 
> > +int *chip_id_lookup_table;
> > +
> >  #ifdef CONFIG_PPC64
> >  int __initdata iommu_is_off;
> >  int __initdata iommu_force_on;
> > @@ -914,13 +916,22 @@ EXPORT_SYMBOL(of_get_ibm_chip_id);
> >  int cpu_to_chip_id(int cpu)
> >  {
> >  	struct device_node *np;
> > +	int ret = -1, idx;
> > +
> > +	idx = cpu / threads_per_core;
> > +	if (chip_id_lookup_table && chip_id_lookup_table[idx] != -1)
> 

> The value -1 is ambiguous since we won't be able to determine if
> it is because we haven't yet made a of_get_ibm_chip_id() call
> or if of_get_ibm_chip_id() call was made and it returned a -1.
> 

We don't allocate chip_id_lookup_table unless cpu_to_chip_id() returns
a non -1 value for the boot CPU. This ensures that we don't
unnecessarily allocate chip_id_lookup_table. I also check for
chip_id_lookup_table before calling cpu_to_chip_id() for other CPUs,
which avoids the overhead of calling cpu_to_chip_id() on platforms
that don't support it. Also, it's most likely that if
chip_id_lookup_table is initialized, then the of_get_ibm_chip_id()
call will return a valid value.

Plus, below, we only populate the lookup table when of_get_cpu_node()
is valid.

So I don't see any drawbacks of initializing it to -1. Do you see any?

> Thus, perhaps we can initialize chip_id_lookup_table[idx] with a
> different unique negative value. How about S32_MIN ? and check
> chip_id_lookup_table[idx] is different here ?
> 

I had initially initialized it to -2, but then I thought we were
adding more confusion than necessary, and it was not solving any
issue.


-- 
Thanks and Regards
Srikar Dronamraju


* Re: [PATCH 1/3] powerpc/smp: Reintroduce cpu_core_mask
  2021-04-15 12:09 ` [PATCH 1/3] powerpc/smp: " Srikar Dronamraju
  2021-04-15 17:11   ` Gautham R Shenoy
@ 2021-04-16  3:21   ` David Gibson
  2021-04-16  5:45     ` Srikar Dronamraju
  1 sibling, 1 reply; 16+ messages in thread
From: David Gibson @ 2021-04-16  3:21 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Nathan Lynch, Gautham R Shenoy, Peter Zijlstra,
	Daniel Henrique Barboza, Valentin Schneider, qemu-ppc,
	Cedric Le Goater, linuxppc-dev, Ingo Molnar


On Thu, Apr 15, 2021 at 05:39:32PM +0530, Srikar Dronamraju wrote:
> Daniel reported that with commit 4ca234a9cbd7 ("powerpc/smp: Stop
> updating cpu_core_mask") QEMU was unable to set single NUMA node SMP
> topologies such as:
>  -smp 8,maxcpus=8,cores=2,threads=2,sockets=2
>  i.e. he expected 2 sockets in one NUMA node.

Well, strictly speaking, you can still set that topology in qemu but a
PAPR guest with that commit will show as having 1 socket in lscpu and
similar things.

Basically, this is because PAPR has no meaningful distinction between
cores and sockets.  So it's kind of a cosmetic problem, but it is a
user-unexpected behaviour that it would be nice to avoid if it's not
excessively difficult.

> The above commit helped to reduce boot time on Large Systems for
> example 4096 vCPU single socket QEMU instance. PAPR is silent on
> having more than one socket within a NUMA node.
> 
> cpu_core_mask and cpu_cpu_mask for any CPU would be same unless the
> number of sockets is different from the number of NUMA nodes.

Number of sockets being different from number of NUMA nodes is routine
in qemu, and I don't think it's something we should enforce.

> One option is to reintroduce cpu_core_mask but use a slightly
> different method to arrive at the cpu_core_mask. Previously each CPU's
> chip-id would be compared with all other CPU's chip-id to verify if
> both the CPUs were related at the chip level. Now if a CPU 'A' is
> found related / (unrelated) to another CPU 'B', all the thread
> siblings of 'A' and thread siblings of 'B' are automatically marked as
> related / (unrelated).
> 
> Also if a platform doesn't support ibm,chip-id property, i.e its
> cpu_to_chip_id returns -1, cpu_core_map holds a copy of
> cpu_cpu_mask().

Yeah, the other weirdness here is that ibm,chip-id isn't a PAPR
property at all - it was added for powernv.  We then added it to qemu
for PAPR guests because that was the way at the time to get the guest
to advertise the expected number of sockets.  It therefore basically
*only* exists on PAPR/qemu for that purpose, so if it's not serving it
we need to come up with something else.


-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH 1/3] powerpc/smp: Reintroduce cpu_core_mask
  2021-04-16  3:21   ` David Gibson
@ 2021-04-16  5:45     ` Srikar Dronamraju
  2021-04-19  1:17       ` David Gibson
  0 siblings, 1 reply; 16+ messages in thread
From: Srikar Dronamraju @ 2021-04-16  5:45 UTC (permalink / raw)
  To: David Gibson
  Cc: Nathan Lynch, Gautham R Shenoy, Peter Zijlstra,
	Daniel Henrique Barboza, Valentin Schneider, hegdevasant,
	qemu-ppc, Cedric Le Goater, linuxppc-dev, Ingo Molnar

* David Gibson <david@gibson.dropbear.id.au> [2021-04-16 13:21:34]:

Thanks for having a look at the patches.

> On Thu, Apr 15, 2021 at 05:39:32PM +0530, Srikar Dronamraju wrote:
> > Daniel reported that with commit 4ca234a9cbd7 ("powerpc/smp: Stop
> > updating cpu_core_mask") QEMU was unable to set single NUMA node SMP
> > topologies such as:
> >  -smp 8,maxcpus=8,cores=2,threads=2,sockets=2
> >  i.e. he expected 2 sockets in one NUMA node.
> 
> Well, strictly speaking, you can still set that toplogy in qemu but a
> PAPR guest with that commit will show as having 1 socket in lscpu and
> similar things.
> 

Right, I did mention the output of lscpu in QEMU, both with the said
commit and with the new patches, in the cover letter. Somehow I goofed
up the cc list for the cover letter.

Reference for the cover letter:
https://lore.kernel.org/linuxppc-dev/20210415120934.232271-1-srikar@linux.vnet.ibm.com/t/#u

> Basically, this is because PAPR has no meaningful distinction between
> cores and sockets.  So it's kind of a cosmetic problem, but it is a
> user-unexpected behaviour that it would be nice to avoid if it's not
> excessively difficult.
> 
> > The above commit helped to reduce boot time on Large Systems for
> > example 4096 vCPU single socket QEMU instance. PAPR is silent on
> > having more than one socket within a NUMA node.
> > 
> > cpu_core_mask and cpu_cpu_mask for any CPU would be same unless the
> > number of sockets is different from the number of NUMA nodes.
> 
> Number of sockets being different from number of NUMA nodes is routine
> in qemu, and I don't think it's something we should enforce.
> 
> > One option is to reintroduce cpu_core_mask but use a slightly
> > different method to arrive at the cpu_core_mask. Previously each CPU's
> > chip-id would be compared with all other CPU's chip-id to verify if
> > both the CPUs were related at the chip level. Now if a CPU 'A' is
> > found related / (unrelated) to another CPU 'B', all the thread
> > siblings of 'A' and thread siblings of 'B' are automatically marked as
> > related / (unrelated).
> > 
> > Also if a platform doesn't support ibm,chip-id property, i.e its
> > cpu_to_chip_id returns -1, cpu_core_map holds a copy of
> > cpu_cpu_mask().
> 
> Yeah, the other weirdness here is that ibm,chip-id isn't a PAPR
> property at all - it was added for powernv.  We then added it to qemu
> for PAPR guests because that was the way at the time to get the guest
> to advertise the expected number of sockets.  It therefore basically
> *only* exists on PAPR/qemu for that purpose, so if it's not serving it
> we need to come up with something else.
> 

Do you have ideas on what that something could look like? If it's
more beneficial, we could move over to that scheme. Also, apart from
ibm,chip-id not being a PAPR property, do you have any other concerns
with it?


-- 
Thanks and Regards
Srikar Dronamraju


* Re: [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup
  2021-04-15 17:51     ` Srikar Dronamraju
@ 2021-04-16 15:57       ` Gautham R Shenoy
  2021-04-16 16:57         ` Srikar Dronamraju
  2021-04-19  1:19         ` David Gibson
  0 siblings, 2 replies; 16+ messages in thread
From: Gautham R Shenoy @ 2021-04-16 15:57 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Nathan Lynch, Gautham R Shenoy, Peter Zijlstra,
	Daniel Henrique Barboza, Valentin Schneider, qemu-ppc,
	Cedric Le Goater, linuxppc-dev, Ingo Molnar, David Gibson

On Thu, Apr 15, 2021 at 11:21:10PM +0530, Srikar Dronamraju wrote:
> * Gautham R Shenoy <ego@linux.vnet.ibm.com> [2021-04-15 22:49:21]:
> 
> > > 
> > > +int *chip_id_lookup_table;
> > > +
> > >  #ifdef CONFIG_PPC64
> > >  int __initdata iommu_is_off;
> > >  int __initdata iommu_force_on;
> > > @@ -914,13 +916,22 @@ EXPORT_SYMBOL(of_get_ibm_chip_id);
> > >  int cpu_to_chip_id(int cpu)
> > >  {
> > >  	struct device_node *np;
> > > +	int ret = -1, idx;
> > > +
> > > +	idx = cpu / threads_per_core;
> > > +	if (chip_id_lookup_table && chip_id_lookup_table[idx] != -1)
> > 
> 
> > The value -1 is ambiguous since we won't be able to determine if
> > it is because we haven't yet made a of_get_ibm_chip_id() call
> > or if of_get_ibm_chip_id() call was made and it returned a -1.
> > 
> 
> We don't allocate chip_id_lookup_table unless cpu_to_chip_id() return
> !-1 value for the boot-cpuid. So this ensures that we dont
> unnecessarily allocate chip_id_lookup_table. Also I check for
> chip_id_lookup_table before calling cpu_to_chip_id() for other CPUs.
> So this avoids overhead of calling cpu_to_chip_id() for platforms that
> dont support it.  Also its most likely that if the
> chip_id_lookup_table is initialized then of_get_ibm_chip_id() call
> would return a valid value.
> 
> + Below we are only populating the lookup table, only when the
> of_get_cpu_node is valid.
> 
> So I dont see any drawbacks of initializing it to -1. Do you see
any?


Only if other callers of cpu_to_chip_id() don't check whether
chip_id_lookup_table has been allocated or not. From a code
readability point of view, it is easier to have that check inside
cpu_to_chip_id() than to require all its callers to make it.

> 
> > Thus, perhaps we can initialize chip_id_lookup_table[idx] with a
> > different unique negative value. How about S32_MIN ? and check
> > chip_id_lookup_table[idx] is different here ?
> > 
> 
> I had initially initialized to -2, But then I thought we adding in
> more confusion than necessary and it was not solving any issues.
> 
> 
> -- 
> Thanks and Regards
> Srikar Dronamraju


* Re: [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup
  2021-04-16 15:57       ` Gautham R Shenoy
@ 2021-04-16 16:57         ` Srikar Dronamraju
  2021-04-19  1:19         ` David Gibson
  1 sibling, 0 replies; 16+ messages in thread
From: Srikar Dronamraju @ 2021-04-16 16:57 UTC (permalink / raw)
  To: Gautham R Shenoy
  Cc: Nathan Lynch, Peter Zijlstra, Daniel Henrique Barboza,
	Valentin Schneider, qemu-ppc, Cedric Le Goater, linuxppc-dev,
	Ingo Molnar, David Gibson

* Gautham R Shenoy <ego@linux.vnet.ibm.com> [2021-04-16 21:27:48]:

> On Thu, Apr 15, 2021 at 11:21:10PM +0530, Srikar Dronamraju wrote:
> > * Gautham R Shenoy <ego@linux.vnet.ibm.com> [2021-04-15 22:49:21]:
> > 
> > > > 
> > > > +int *chip_id_lookup_table;
> > > > +
> > > >  #ifdef CONFIG_PPC64
> > > >  int __initdata iommu_is_off;
> > > >  int __initdata iommu_force_on;
> > > > @@ -914,13 +916,22 @@ EXPORT_SYMBOL(of_get_ibm_chip_id);
> > > >  int cpu_to_chip_id(int cpu)
> > > >  {
> > > >  	struct device_node *np;
> > > > +	int ret = -1, idx;
> > > > +
> > > > +	idx = cpu / threads_per_core;
> > > > +	if (chip_id_lookup_table && chip_id_lookup_table[idx] != -1)
> > > 
> > 
> > > The value -1 is ambiguous since we won't be able to determine if
> > > it is because we haven't yet made a of_get_ibm_chip_id() call
> > > or if of_get_ibm_chip_id() call was made and it returned a -1.
> > > 
> > 
> > We don't allocate chip_id_lookup_table unless cpu_to_chip_id() return
> > !-1 value for the boot-cpuid. So this ensures that we dont
> > unnecessarily allocate chip_id_lookup_table. Also I check for
> > chip_id_lookup_table before calling cpu_to_chip_id() for other CPUs.
> > So this avoids overhead of calling cpu_to_chip_id() for platforms that
> > dont support it.  Also its most likely that if the
> > chip_id_lookup_table is initialized then of_get_ibm_chip_id() call
> > would return a valid value.
> > 
> > + Below we are only populating the lookup table, only when the
> > of_get_cpu_node is valid.
> > 
> > So I dont see any drawbacks of initializing it to -1. Do you see
> any?
> 
> 
> Only if other callers of cpu_to_chip_id() don't check whether
> chip_id_lookup_table has been allocated or not. From a code
> readability point of view, it is easier to have that check inside
> cpu_to_chip_id() than to require all its callers to make it.
> 

I didn't understand your comment, but let me reiterate what I said
earlier. We don't have control over who calls cpu_to_chip_id() and
when. cpu_to_chip_id() might be called for a non-present CPU, in which
case it will return -1. Should we cache that or not?

If we cache it, we will return a wrong value once the CPU turns out to
be present. If we cache it and retry, then having one value for
"uninitialized" and another for "invalid" is the same as having a
single value for both; we just end up adding more confusing code. At
least to me, the code isn't more readable if it retries for -1 and -2
alike. After a few years, we ourselves will wonder why we have two
values when we check for and handle both of them the same way.

> > 
> > > Thus, perhaps we can initialize chip_id_lookup_table[idx] with a
> > > different unique negative value. How about S32_MIN ? and check
> > > chip_id_lookup_table[idx] is different here ?
> > > 
> > 
> > I had initially initialized to -2, But then I thought we adding in
> > more confusion than necessary and it was not solving any issues.
> > 
> > 
> > -- 
> > Thanks and Regards
> > Srikar Dronamraju

-- 
Thanks and Regards
Srikar Dronamraju


* Re: [PATCH 1/3] powerpc/smp: Reintroduce cpu_core_mask
  2021-04-16  5:45     ` Srikar Dronamraju
@ 2021-04-19  1:17       ` David Gibson
  0 siblings, 0 replies; 16+ messages in thread
From: David Gibson @ 2021-04-19  1:17 UTC (permalink / raw)
  To: Srikar Dronamraju
  Cc: Nathan Lynch, Gautham R Shenoy, Peter Zijlstra,
	Daniel Henrique Barboza, Valentin Schneider, hegdevasant,
	qemu-ppc, Cedric Le Goater, linuxppc-dev, Ingo Molnar


On Fri, Apr 16, 2021 at 11:15:49AM +0530, Srikar Dronamraju wrote:
> * David Gibson <david@gibson.dropbear.id.au> [2021-04-16 13:21:34]:
> 
> Thanks for having a look at the patches.
> 
> > On Thu, Apr 15, 2021 at 05:39:32PM +0530, Srikar Dronamraju wrote:
> > > Daniel reported that with commit 4ca234a9cbd7 ("powerpc/smp: Stop
> > > updating cpu_core_mask") QEMU was unable to set single NUMA node SMP
> > > topologies such as:
> > >  -smp 8,maxcpus=8,cores=2,threads=2,sockets=2
> > >  i.e. he expected 2 sockets in one NUMA node.
> > 
> > Well, strictly speaking, you can still set that toplogy in qemu but a
> > PAPR guest with that commit will show as having 1 socket in lscpu and
> > similar things.
> > 
> 
> Right, I did mention the o/p of lscpu in QEMU with the said commit and
> with the new patches in the cover letter. Somehow I goofed up the cc
> list for the cover letter.
> 
> Reference for the cover letter:
> https://lore.kernel.org/linuxppc-dev/20210415120934.232271-1-srikar@linux.vnet.ibm.com/t/#u
> 
> > Basically, this is because PAPR has no meaningful distinction between
> > cores and sockets.  So it's kind of a cosmetic problem, but it is a
> > user-unexpected behaviour that it would be nice to avoid if it's not
> > excessively difficult.
> > 
> > > The above commit helped to reduce boot time on Large Systems for
> > > example 4096 vCPU single socket QEMU instance. PAPR is silent on
> > > having more than one socket within a NUMA node.
> > > 
> > > cpu_core_mask and cpu_cpu_mask for any CPU would be same unless the
> > > number of sockets is different from the number of NUMA nodes.
> > 
> > Number of sockets being different from number of NUMA nodes is routine
> > in qemu, and I don't think it's something we should enforce.
> > 
> > > One option is to reintroduce cpu_core_mask but use a slightly
> > > different method to arrive at the cpu_core_mask. Previously each CPU's
> > > chip-id would be compared with all other CPU's chip-id to verify if
> > > both the CPUs were related at the chip level. Now if a CPU 'A' is
> > > found related / (unrelated) to another CPU 'B', all the thread
> > > siblings of 'A' and thread siblings of 'B' are automatically marked as
> > > related / (unrelated).
> > > 
> > > Also if a platform doesn't support ibm,chip-id property, i.e its
> > > cpu_to_chip_id returns -1, cpu_core_map holds a copy of
> > > cpu_cpu_mask().
> > 
> > Yeah, the other weirdness here is that ibm,chip-id isn't a PAPR
> > property at all - it was added for powernv.  We then added it to qemu
> > for PAPR guests because that was the way at the time to get the guest
> > to advertise the expected number of sockets.  It therefore basically
> > *only* exists on PAPR/qemu for that purpose, so if it's not serving it
> > we need to come up with something else.
> > 
> 
> Do you have ideas on what that something could look like?

Not really, sorry.

> If it's more beneficial, we could move over to that scheme. Also,
> apart from ibm,chip-id not being a PAPR property, do you have any
> other concerns with it?

I think if we can keep ibm,chip-id doing this job, that would be
simplest - as long as our PAPR usage isn't implying semantics which
contradict what it does on powernv.  AIUI Cédric thought it did that,
but with further discussion it seems like that might have been a
misunderstanding incorrectly conflating chip-id with NUMA nodes.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH 3/3] powerpc/smp: Cache CPU to chip lookup
  2021-04-16 15:57       ` Gautham R Shenoy
  2021-04-16 16:57         ` Srikar Dronamraju
@ 2021-04-19  1:19         ` David Gibson
  1 sibling, 0 replies; 16+ messages in thread
From: David Gibson @ 2021-04-19  1:19 UTC (permalink / raw)
  To: Gautham R Shenoy
  Cc: Nathan Lynch, Srikar Dronamraju, Peter Zijlstra,
	Daniel Henrique Barboza, Valentin Schneider, qemu-ppc,
	Cedric Le Goater, linuxppc-dev, Ingo Molnar


On Fri, Apr 16, 2021 at 09:27:48PM +0530, Gautham R Shenoy wrote:
> On Thu, Apr 15, 2021 at 11:21:10PM +0530, Srikar Dronamraju wrote:
> > * Gautham R Shenoy <ego@linux.vnet.ibm.com> [2021-04-15 22:49:21]:
> > 
> > > > 
> > > > +int *chip_id_lookup_table;
> > > > +
> > > >  #ifdef CONFIG_PPC64
> > > >  int __initdata iommu_is_off;
> > > >  int __initdata iommu_force_on;
> > > > @@ -914,13 +916,22 @@ EXPORT_SYMBOL(of_get_ibm_chip_id);
> > > >  int cpu_to_chip_id(int cpu)
> > > >  {
> > > >  	struct device_node *np;
> > > > +	int ret = -1, idx;
> > > > +
> > > > +	idx = cpu / threads_per_core;
> > > > +	if (chip_id_lookup_table && chip_id_lookup_table[idx] != -1)
> > > 
> > 
> > > The value -1 is ambiguous since we won't be able to determine if
> > > it is because we haven't yet made a of_get_ibm_chip_id() call
> > > or if of_get_ibm_chip_id() call was made and it returned a -1.
> > > 
> > 
> > We don't allocate chip_id_lookup_table unless cpu_to_chip_id() return
> > !-1 value for the boot-cpuid. So this ensures that we dont
> > unnecessarily allocate chip_id_lookup_table. Also I check for
> > chip_id_lookup_table before calling cpu_to_chip_id() for other CPUs.
> > So this avoids overhead of calling cpu_to_chip_id() for platforms that
> > dont support it.  Also its most likely that if the
> > chip_id_lookup_table is initialized then of_get_ibm_chip_id() call
> > would return a valid value.
> > 
> > + Below we are only populating the lookup table, only when the
> > of_get_cpu_node is valid.
> > 
> > So I dont see any drawbacks of initializing it to -1. Do you see
> any?
> 
> 
> Only if other callers of cpu_to_chip_id() don't check whether
> chip_id_lookup_table has been allocated or not. From a code
> readability point of view, it is easier to have that check inside
> cpu_to_chip_id() than to require all its callers to make it.

Even if they do, and the bad invalid value should never be read, I
think it's worth initializing it that way. It means that if there's a
mistake and we do accidentally read the value, the error is likely to
be much clearer. Likewise, if someone looks at this value from a
debugger, it will be clearer what's going on.

> 
> > 
> > > Thus, perhaps we can initialize chip_id_lookup_table[idx] with a
> > > different unique negative value. How about S32_MIN ? and check
> > > chip_id_lookup_table[idx] is different here ?
> > > 
> > 
> > I had initially initialized to -2, But then I thought we adding in
> > more confusion than necessary and it was not solving any issues.
> > 
> > 
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH 0/3] Reintroduce cpu_core_mask
  2021-04-15 12:09 [PATCH 0/3] Reintroduce cpu_core_mask Srikar Dronamraju
                   ` (3 preceding siblings ...)
  2021-04-15 12:17 ` [PATCH 0/3] Reintroduce cpu_core_mask Daniel Henrique Barboza
@ 2021-04-19  4:00 ` Michael Ellerman
  4 siblings, 0 replies; 16+ messages in thread
From: Michael Ellerman @ 2021-04-19  4:00 UTC (permalink / raw)
  To: Srikar Dronamraju, Michael Ellerman; +Cc: linuxppc-dev

On Thu, 15 Apr 2021 17:39:31 +0530, Srikar Dronamraju wrote:
> Daniel reported that QEMU is no longer able to see requested
>  topologies in multi-socket, single NUMA node configurations such as:
>  -smp 8,maxcpus=8,cores=2,threads=2,sockets=2
> 
> This patchset reintroduces cpu_core_mask so that users can see requested
> topologies while still maintaining the boot time of very large system
> configurations.
> 
> [...]

Applied to powerpc/next.

[1/3] powerpc/smp: Reintroduce cpu_core_mask
      https://git.kernel.org/powerpc/c/c47f892d7aa62765bf0689073f75990b4517a4cf
[2/3] Revert "powerpc/topology: Update topology_core_cpumask"
      https://git.kernel.org/powerpc/c/131c82b6a1d261705a6f98368e501d43d994018d
[3/3] powerpc/smp: Cache CPU to chip lookup
      https://git.kernel.org/powerpc/c/c1e53367dab15e41814cff4e37df8ec4ac8fb9d7

cheers

