* [patch v5 00/19] x86/cpu: Rework topology evaluation
@ 2024-01-23 12:53 Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 01/19] x86/cpu: Provide cpuid_read() et al Thomas Gleixner
                   ` (19 more replies)
  0 siblings, 20 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

This is a follow up on V4 of this work:

  https://lore.kernel.org/all/20230814085006.593997112@linutronix.de

and contains only the not yet applied part which reworks the CPUID
parsing. This is also preparatory work for the general overhaul of APIC ID
enumeration and management.

Changes vs. V4:

  - Add DIEGRP level explicitly

This applies on Linus tree and is available from git:

  git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git topo-cpuid-v5

Thanks,

	tglx
---
 arch/x86/kernel/cpu/topology.c          |  167 ----------------------
 b/arch/x86/events/amd/core.c            |    2 
 b/arch/x86/include/asm/apic.h           |    1 
 b/arch/x86/include/asm/cpuid.h          |   36 ++++
 b/arch/x86/include/asm/processor.h      |    5 
 b/arch/x86/include/asm/topology.h       |   39 +++++
 b/arch/x86/kernel/amd_nb.c              |    4 
 b/arch/x86/kernel/apic/apic_flat_64.c   |    7 
 b/arch/x86/kernel/apic/apic_noop.c      |    3 
 b/arch/x86/kernel/apic/apic_numachip.c  |    7 
 b/arch/x86/kernel/apic/bigsmp_32.c      |    6 
 b/arch/x86/kernel/apic/local.h          |    1 
 b/arch/x86/kernel/apic/probe_32.c       |    6 
 b/arch/x86/kernel/apic/x2apic_cluster.c |    1 
 b/arch/x86/kernel/apic/x2apic_phys.c    |    6 
 b/arch/x86/kernel/apic/x2apic_uv_x.c    |   63 +-------
 b/arch/x86/kernel/cpu/Makefile          |    3 
 b/arch/x86/kernel/cpu/amd.c             |  146 -------------------
 b/arch/x86/kernel/cpu/cacheinfo.c       |    6 
 b/arch/x86/kernel/cpu/centaur.c         |    4 
 b/arch/x86/kernel/cpu/common.c          |   91 +-----------
 b/arch/x86/kernel/cpu/cpu.h             |   13 -
 b/arch/x86/kernel/cpu/debugfs.c         |   40 +++++
 b/arch/x86/kernel/cpu/hygon.c           |  129 -----------------
 b/arch/x86/kernel/cpu/intel.c           |   25 ---
 b/arch/x86/kernel/cpu/mce/amd.c         |    4 
 b/arch/x86/kernel/cpu/mce/inject.c      |    7 
 b/arch/x86/kernel/cpu/topology.h        |   56 +++++++
 b/arch/x86/kernel/cpu/topology_amd.c    |  182 ++++++++++++++++++++++++
 b/arch/x86/kernel/cpu/topology_common.c |  241 ++++++++++++++++++++++++++++++++
 b/arch/x86/kernel/cpu/topology_ext.c    |  130 +++++++++++++++++
 b/arch/x86/kernel/cpu/zhaoxin.c         |    4 
 b/arch/x86/kernel/smpboot.c             |   12 +
 b/arch/x86/kernel/vsmp_64.c             |   13 -
 b/arch/x86/mm/amdtopology.c             |   35 ++--
 b/arch/x86/xen/apic.c                   |    6 
 b/arch/x86/xen/smp_pv.c                 |    3 
 b/drivers/edac/amd64_edac.c             |    4 
 b/drivers/edac/mce_amd.c                |    4 
 39 files changed, 792 insertions(+), 720 deletions(-)




* [patch v5 01/19] x86/cpu: Provide cpuid_read() et al.
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-24 12:25   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 02/19] x86/cpu: Provide cpu_init/parse_topology() Thomas Gleixner
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

Provide a few helper functions to read CPUID leafs or individual registers
into a data structure without requiring unions.
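
For illustration, a later patch in this series reads CPUID leaf 0x1 EBX
straight into a bitfield struct. A condensed sketch of that usage (the
struct below is just an example target, not part of this patch):

	struct {
		u32	unused0	: 16,
			nproc	:  8,
			apicid	:  8;
	} ebx;

	/* Read only EBX of leaf 0x1 into the 4 byte bitfield struct */
	cpuid_leaf_reg(0x1, CPUID_EBX, &ebx);

The BUILD_BUG_ON() checks in the macros enforce that the target object is
exactly 4 bytes for the single register variants and 16 bytes for the full
leaf variants.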

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Zhang Rui <rui.zhang@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>


---
 arch/x86/include/asm/cpuid.h |   36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)
---
--- a/arch/x86/include/asm/cpuid.h
+++ b/arch/x86/include/asm/cpuid.h
@@ -127,6 +127,42 @@ static inline unsigned int cpuid_edx(uns
 	return edx;
 }
 
+static inline void __cpuid_read(unsigned int leaf, unsigned int subleaf, u32 *regs)
+{
+	regs[CPUID_EAX] = leaf;
+	regs[CPUID_ECX] = subleaf;
+	__cpuid(regs, regs + 1, regs + 2, regs + 3);
+}
+
+#define cpuid_subleaf(leaf, subleaf, regs) {		\
+	BUILD_BUG_ON(sizeof(*(regs)) != 16);		\
+	__cpuid_read(leaf, subleaf, (u32 *)(regs));	\
+}
+
+#define cpuid_leaf(leaf, regs) {			\
+	BUILD_BUG_ON(sizeof(*(regs)) != 16);		\
+	__cpuid_read(leaf, 0, (u32 *)(regs));		\
+}
+
+static inline void __cpuid_read_reg(unsigned int leaf, unsigned int subleaf,
+				    enum cpuid_regs_idx regidx, u32 *reg)
+{
+	u32 regs[4];
+
+	__cpuid_read(leaf, subleaf, regs);
+	*reg = regs[regidx];
+}
+
+#define cpuid_subleaf_reg(leaf, subleaf, regidx, reg) {		\
+	BUILD_BUG_ON(sizeof(*(reg)) != 4);			\
+	__cpuid_read_reg(leaf, subleaf, regidx, (u32 *)(reg));	\
+}
+
+#define cpuid_leaf_reg(leaf, regidx, reg) {			\
+	BUILD_BUG_ON(sizeof(*(reg)) != 4);			\
+	__cpuid_read_reg(leaf, 0, regidx, (u32 *)(reg));	\
+}
+
 static __always_inline bool cpuid_function_is_indexed(u32 function)
 {
 	switch (function) {



* [patch v5 02/19] x86/cpu: Provide cpu_init/parse_topology()
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 01/19] x86/cpu: Provide cpuid_read() et al Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-02-01 22:16   ` Sohil Mehta
  2024-01-23 12:53 ` [patch v5 03/19] x86/cpu: Add legacy topology parser Thomas Gleixner
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

Topology evaluation is a complete disaster and impenetrable mess. It's
scattered all over the place with some vendor implementations doing early
evaluation and some not. The most horrific part is the permanent
overwriting of smp_num_siblings and __max_die_per_package, instead of
establishing them once on the boot CPU and validating the result on the
APs.

The goals are:

  - One topology evaluation entry point

  - Proper sharing of pointlessly duplicated code

  - Proper structuring of the evaluation logic and preferences.

  - Evaluating important system wide information only once on the boot CPU

  - Making the 0xb/0x1f leaf parsing less convoluted and actually fixing
    the shortcomings of leaf 0x1f evaluation.

Start to consolidate the topology evaluation code by providing the entry
points for the early boot CPU evaluation and for the final parsing on the
boot CPU and the APs.

Move the trivial pieces into that new code:

   - The initialization of cpuinfo_x86::topo

   - The evaluation of CPUID leaf 1, which presets topo::initial_apicid

   - topo::apicid is set to topo::initial_apicid when invoked from early
     boot. When invoked for the final evaluation on the boot CPU it reads
     the actual APIC ID, which makes apic_get_initial_apicid() obsolete
     once everything is converted over.

Provide a temporary helper function topo_is_converted() which shields off the
not yet converted CPU vendors from invoking code which would break them.
This shielding covers all vendor CPUs which support SMP, but not the
historical pure UP ones as they only need the topology info init and
eventually the initial APIC initialization.

Provide two new members in cpuinfo_x86::topo to store the maximum number of
SMT siblings and the number of dies per package and add them to the debugfs
readout. These two members will be used to populate this information on the
boot CPU and to validate the APs against it.
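
As a worked example of the shift scheme (the values are made up for
illustration): with dom_shifts[SMT] = 1 and dom_shifts[CORE..PKG] = 4, the
helpers added below decompose APIC ID 0x1b as:

	pkg_id  = 0x1b >> 4         = 1
	core_id = (0x1b & 0xf) >> 1 = 5		/* package relative */
	thread  = 0x1b & 0x1        = 1

	dom_size[SMT] = 2, dom_size[CORE] = 8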

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Zhang Rui <rui.zhang@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>


---
 arch/x86/include/asm/topology.h       |   19 +++
 arch/x86/kernel/cpu/Makefile          |    3 
 arch/x86/kernel/cpu/common.c          |   24 +---
 arch/x86/kernel/cpu/cpu.h             |    6 +
 arch/x86/kernel/cpu/debugfs.c         |   38 ++++++
 arch/x86/kernel/cpu/topology.h        |   36 ++++++
 arch/x86/kernel/cpu/topology_common.c |  188 ++++++++++++++++++++++++++++++++++
 7 files changed, 296 insertions(+), 18 deletions(-)
 create mode 100644 arch/x86/kernel/cpu/topology.h
 create mode 100644 arch/x86/kernel/cpu/topology_common.c
---
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -102,6 +102,25 @@ static inline void setup_node_to_cpumask
 
 #include <asm-generic/topology.h>
 
+/* Topology information */
+enum x86_topology_domains {
+	TOPO_SMT_DOMAIN,
+	TOPO_CORE_DOMAIN,
+	TOPO_MODULE_DOMAIN,
+	TOPO_TILE_DOMAIN,
+	TOPO_DIE_DOMAIN,
+	TOPO_DIEGRP_DOMAIN,
+	TOPO_PKG_DOMAIN,
+	TOPO_MAX_DOMAIN,
+};
+
+struct x86_topology_system {
+	unsigned int	dom_shifts[TOPO_MAX_DOMAIN];
+	unsigned int	dom_size[TOPO_MAX_DOMAIN];
+};
+
+extern struct x86_topology_system x86_topo_system;
+
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
 extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -17,7 +17,8 @@ KMSAN_SANITIZE_common.o := n
 # As above, instrumenting secondary CPU boot code causes boot hangs.
 KCSAN_SANITIZE_common.o := n
 
-obj-y			:= cacheinfo.o scattered.o topology.o
+obj-y			:= cacheinfo.o scattered.o
+obj-y			+= topology_common.o topology.o
 obj-y			+= common.o
 obj-y			+= rdrand.o
 obj-y			+= match.o
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1590,6 +1590,8 @@ static void __init early_identify_cpu(st
 		setup_force_cpu_cap(X86_FEATURE_CPUID);
 		cpu_parse_early_param();
 
+		cpu_init_topology(c);
+
 		if (this_cpu->c_early_init)
 			this_cpu->c_early_init(c);
 
@@ -1600,6 +1602,7 @@ static void __init early_identify_cpu(st
 			this_cpu->c_bsp_init(c);
 	} else {
 		setup_clear_cpu_cap(X86_FEATURE_CPUID);
+		cpu_init_topology(c);
 	}
 
 	get_cpu_address_sizes(c);
@@ -1747,18 +1750,6 @@ static void generic_identify(struct cpui
 
 	get_cpu_address_sizes(c);
 
-	if (c->cpuid_level >= 0x00000001) {
-		c->topo.initial_apicid = (cpuid_ebx(1) >> 24) & 0xFF;
-#ifdef CONFIG_X86_32
-# ifdef CONFIG_SMP
-		c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
-# else
-		c->topo.apicid = c->topo.initial_apicid;
-# endif
-#endif
-		c->topo.pkg_id = c->topo.initial_apicid;
-	}
-
 	get_model_name(c); /* Default name */
 
 	/*
@@ -1817,9 +1808,6 @@ static void identify_cpu(struct cpuinfo_
 	c->x86_model_id[0] = '\0';  /* Unset */
 	c->x86_max_cores = 1;
 	c->x86_coreid_bits = 0;
-	c->topo.cu_id = 0xff;
-	c->topo.llc_id = BAD_APICID;
-	c->topo.l2c_id = BAD_APICID;
 #ifdef CONFIG_X86_64
 	c->x86_clflush_size = 64;
 	c->x86_phys_bits = 36;
@@ -1838,6 +1826,8 @@ static void identify_cpu(struct cpuinfo_
 
 	generic_identify(c);
 
+	cpu_parse_topology(c);
+
 	if (this_cpu->c_identify)
 		this_cpu->c_identify(c);
 
@@ -1845,10 +1835,10 @@ static void identify_cpu(struct cpuinfo_
 	apply_forced_caps(c);
 
 #ifdef CONFIG_X86_64
-	c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
+	if (!topo_is_converted(c))
+		c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
 #endif
 
-
 	/*
 	 * Set default APIC and TSC_DEADLINE MSR fencing flag. AMD and
 	 * Hygon will clear it in ->c_init() below.
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -2,6 +2,11 @@
 #ifndef ARCH_X86_CPU_H
 #define ARCH_X86_CPU_H
 
+#include <asm/cpu.h>
+#include <asm/topology.h>
+
+#include "topology.h"
+
 /* attempt to consolidate cpu attributes */
 struct cpu_dev {
 	const char	*c_vendor;
@@ -96,4 +101,5 @@ static inline bool spectre_v2_in_eibrs_m
 	       mode == SPECTRE_V2_EIBRS_RETPOLINE ||
 	       mode == SPECTRE_V2_EIBRS_LFENCE;
 }
+
 #endif /* ARCH_X86_CPU_H */
--- a/arch/x86/kernel/cpu/debugfs.c
+++ b/arch/x86/kernel/cpu/debugfs.c
@@ -5,6 +5,8 @@
 #include <asm/apic.h>
 #include <asm/processor.h>
 
+#include "cpu.h"
+
 static int cpu_debug_show(struct seq_file *m, void *p)
 {
 	unsigned long cpu = (unsigned long)m->private;
@@ -42,12 +44,48 @@ static const struct file_operations dfs_
 	.release	= single_release,
 };
 
+static int dom_debug_show(struct seq_file *m, void *p)
+{
+	static const char *domain_names[TOPO_MAX_DOMAIN] = {
+		[TOPO_SMT_DOMAIN]	= "Thread",
+		[TOPO_CORE_DOMAIN]	= "Core",
+		[TOPO_MODULE_DOMAIN]	= "Module",
+		[TOPO_TILE_DOMAIN]	= "Tile",
+		[TOPO_DIE_DOMAIN]	= "Die",
+		[TOPO_DIEGRP_DOMAIN]	= "DieGrp",
+		[TOPO_PKG_DOMAIN]	= "Package",
+	};
+	unsigned int dom, nthreads = 1;
+
+	for (dom = 0; dom < TOPO_MAX_DOMAIN; dom++) {
+		nthreads *= x86_topo_system.dom_size[dom];
+		seq_printf(m, "domain: %-10s shift: %u dom_size: %5u max_threads: %5u\n",
+			   domain_names[dom], x86_topo_system.dom_shifts[dom],
+			   x86_topo_system.dom_size[dom], nthreads);
+	}
+	return 0;
+}
+
+static int dom_debug_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, dom_debug_show, inode->i_private);
+}
+
+static const struct file_operations dfs_dom_ops = {
+	.open		= dom_debug_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
 static __init int cpu_init_debugfs(void)
 {
 	struct dentry *dir, *base = debugfs_create_dir("topo", arch_debugfs_dir);
 	unsigned long id;
 	char name[24];
 
+	debugfs_create_file("domains", 0444, base, NULL, &dfs_dom_ops);
+
 	dir = debugfs_create_dir("cpus", base);
 	for_each_possible_cpu(id) {
 		sprintf(name, "%lu", id);
--- /dev/null
+++ b/arch/x86/kernel/cpu/topology.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef ARCH_X86_TOPOLOGY_H
+#define ARCH_X86_TOPOLOGY_H
+
+struct topo_scan {
+	struct cpuinfo_x86	*c;
+	unsigned int		dom_shifts[TOPO_MAX_DOMAIN];
+	unsigned int		dom_ncpus[TOPO_MAX_DOMAIN];
+};
+
+bool topo_is_converted(struct cpuinfo_x86 *c);
+void cpu_init_topology(struct cpuinfo_x86 *c);
+void cpu_parse_topology(struct cpuinfo_x86 *c);
+void topology_set_dom(struct topo_scan *tscan, enum x86_topology_domains dom,
+		      unsigned int shift, unsigned int ncpus);
+
+static inline u32 topo_shift_apicid(u32 apicid, enum x86_topology_domains dom)
+{
+	if (dom == TOPO_SMT_DOMAIN)
+		return apicid;
+	return apicid >> x86_topo_system.dom_shifts[dom - 1];
+}
+
+static inline u32 topo_relative_domain_id(u32 apicid, enum x86_topology_domains dom)
+{
+	if (dom != TOPO_SMT_DOMAIN)
+		apicid >>= x86_topo_system.dom_shifts[dom - 1];
+	return apicid & (x86_topo_system.dom_size[dom] - 1);
+}
+
+static inline u32 topo_domain_mask(enum x86_topology_domains dom)
+{
+	return (1U << x86_topo_system.dom_shifts[dom]) - 1;
+}
+
+#endif /* ARCH_X86_TOPOLOGY_H */
--- /dev/null
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/cpu.h>
+
+#include <xen/xen.h>
+
+#include <asm/apic.h>
+#include <asm/processor.h>
+#include <asm/smp.h>
+
+#include "cpu.h"
+
+struct x86_topology_system x86_topo_system __ro_after_init;
+
+void topology_set_dom(struct topo_scan *tscan, enum x86_topology_domains dom,
+		      unsigned int shift, unsigned int ncpus)
+{
+	tscan->dom_shifts[dom] = shift;
+	tscan->dom_ncpus[dom] = ncpus;
+
+	/* Propagate to the upper levels */
+	for (dom++; dom < TOPO_MAX_DOMAIN; dom++) {
+		tscan->dom_shifts[dom] = tscan->dom_shifts[dom - 1];
+		tscan->dom_ncpus[dom] = tscan->dom_ncpus[dom - 1];
+	}
+}
+
+bool topo_is_converted(struct cpuinfo_x86 *c)
+{
+	/* Temporary until everything is converted over. */
+	switch (boot_cpu_data.x86_vendor) {
+	case X86_VENDOR_AMD:
+	case X86_VENDOR_CENTAUR:
+	case X86_VENDOR_INTEL:
+	case X86_VENDOR_HYGON:
+	case X86_VENDOR_ZHAOXIN:
+		return false;
+	default:
+		/* Let all UP systems use the below */
+		return true;
+	}
+}
+
+static bool fake_topology(struct topo_scan *tscan)
+{
+	/*
+	 * Preset the CORE level shift for CPUID less systems and XEN_PV,
+	 * which has useless CPUID information.
+	 */
+	topology_set_dom(tscan, TOPO_SMT_DOMAIN, 0, 1);
+	topology_set_dom(tscan, TOPO_CORE_DOMAIN, 1, 1);
+
+	return tscan->c->cpuid_level < 1 || xen_pv_domain();
+}
+
+static void parse_topology(struct topo_scan *tscan, bool early)
+{
+	const struct cpuinfo_topology topo_defaults = {
+		.cu_id			= 0xff,
+		.llc_id			= BAD_APICID,
+		.l2c_id			= BAD_APICID,
+	};
+	struct cpuinfo_x86 *c = tscan->c;
+	struct {
+		u32	unused0		: 16,
+			nproc		:  8,
+			apicid		:  8;
+	} ebx;
+
+	c->topo = topo_defaults;
+
+	if (fake_topology(tscan))
+	    return;
+
+	/* Preset Initial APIC ID from CPUID leaf 1 */
+	cpuid_leaf_reg(1, CPUID_EBX, &ebx);
+	c->topo.initial_apicid = ebx.apicid;
+
+	/*
+	 * The initial invocation from early_identify_cpu() happens before
+	 * the APIC is mapped or X2APIC enabled. For establishing the
+	 * topology, that's not required. Use the initial APIC ID.
+	 */
+	if (early)
+		c->topo.apicid = c->topo.initial_apicid;
+	else
+		c->topo.apicid = read_apic_id();
+
+	/* The above is sufficient for UP */
+	if (!IS_ENABLED(CONFIG_SMP))
+		return;
+}
+
+static void topo_set_ids(struct topo_scan *tscan)
+{
+	struct cpuinfo_x86 *c = tscan->c;
+	u32 apicid = c->topo.apicid;
+
+	c->topo.pkg_id = topo_shift_apicid(apicid, TOPO_PKG_DOMAIN);
+	c->topo.die_id = topo_shift_apicid(apicid, TOPO_DIE_DOMAIN);
+
+	/* Package relative core ID */
+	c->topo.core_id = (apicid & topo_domain_mask(TOPO_PKG_DOMAIN)) >>
+		x86_topo_system.dom_shifts[TOPO_SMT_DOMAIN];
+}
+
+static void topo_set_max_cores(struct topo_scan *tscan)
+{
+	/*
+	 * Bug compatible for now. This is broken on hybrid systems:
+	 * 8 cores SMT + 8 cores w/o SMT
+	 * tscan.dom_ncpus[TOPO_DIEGRP_DOMAIN] = 24; 24 / 2 = 12 !!
+	 *
+	 * Cannot be fixed without further topology enumeration changes.
+	 */
+	tscan->c->x86_max_cores = tscan->dom_ncpus[TOPO_DIEGRP_DOMAIN] >>
+		x86_topo_system.dom_shifts[TOPO_SMT_DOMAIN];
+}
+
+void cpu_parse_topology(struct cpuinfo_x86 *c)
+{
+	unsigned int dom, cpu = smp_processor_id();
+	struct topo_scan tscan = { .c = c, };
+
+	parse_topology(&tscan, false);
+
+	if (!topo_is_converted(c))
+		return;
+
+	for (dom = TOPO_SMT_DOMAIN; dom < TOPO_MAX_DOMAIN; dom++) {
+		if (tscan.dom_shifts[dom] == x86_topo_system.dom_shifts[dom])
+			continue;
+		pr_err(FW_BUG "CPU%d: Topology domain %u shift %u != %u\n", cpu, dom,
+		       tscan.dom_shifts[dom], x86_topo_system.dom_shifts[dom]);
+	}
+
+	/* Bug compatible with the existing parsers */
+	if (tscan.dom_ncpus[TOPO_SMT_DOMAIN] > smp_num_siblings) {
+		if (system_state == SYSTEM_BOOTING) {
+			pr_warn_once("CPU%d: SMT detected and enabled late\n", cpu);
+			smp_num_siblings = tscan.dom_ncpus[TOPO_SMT_DOMAIN];
+		} else {
+			pr_warn_once("CPU%d: SMT detected after init. Too late!\n", cpu);
+		}
+	}
+
+	topo_set_ids(&tscan);
+	topo_set_max_cores(&tscan);
+}
+
+void __init cpu_init_topology(struct cpuinfo_x86 *c)
+{
+	struct topo_scan tscan = { .c = c, };
+	unsigned int dom, sft;
+
+	parse_topology(&tscan, true);
+
+	if (!topo_is_converted(c))
+		return;
+
+	/* Copy the shift values and calculate the unit sizes. */
+	memcpy(x86_topo_system.dom_shifts, tscan.dom_shifts, sizeof(x86_topo_system.dom_shifts));
+
+	dom = TOPO_SMT_DOMAIN;
+	x86_topo_system.dom_size[dom] = 1U << x86_topo_system.dom_shifts[dom];
+
+	for (dom++; dom < TOPO_MAX_DOMAIN; dom++) {
+		sft = x86_topo_system.dom_shifts[dom] - x86_topo_system.dom_shifts[dom - 1];
+		x86_topo_system.dom_size[dom] = 1U << sft;
+	}
+
+	topo_set_ids(&tscan);
+	topo_set_max_cores(&tscan);
+
+	/*
+	 * Bug compatible with the existing code. If the boot CPU does not
+	 * have SMT this ends up with one sibling. This needs way deeper
+	 * changes further down the road to get it right during early boot.
+	 */
+	smp_num_siblings = tscan.dom_ncpus[TOPO_SMT_DOMAIN];
+
+	/*
+	 * It's not clear whether there are as many dies as the APIC space
+	 * for the die level indicates. But assume that the actual number
+	 * of CPUs gives a proper indication for now to stay bug compatible.
+	 */
+	__max_die_per_package = tscan.dom_ncpus[TOPO_DIE_DOMAIN] /
+		tscan.dom_ncpus[TOPO_DIE_DOMAIN - 1];
+}



* [patch v5 03/19] x86/cpu: Add legacy topology parser
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 01/19] x86/cpu: Provide cpuid_read() et al Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 02/19] x86/cpu: Provide cpu_init/parse_topology() Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-24 20:12   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 04/19] x86/cpu: Use common topology code for Centaur and Zhaoxin Thomas Gleixner
                   ` (16 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

The legacy topology detection via CPUID leaf 4, which provides the number
of cores in the package, and CPUID leaf 1, which provides the number of
logical CPUs when FEATURE_HT is enabled and the CMP_LEGACY feature is not
set, is shared by Intel, Centaur and Zhaoxin CPUs.

Lift the code from common.c without the early detection hack and provide it
as a common fallback mechanism.

Will be utilized in later changes.
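
For illustration, on a hypothetical CPU which advertises 8 logical
processors in CPUID(1):EBX[23:16] and 4 cores in leaf 4, the parser ends
up with:

	ebx1_nproc_shift = get_count_order(8) = 3
	core_shift       = get_count_order(4) = 2
	smt_shift        = 3 - 2              = 1

	SMT  domain: shift 1, 2 threads
	CORE domain: shift 3, 8 threads		/* counted in threads, 0xb/0x1f style */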

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/kernel/cpu/common.c          |    3 ++
 arch/x86/kernel/cpu/topology.h        |    3 ++
 arch/x86/kernel/cpu/topology_common.c |   46 +++++++++++++++++++++++++++++++++-
 3 files changed, 51 insertions(+), 1 deletion(-)
---
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -891,6 +891,9 @@ void detect_ht(struct cpuinfo_x86 *c)
 #ifdef CONFIG_SMP
 	int index_msb, core_bits;
 
+	if (topo_is_converted(c))
+		return;
+
 	if (detect_ht_early(c) < 0)
 		return;
 
--- a/arch/x86/kernel/cpu/topology.h
+++ b/arch/x86/kernel/cpu/topology.h
@@ -6,6 +6,9 @@ struct topo_scan {
 	struct cpuinfo_x86	*c;
 	unsigned int		dom_shifts[TOPO_MAX_DOMAIN];
 	unsigned int		dom_ncpus[TOPO_MAX_DOMAIN];
+
+	// Legacy CPUID[1]:EBX[23:16] number of logical processors
+	unsigned int		ebx1_nproc_shift;
 };
 
 bool topo_is_converted(struct cpuinfo_x86 *c);
--- a/arch/x86/kernel/cpu/topology_common.c
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -24,6 +24,48 @@ void topology_set_dom(struct topo_scan *
 	}
 }
 
+static unsigned int parse_num_cores(struct cpuinfo_x86 *c)
+{
+	struct {
+		u32	cache_type	:  5,
+			unused		: 21,
+			ncores		:  6;
+	} eax;
+
+	if (c->cpuid_level < 4)
+		return 1;
+
+	cpuid_subleaf_reg(4, 0, CPUID_EAX, &eax);
+	if (!eax.cache_type)
+		return 1;
+
+	return eax.ncores + 1;
+}
+
+static void __maybe_unused parse_legacy(struct topo_scan *tscan)
+{
+	unsigned int cores, core_shift, smt_shift = 0;
+	struct cpuinfo_x86 *c = tscan->c;
+
+	cores = parse_num_cores(c);
+	core_shift = get_count_order(cores);
+
+	if (cpu_has(c, X86_FEATURE_HT)) {
+		if (!WARN_ON_ONCE(tscan->ebx1_nproc_shift < core_shift))
+			smt_shift = tscan->ebx1_nproc_shift - core_shift;
+		/*
+		 * The parser expects leaf 0xb/0x1f format, which means
+		 * the number of logical processors at core level is
+		 * counting threads.
+		 */
+		core_shift += smt_shift;
+		cores <<= smt_shift;
+	}
+
+	topology_set_dom(tscan, TOPO_SMT_DOMAIN, smt_shift, 1U << smt_shift);
+	topology_set_dom(tscan, TOPO_CORE_DOMAIN, core_shift, cores);
+}
+
 bool topo_is_converted(struct cpuinfo_x86 *c)
 {
 	/* Temporary until everything is converted over. */
@@ -47,7 +89,7 @@ static bool fake_topology(struct topo_sc
 	 * which has useless CPUID information.
 	 */
 	topology_set_dom(tscan, TOPO_SMT_DOMAIN, 0, 1);
-	topology_set_dom(tscan, TOPO_CORE_DOMAIN, 1, 1);
+	topology_set_dom(tscan, TOPO_CORE_DOMAIN, 0, 1);
 
 	return tscan->c->cpuid_level < 1 || xen_pv_domain();
 }
@@ -88,6 +130,8 @@ static void parse_topology(struct topo_s
 	/* The above is sufficient for UP */
 	if (!IS_ENABLED(CONFIG_SMP))
 		return;
+
+	tscan->ebx1_nproc_shift = get_count_order(ebx.nproc);
 }
 
 static void topo_set_ids(struct topo_scan *tscan)



* [patch v5 04/19] x86/cpu: Use common topology code for Centaur and Zhaoxin
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (2 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 03/19] x86/cpu: Add legacy topology parser Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-30 19:09   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 05/19] x86/cpu: Move __max_die_per_package to common.c Thomas Gleixner
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

Centaur and Zhaoxin CPUs use only the legacy SMP detection. Remove the
invocations from their 32bit paths and exempt them from the call on 64bit.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/kernel/cpu/centaur.c         |    4 ----
 arch/x86/kernel/cpu/topology_common.c |   11 ++++++++---
 arch/x86/kernel/cpu/zhaoxin.c         |    4 ----
 3 files changed, 8 insertions(+), 11 deletions(-)
---
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -128,10 +128,6 @@ static void init_centaur(struct cpuinfo_
 #endif
 	early_init_centaur(c);
 	init_intel_cacheinfo(c);
-	detect_num_cpu_cores(c);
-#ifdef CONFIG_X86_32
-	detect_ht(c);
-#endif
 
 	if (c->cpuid_level > 9) {
 		unsigned int eax = cpuid_eax(10);
--- a/arch/x86/kernel/cpu/topology_common.c
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -42,7 +42,7 @@ static unsigned int parse_num_cores(stru
 	return eax.ncores + 1;
 }
 
-static void __maybe_unused parse_legacy(struct topo_scan *tscan)
+static void parse_legacy(struct topo_scan *tscan)
 {
 	unsigned int cores, core_shift, smt_shift = 0;
 	struct cpuinfo_x86 *c = tscan->c;
@@ -71,10 +71,8 @@ bool topo_is_converted(struct cpuinfo_x8
 	/* Temporary until everything is converted over. */
 	switch (boot_cpu_data.x86_vendor) {
 	case X86_VENDOR_AMD:
-	case X86_VENDOR_CENTAUR:
 	case X86_VENDOR_INTEL:
 	case X86_VENDOR_HYGON:
-	case X86_VENDOR_ZHAOXIN:
 		return false;
 	default:
 		/* Let all UP systems use the below */
@@ -132,6 +130,13 @@ static void parse_topology(struct topo_s
 		return;
 
 	tscan->ebx1_nproc_shift = get_count_order(ebx.nproc);
+
+	switch (c->x86_vendor) {
+	case X86_VENDOR_CENTAUR:
+	case X86_VENDOR_ZHAOXIN:
+		parse_legacy(tscan);
+		break;
+	}
 }
 
 static void topo_set_ids(struct topo_scan *tscan)
--- a/arch/x86/kernel/cpu/zhaoxin.c
+++ b/arch/x86/kernel/cpu/zhaoxin.c
@@ -71,10 +71,6 @@ static void init_zhaoxin(struct cpuinfo_
 {
 	early_init_zhaoxin(c);
 	init_intel_cacheinfo(c);
-	detect_num_cpu_cores(c);
-#ifdef CONFIG_X86_32
-	detect_ht(c);
-#endif
 
 	if (c->cpuid_level > 9) {
 		unsigned int eax = cpuid_eax(10);



* [patch v5 05/19] x86/cpu: Move __max_die_per_package to common.c
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (3 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 04/19] x86/cpu: Use common topology code for Centaur and Zhaoxin Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser Thomas Gleixner
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

In preparation for the complete replacement of the topology leaf 0xb/0x1f
evaluation, move __max_die_per_package into the common code.

Will be removed once everything is converted over.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/kernel/cpu/common.c   |    3 +++
 arch/x86/kernel/cpu/topology.c |    3 ---
 2 files changed, 3 insertions(+), 3 deletions(-)
---
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -75,6 +75,9 @@ u32 elf_hwcap2 __read_mostly;
 int smp_num_siblings = 1;
 EXPORT_SYMBOL(smp_num_siblings);
 
+unsigned int __max_die_per_package __read_mostly = 1;
+EXPORT_SYMBOL(__max_die_per_package);
+
 static struct ppin_info {
 	int	feature;
 	int	msr_ppin_ctl;
--- a/arch/x86/kernel/cpu/topology.c
+++ b/arch/x86/kernel/cpu/topology.c
@@ -25,9 +25,6 @@
 #define BITS_SHIFT_NEXT_LEVEL(eax)	((eax) & 0x1f)
 #define LEVEL_MAX_SIBLINGS(ebx)		((ebx) & 0xffff)
 
-unsigned int __max_die_per_package __read_mostly = 1;
-EXPORT_SYMBOL(__max_die_per_package);
-
 #ifdef CONFIG_SMP
 /*
  * Check if given CPUID extended topology "leaf" is implemented



* [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (4 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 05/19] x86/cpu: Move __max_die_per_package to common.c Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-30 19:31   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 07/19] x86/cpu: Use common topology code for Intel Thomas Gleixner
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

detect_extended_topology(), along with its early() variant, is a classic
example of duct tape engineering:

  - It evaluates an array of subleafs with a boatload of local variables
    for the relevant topology levels instead of using an array to save the
    enumerated information and propagate it to the right level

  - It has no boundary checks for subleafs

  - It prevents updating the die_id with a crude workaround instead of
    checking for leaf 0xb which does not provide die information.

  - It's broken vs. the number of dies evaluation as it uses:

      num_processors[DIE_LEVEL] / num_processors[CORE_LEVEL]

    which "works" correctly only if none of the intermediate topology
    levels (MODULE/TILE) are enumerated.

There is zero value in trying to "fix" that code as the only proper fix is
to rewrite it from scratch.

Implement a sane parser with proper code documentation, which will be used
for the consolidated topology evaluation in the next step.
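
For illustration, a hypothetical leaf 0x1f enumeration (values invented
for the example) and the resulting domain shifts:

	subleaf 0: type SMT,  x2apic_shift 1, num_processors  2
	subleaf 1: type Core, x2apic_shift 4, num_processors  8
	subleaf 2: type Die,  x2apic_shift 6, num_processors 32

	dom_shifts = { SMT = 1, CORE = 4, MODULE = 4, TILE = 4,
		       DIE = 6, DIEGRP = 6, PKG = 6 }

Each enumerated level is propagated to the levels above it, so the not
enumerated MODULE/TILE levels inherit the CORE shift and DIEGRP/PKG
inherit the DIE shift.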

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>

---
 arch/x86/kernel/cpu/Makefile       |    2 
 arch/x86/kernel/cpu/topology.h     |   12 +++
 arch/x86/kernel/cpu/topology_ext.c |  130 +++++++++++++++++++++++++++++++++++++
 3 files changed, 143 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/cpu/topology_ext.c
---
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -18,7 +18,7 @@ KMSAN_SANITIZE_common.o := n
 KCSAN_SANITIZE_common.o := n
 
 obj-y			:= cacheinfo.o scattered.o
-obj-y			+= topology_common.o topology.o
+obj-y			+= topology_common.o topology_ext.o topology.o
 obj-y			+= common.o
 obj-y			+= rdrand.o
 obj-y			+= match.o
--- a/arch/x86/kernel/cpu/topology.h
+++ b/arch/x86/kernel/cpu/topology.h
@@ -16,6 +16,7 @@ void cpu_init_topology(struct cpuinfo_x8
 void cpu_parse_topology(struct cpuinfo_x86 *c);
 void topology_set_dom(struct topo_scan *tscan, enum x86_topology_domains dom,
 		      unsigned int shift, unsigned int ncpus);
+bool cpu_parse_topology_ext(struct topo_scan *tscan);
 
 static inline u32 topo_shift_apicid(u32 apicid, enum x86_topology_domains dom)
 {
@@ -36,4 +37,15 @@ static inline u32 topo_domain_mask(enum
 	return (1U << x86_topo_system.dom_shifts[dom]) - 1;
 }
 
+/*
+ * Update a domain level after the fact without propagating. Used to fixup
+ * broken CPUID enumerations.
+ */
+static inline void topology_update_dom(struct topo_scan *tscan, enum x86_topology_domains dom,
+				       unsigned int shift, unsigned int ncpus)
+{
+	tscan->dom_shifts[dom] = shift;
+	tscan->dom_ncpus[dom] = ncpus;
+}
+
 #endif /* ARCH_X86_TOPOLOGY_H */
--- /dev/null
+++ b/arch/x86/kernel/cpu/topology_ext.c
@@ -0,0 +1,130 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/cpu.h>
+
+#include <asm/apic.h>
+#include <asm/memtype.h>
+#include <asm/processor.h>
+
+#include "cpu.h"
+
+enum topo_types {
+	INVALID_TYPE		= 0,
+	SMT_TYPE		= 1,
+	CORE_TYPE		= 2,
+	MAX_TYPE_0B		= 3,
+	MODULE_TYPE		= 3,
+	TILE_TYPE		= 4,
+	DIE_TYPE		= 5,
+	DIEGRP_TYPE		= 6,
+	MAX_TYPE_1F		= 7,
+};
+
+/*
+ * Use a lookup table for the case that there are future types > 6 which
+ * describe an intermediate domain level which does not exist today.
+ */
+static const unsigned int topo_domain_map_0b_1f[MAX_TYPE_1F] = {
+	[SMT_TYPE]	= TOPO_SMT_DOMAIN,
+	[CORE_TYPE]	= TOPO_CORE_DOMAIN,
+	[MODULE_TYPE]	= TOPO_MODULE_DOMAIN,
+	[TILE_TYPE]	= TOPO_TILE_DOMAIN,
+	[DIE_TYPE]	= TOPO_DIE_DOMAIN,
+	[DIEGRP_TYPE]	= TOPO_DIEGRP_DOMAIN,
+};
+
+static inline bool topo_subleaf(struct topo_scan *tscan, u32 leaf, u32 subleaf,
+				unsigned int *last_dom)
+{
+	unsigned int dom, maxtype;
+	const unsigned int *map;
+	struct {
+		// eax
+		u32	x2apic_shift	:  5, // Number of bits to shift APIC ID right
+					      // for the topology ID at the next level
+					: 27; // Reserved
+		// ebx
+		u32	num_processors	: 16, // Number of processors at current level
+					: 16; // Reserved
+		// ecx
+		u32	level		:  8, // Current topology level. Same as sub leaf number
+			type		:  8, // Level type. If 0, invalid
+					: 16; // Reserved
+		// edx
+		u32	x2apic_id	: 32; // X2APIC ID of the current logical processor
+	} sl;
+
+	switch (leaf) {
+	case 0x0b: maxtype = MAX_TYPE_0B; map = topo_domain_map_0b_1f; break;
+	case 0x1f: maxtype = MAX_TYPE_1F; map = topo_domain_map_0b_1f; break;
+	default: return false;
+	}
+
+	cpuid_subleaf(leaf, subleaf, &sl);
+
+	if (!sl.num_processors || sl.type == INVALID_TYPE)
+		return false;
+
+	if (sl.type >= maxtype) {
+		pr_err_once("Topology: leaf 0x%x:%d Unknown domain type %u\n",
+			    leaf, subleaf, sl.type);
+		/*
+		 * It really would have been too obvious to make the domain
+		 * type space sparse and leave a few reserved types between
+		 * the points which might change instead of following the
+		 * usual "this can be fixed in software" principle.
+		 */
+		dom = *last_dom + 1;
+	} else {
+		dom = map[sl.type];
+		*last_dom = dom;
+	}
+
+	if (!dom) {
+		tscan->c->topo.initial_apicid = sl.x2apic_id;
+	} else if (tscan->c->topo.initial_apicid != sl.x2apic_id) {
+		pr_warn_once(FW_BUG "CPUID leaf 0x%x subleaf %d APIC ID mismatch %x != %x\n",
+			     leaf, subleaf, tscan->c->topo.initial_apicid, sl.x2apic_id);
+	}
+
+	topology_set_dom(tscan, dom, sl.x2apic_shift, sl.num_processors);
+	return true;
+}
+
+static bool parse_topology_leaf(struct topo_scan *tscan, u32 leaf)
+{
+	unsigned int last_dom;
+	u32 subleaf;
+
+	/* Read all available subleafs and populate the levels */
+	for (subleaf = 0, last_dom = 0; topo_subleaf(tscan, leaf, subleaf, &last_dom); subleaf++);
+
+	/* If subleaf 0 failed to parse, give up */
+	if (!subleaf)
+		return false;
+
+	/*
+	 * There are machines in the wild which have shift 0 in the subleaf
+	 * 0, but advertise 2 logical processors at that level. They are
+	 * truly SMT.
+	 */
+	if (!tscan->dom_shifts[TOPO_SMT_DOMAIN] && tscan->dom_ncpus[TOPO_SMT_DOMAIN] > 1) {
+		unsigned int sft = get_count_order(tscan->dom_ncpus[TOPO_SMT_DOMAIN]);
+
+		pr_warn_once(FW_BUG "CPUID leaf 0x%x subleaf 0 has shift level 0 but %u CPUs\n",
+			     leaf, tscan->dom_ncpus[TOPO_SMT_DOMAIN]);
+		topology_update_dom(tscan, TOPO_SMT_DOMAIN, sft, tscan->dom_ncpus[TOPO_SMT_DOMAIN]);
+	}
+
+	set_cpu_cap(tscan->c, X86_FEATURE_XTOPOLOGY);
+	return true;
+}
+
+bool cpu_parse_topology_ext(struct topo_scan *tscan)
+{
+	/* Intel: Try leaf 0x1F first. */
+	if (tscan->c->cpuid_level >= 0x1f && parse_topology_leaf(tscan, 0x1f))
+		return true;
+
+	/* Intel/AMD: Fall back to leaf 0xB if available */
+	return tscan->c->cpuid_level >= 0x0b && parse_topology_leaf(tscan, 0x0b);
+}



* [patch v5 07/19] x86/cpu: Use common topology code for Intel
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (5 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-02-01 15:07   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 08/19] x86/cpu/amd: Provide a separate accessor for Node ID Thomas Gleixner
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

Intel CPUs use either topology leaf 0xb/0x1f evaluation or the legacy
SMP/HT evaluation based on CPUID leaf 0x1/0x4.

Move it over to the consolidated topology code and remove the random
topology hacks which are sprinkled into the Intel and the common code.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/kernel/cpu/common.c          |   65 ----------------------------------
 arch/x86/kernel/cpu/cpu.h             |    4 --
 arch/x86/kernel/cpu/intel.c           |   25 -------------
 arch/x86/kernel/cpu/topology_common.c |    5 ++
 4 files changed, 4 insertions(+), 95 deletions(-)
---
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -792,19 +792,6 @@ static void get_model_name(struct cpuinf
 	*(s + 1) = '\0';
 }
 
-void detect_num_cpu_cores(struct cpuinfo_x86 *c)
-{
-	unsigned int eax, ebx, ecx, edx;
-
-	c->x86_max_cores = 1;
-	if (!IS_ENABLED(CONFIG_SMP) || c->cpuid_level < 4)
-		return;
-
-	cpuid_count(4, 0, &eax, &ebx, &ecx, &edx);
-	if (eax & 0x1f)
-		c->x86_max_cores = (eax >> 26) + 1;
-}
-
 void cpu_detect_cache_sizes(struct cpuinfo_x86 *c)
 {
 	unsigned int n, dummy, ebx, ecx, edx, l2size;
@@ -866,54 +853,6 @@ static void cpu_detect_tlb(struct cpuinf
 		tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]);
 }
 
-int detect_ht_early(struct cpuinfo_x86 *c)
-{
-#ifdef CONFIG_SMP
-	u32 eax, ebx, ecx, edx;
-
-	if (!cpu_has(c, X86_FEATURE_HT))
-		return -1;
-
-	if (cpu_has(c, X86_FEATURE_CMP_LEGACY))
-		return -1;
-
-	if (cpu_has(c, X86_FEATURE_XTOPOLOGY))
-		return -1;
-
-	cpuid(1, &eax, &ebx, &ecx, &edx);
-
-	smp_num_siblings = (ebx & 0xff0000) >> 16;
-	if (smp_num_siblings == 1)
-		pr_info_once("CPU0: Hyper-Threading is disabled\n");
-#endif
-	return 0;
-}
-
-void detect_ht(struct cpuinfo_x86 *c)
-{
-#ifdef CONFIG_SMP
-	int index_msb, core_bits;
-
-	if (topo_is_converted(c))
-		return;
-
-	if (detect_ht_early(c) < 0)
-		return;
-
-	index_msb = get_count_order(smp_num_siblings);
-	c->topo.pkg_id = apic->phys_pkg_id(c->topo.initial_apicid, index_msb);
-
-	smp_num_siblings = smp_num_siblings / c->x86_max_cores;
-
-	index_msb = get_count_order(smp_num_siblings);
-
-	core_bits = get_count_order(c->x86_max_cores);
-
-	c->topo.core_id = apic->phys_pkg_id(c->topo.initial_apicid, index_msb) &
-		((1 << core_bits) - 1);
-#endif
-}
-
 static void get_cpu_vendor(struct cpuinfo_x86 *c)
 {
 	char *v = c->x86_vendor_id;
@@ -1898,10 +1837,6 @@ static void identify_cpu(struct cpuinfo_
 				c->x86, c->x86_model);
 	}
 
-#ifdef CONFIG_X86_64
-	detect_ht(c);
-#endif
-
 	x86_init_rdrand(c);
 	setup_pku(c);
 	setup_cet(c);
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -76,11 +76,7 @@ extern void init_intel_cacheinfo(struct
 extern void init_amd_cacheinfo(struct cpuinfo_x86 *c);
 extern void init_hygon_cacheinfo(struct cpuinfo_x86 *c);
 
-extern void detect_num_cpu_cores(struct cpuinfo_x86 *c);
-extern int detect_extended_topology_early(struct cpuinfo_x86 *c);
 extern int detect_extended_topology(struct cpuinfo_x86 *c);
-extern int detect_ht_early(struct cpuinfo_x86 *c);
-extern void detect_ht(struct cpuinfo_x86 *c);
 extern void check_null_seg_clears_base(struct cpuinfo_x86 *c);
 
 void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c);
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -315,13 +315,6 @@ static void early_init_intel(struct cpui
 	}
 
 	check_memory_type_self_snoop_errata(c);
-
-	/*
-	 * Get the number of SMT siblings early from the extended topology
-	 * leaf, if available. Otherwise try the legacy SMT detection.
-	 */
-	if (detect_extended_topology_early(c) < 0)
-		detect_ht_early(c);
 }
 
 static void bsp_init_intel(struct cpuinfo_x86 *c)
@@ -603,24 +596,6 @@ static void init_intel(struct cpuinfo_x8
 
 	intel_workarounds(c);
 
-	/*
-	 * Detect the extended topology information if available. This
-	 * will reinitialise the initial_apicid which will be used
-	 * in init_intel_cacheinfo()
-	 */
-	detect_extended_topology(c);
-
-	if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) {
-		/*
-		 * let's use the legacy cpuid vector 0x1 and 0x4 for topology
-		 * detection.
-		 */
-		detect_num_cpu_cores(c);
-#ifdef CONFIG_X86_32
-		detect_ht(c);
-#endif
-	}
-
 	init_intel_cacheinfo(c);
 
 	if (c->cpuid_level > 9) {
--- a/arch/x86/kernel/cpu/topology_common.c
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -71,7 +71,6 @@ bool topo_is_converted(struct cpuinfo_x8
 	/* Temporary until everything is converted over. */
 	switch (boot_cpu_data.x86_vendor) {
 	case X86_VENDOR_AMD:
-	case X86_VENDOR_INTEL:
 	case X86_VENDOR_HYGON:
 		return false;
 	default:
@@ -136,6 +135,10 @@ static void parse_topology(struct topo_s
 	case X86_VENDOR_ZHAOXIN:
 		parse_legacy(tscan);
 		break;
+	case X86_VENDOR_INTEL:
+		if (!IS_ENABLED(CONFIG_CPU_SUP_INTEL) || !cpu_parse_topology_ext(tscan))
+			parse_legacy(tscan);
+		break;
 	}
 }
 



* [patch v5 08/19] x86/cpu/amd: Provide a separate accessor for Node ID
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (6 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 07/19] x86/cpu: Use common topology code for Intel Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-02-01 15:19   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 09/19] x86/cpu: Provide an AMD/HYGON specific topology parser Thomas Gleixner
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

AMD (ab)uses topology_die_id() to store the Node ID information and
__max_die_per_package to store the number of nodes per package.

This collides with the proper processor die level enumeration which is
coming on AMD with CPUID 8000_0026, unless there is a correlation between
the two. There is zero documentation about that.

So provide new storage and new accessors which for now still access die_id
and __max_die_per_package. Will be mopped up after AMD and HYGON are
converted over.
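
With this patch the new accessors still resolve to the existing storage,
i.e.:

	topology_amd_node_id(cpu)    == topology_die_id(cpu)
	topology_amd_nodes_per_pkg() == __max_die_per_package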

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/events/amd/core.c       |    2 +-
 arch/x86/include/asm/processor.h |    3 +++
 arch/x86/include/asm/topology.h  |    8 ++++++++
 arch/x86/kernel/amd_nb.c         |    4 ++--
 arch/x86/kernel/cpu/cacheinfo.c  |    2 +-
 arch/x86/kernel/cpu/mce/amd.c    |    4 ++--
 arch/x86/kernel/cpu/mce/inject.c |    4 ++--
 drivers/edac/amd64_edac.c        |    4 ++--
 drivers/edac/mce_amd.c           |    4 ++--
 9 files changed, 23 insertions(+), 12 deletions(-)
---
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -579,7 +579,7 @@ static void amd_pmu_cpu_starting(int cpu
 	if (!x86_pmu.amd_nb_constraints)
 		return;
 
-	nb_id = topology_die_id(cpu);
+	nb_id = topology_amd_node_id(cpu);
 	WARN_ON_ONCE(nb_id == BAD_APICID);
 
 	for_each_online_cpu(i) {
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -100,6 +100,9 @@ struct cpuinfo_topology {
 	u32			logical_pkg_id;
 	u32			logical_die_id;
 
+	// AMD Node ID and Nodes per Package info
+	u32			amd_node_id;
+
 	// Cache level topology IDs
 	u32			llc_id;
 	u32			l2c_id;
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -131,6 +131,8 @@ extern const struct cpumask *cpu_cluster
 #define topology_core_id(cpu)			(cpu_data(cpu).topo.core_id)
 #define topology_ppin(cpu)			(cpu_data(cpu).ppin)
 
+#define topology_amd_node_id(cpu)		(cpu_data(cpu).topo.die_id)
+
 extern unsigned int __max_die_per_package;
 
 #ifdef CONFIG_SMP
@@ -161,6 +163,11 @@ int topology_update_package_map(unsigned
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
 
+static inline unsigned int topology_amd_nodes_per_pkg(void)
+{
+	return __max_die_per_package;
+}
+
 extern struct cpumask __cpu_primary_thread_mask;
 #define cpu_primary_thread_mask ((const struct cpumask *)&__cpu_primary_thread_mask)
 
@@ -182,6 +189,7 @@ static inline int topology_phys_to_logic
 static inline int topology_max_die_per_package(void) { return 1; }
 static inline int topology_max_smt_threads(void) { return 1; }
 static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
+static inline unsigned int topology_amd_nodes_per_pkg(void) { return 0; };
 #endif /* !CONFIG_SMP */
 
 static inline void arch_fix_phys_package_id(int num, u32 slot)
--- a/arch/x86/kernel/amd_nb.c
+++ b/arch/x86/kernel/amd_nb.c
@@ -386,7 +386,7 @@ struct resource *amd_get_mmconfig_range(
 
 int amd_get_subcaches(int cpu)
 {
-	struct pci_dev *link = node_to_amd_nb(topology_die_id(cpu))->link;
+	struct pci_dev *link = node_to_amd_nb(topology_amd_node_id(cpu))->link;
 	unsigned int mask;
 
 	if (!amd_nb_has_feature(AMD_NB_L3_PARTITIONING))
@@ -400,7 +400,7 @@ int amd_get_subcaches(int cpu)
 int amd_set_subcaches(int cpu, unsigned long mask)
 {
 	static unsigned int reset, ban;
-	struct amd_northbridge *nb = node_to_amd_nb(topology_die_id(cpu));
+	struct amd_northbridge *nb = node_to_amd_nb(topology_amd_node_id(cpu));
 	unsigned int reg;
 	int cuid;
 
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -595,7 +595,7 @@ static void amd_init_l3_cache(struct _cp
 	if (index < 3)
 		return;
 
-	node = topology_die_id(smp_processor_id());
+	node = topology_amd_node_id(smp_processor_id());
 	this_leaf->nb = node_to_amd_nb(node);
 	if (this_leaf->nb && !this_leaf->nb->l3_cache.indices)
 		amd_calc_l3_indices(this_leaf->nb);
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -1231,7 +1231,7 @@ static int threshold_create_bank(struct
 		return -ENODEV;
 
 	if (is_shared_bank(bank)) {
-		nb = node_to_amd_nb(topology_die_id(cpu));
+		nb = node_to_amd_nb(topology_amd_node_id(cpu));
 
 		/* threshold descriptor already initialized on this node? */
 		if (nb && nb->bank4) {
@@ -1335,7 +1335,7 @@ static void threshold_remove_bank(struct
 		 * The last CPU on this node using the shared bank is going
 		 * away, remove that bank now.
 		 */
-		nb = node_to_amd_nb(topology_die_id(smp_processor_id()));
+		nb = node_to_amd_nb(topology_amd_node_id(smp_processor_id()));
 		nb->bank4 = NULL;
 	}
 
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -543,8 +543,8 @@ static void do_inject(void)
 	if (boot_cpu_has(X86_FEATURE_AMD_DCM) &&
 	    b == 4 &&
 	    boot_cpu_data.x86 < 0x17) {
-		toggle_nb_mca_mst_cpu(topology_die_id(cpu));
-		cpu = get_nbc_for_node(topology_die_id(cpu));
+		toggle_nb_mca_mst_cpu(topology_amd_node_id(cpu));
+		cpu = get_nbc_for_node(topology_amd_node_id(cpu));
 	}
 
 	cpus_read_lock();
--- a/drivers/edac/amd64_edac.c
+++ b/drivers/edac/amd64_edac.c
@@ -1915,7 +1915,7 @@ static void dct_determine_memory_type(st
 /* On F10h and later ErrAddr is MC4_ADDR[47:1] */
 static u64 get_error_address(struct amd64_pvt *pvt, struct mce *m)
 {
-	u16 mce_nid = topology_die_id(m->extcpu);
+	u16 mce_nid = topology_amd_node_id(m->extcpu);
 	struct mem_ctl_info *mci;
 	u8 start_bit = 1;
 	u8 end_bit   = 47;
@@ -3446,7 +3446,7 @@ static void get_cpus_on_this_dct_cpumask
 	int cpu;
 
 	for_each_online_cpu(cpu)
-		if (topology_die_id(cpu) == nid)
+		if (topology_amd_node_id(cpu) == nid)
 			cpumask_set_cpu(cpu, mask);
 }
 
--- a/drivers/edac/mce_amd.c
+++ b/drivers/edac/mce_amd.c
@@ -584,7 +584,7 @@ static void decode_mc3_mce(struct mce *m
 static void decode_mc4_mce(struct mce *m)
 {
 	unsigned int fam = x86_family(m->cpuid);
-	int node_id = topology_die_id(m->extcpu);
+	int node_id = topology_amd_node_id(m->extcpu);
 	u16 ec = EC(m->status);
 	u8 xec = XEC(m->status, 0x1f);
 	u8 offset = 0;
@@ -746,7 +746,7 @@ static void decode_smca_error(struct mce
 
 	if ((bank_type == SMCA_UMC || bank_type == SMCA_UMC_V2) &&
 	    xec == 0 && decode_dram_ecc)
-		decode_dram_ecc(topology_die_id(m->extcpu), m);
+		decode_dram_ecc(topology_amd_node_id(m->extcpu), m);
 }
 
 static inline void amd_decode_err_code(u16 ec)



* [patch v5 09/19] x86/cpu: Provide an AMD/HYGON specific topology parser
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (7 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 08/19] x86/cpu/amd: Provide a separate accessor for Node ID Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-02-01 15:55   ` Borislav Petkov
  2024-02-02 12:30   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 10/19] x86/smpboot: Teach it about topo.amd_node_id Thomas Gleixner
                   ` (10 subsequent siblings)
  19 siblings, 2 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

AMD/HYGON uses various methods for topology evaluation:

  - Leaf 0x80000008 and 0x8000001e based with an optional leaf 0xb,
    which is the preferred variant for modern CPUs.

    Leaf 0xb will be superseded by leaf 0x80000026 soon, which is just
    another variant of the Intel 0x1f leaf for whatever reasons.
    
  - Leaf 0x80000008 and NODEID_MSR based

  - Legacy fallback

That code follows the principle of random bits and pieces all over the
place, which results in multiple evaluations and impenetrable code flows,
just as the Intel parsing did.

Provide a sane implementation by clearly separating the three variants and
bringing them in the proper preference order in one place.

This provides the parsing for both AMD and HYGON because there is no point
in having a separate HYGON parser which only differs by 3 lines of
code. Any further divergence between AMD and HYGON can be handled in
different functions, while still sharing the existing parsers.
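
For illustration, with a hypothetical CPUID 0x80000008 ECX reporting
NC = 11 (i.e. 12 cores) and an APIC ID size field of 0, the
parse_8000_0008() helper added below computes:

	sft = get_count_order(11 + 1) = 4
	topology_set_dom(tscan, TOPO_CORE_DOMAIN, 4, 12)

i.e. CORE level shift 4 with 12 CPUs, which then propagates to the not yet
enumerated higher domain levels.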

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/include/asm/topology.h       |    2 
 arch/x86/kernel/cpu/Makefile          |    2 
 arch/x86/kernel/cpu/amd.c             |    2 
 arch/x86/kernel/cpu/cacheinfo.c       |    4 
 arch/x86/kernel/cpu/cpu.h             |    2 
 arch/x86/kernel/cpu/debugfs.c         |    2 
 arch/x86/kernel/cpu/topology.h        |    6 +
 arch/x86/kernel/cpu/topology_amd.c    |  182 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/topology_common.c |   19 +++
 9 files changed, 214 insertions(+), 7 deletions(-)
 create mode 100644 arch/x86/kernel/cpu/topology_amd.c
---
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -163,6 +163,8 @@ int topology_update_package_map(unsigned
 int topology_update_die_map(unsigned int dieid, unsigned int cpu);
 int topology_phys_to_logical_pkg(unsigned int pkg);
 
+extern unsigned int __amd_nodes_per_pkg;
+
 static inline unsigned int topology_amd_nodes_per_pkg(void)
 {
 	return __max_die_per_package;
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -18,7 +18,7 @@ KMSAN_SANITIZE_common.o := n
 KCSAN_SANITIZE_common.o := n
 
 obj-y			:= cacheinfo.o scattered.o
-obj-y			+= topology_common.o topology_ext.o topology.o
+obj-y			+= topology_common.o topology_ext.o topology_amd.o topology.o
 obj-y			+= common.o
 obj-y			+= rdrand.o
 obj-y			+= match.o
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -351,7 +351,7 @@ static void amd_get_topology(struct cpui
 		if (!err)
 			c->x86_coreid_bits = get_count_order(c->x86_max_cores);
 
-		cacheinfo_amd_init_llc_id(c);
+		cacheinfo_amd_init_llc_id(c, c->topo.die_id);
 
 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
 		u64 value;
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -661,7 +661,7 @@ static int find_num_cache_leaves(struct
 	return i;
 }
 
-void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c)
+void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, u16 die_id)
 {
 	/*
 	 * We may have multiple LLCs if L3 caches exist, so check if we
@@ -672,7 +672,7 @@ void cacheinfo_amd_init_llc_id(struct cp
 
 	if (c->x86 < 0x17) {
 		/* LLC is at the node level. */
-		c->topo.llc_id = c->topo.die_id;
+		c->topo.llc_id = die_id;
 	} else if (c->x86 == 0x17 && c->x86_model <= 0x1F) {
 		/*
 		 * LLC is at the core complex level.
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -79,7 +79,7 @@ extern void init_hygon_cacheinfo(struct
 extern int detect_extended_topology(struct cpuinfo_x86 *c);
 extern void check_null_seg_clears_base(struct cpuinfo_x86 *c);
 
-void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c);
+void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, u16 die_id);
 void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c);
 
 unsigned int aperfmperf_get_khz(int cpu);
--- a/arch/x86/kernel/cpu/debugfs.c
+++ b/arch/x86/kernel/cpu/debugfs.c
@@ -26,6 +26,8 @@ static int cpu_debug_show(struct seq_fil
 	seq_printf(m, "logical_die_id:      %u\n", c->topo.logical_die_id);
 	seq_printf(m, "llc_id:              %u\n", c->topo.llc_id);
 	seq_printf(m, "l2c_id:              %u\n", c->topo.l2c_id);
+	seq_printf(m, "amd_node_id:         %u\n", c->topo.amd_node_id);
+	seq_printf(m, "amd_nodes_per_pkg:   %u\n", topology_amd_nodes_per_pkg());
 	seq_printf(m, "max_cores:           %u\n", c->x86_max_cores);
 	seq_printf(m, "max_die_per_pkg:     %u\n", __max_die_per_package);
 	seq_printf(m, "smp_num_siblings:    %u\n", smp_num_siblings);
--- a/arch/x86/kernel/cpu/topology.h
+++ b/arch/x86/kernel/cpu/topology.h
@@ -9,6 +9,10 @@ struct topo_scan {
 
 	// Legacy CPUID[1]:EBX[23:16] number of logical processors
 	unsigned int		ebx1_nproc_shift;
+
+	// AMD specific node ID which cannot be mapped into APIC space.
+	u16			amd_nodes_per_pkg;
+	u16			amd_node_id;
 };
 
 bool topo_is_converted(struct cpuinfo_x86 *c);
@@ -17,6 +21,8 @@ void cpu_parse_topology(struct cpuinfo_x
 void topology_set_dom(struct topo_scan *tscan, enum x86_topology_domains dom,
 		      unsigned int shift, unsigned int ncpus);
 bool cpu_parse_topology_ext(struct topo_scan *tscan);
+void cpu_parse_topology_amd(struct topo_scan *tscan);
+void cpu_topology_fixup_amd(struct topo_scan *tscan);
 
 static inline u32 topo_shift_apicid(u32 apicid, enum x86_topology_domains dom)
 {
--- /dev/null
+++ b/arch/x86/kernel/cpu/topology_amd.c
@@ -0,0 +1,182 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/cpu.h>
+
+#include <asm/apic.h>
+#include <asm/memtype.h>
+#include <asm/processor.h>
+
+#include "cpu.h"
+
+static bool parse_8000_0008(struct topo_scan *tscan)
+{
+	struct {
+		u32	ncores		:  8,
+			__rsvd0		:  4,
+			apicidsize	:  4,
+			perftscsize	:  2,
+			__rsvd1		: 14;
+	} ecx;
+	unsigned int sft;
+
+	if (tscan->c->extended_cpuid_level < 0x80000008)
+		return false;
+
+	cpuid_leaf_reg(0x80000008, CPUID_ECX, &ecx);
+
+	/* If the APIC ID size is 0, then get the shift value from ecx.ncores */
+	sft = ecx.apicidsize;
+	if (!sft)
+		sft = get_count_order(ecx.ncores + 1);
+
+	topology_set_dom(tscan, TOPO_CORE_DOMAIN, sft, ecx.ncores + 1);
+	return true;
+}
+
+static void store_node(struct topo_scan *tscan, unsigned int nr_nodes, u16 node_id)
+{
+	/*
+	 * Starting with Fam 17h the DIE domain could probably be used to
+	 * retrieve the node info on AMD/HYGON. Analysis of CPUID dumps
+	 * suggests it's the topmost bit(s) of the CPU cores area, but
+	 * that's guess work and neither enumerated nor documented.
+	 *
+	 * Up to Fam 16h this does not work at all and the legacy node ID
+	 * has to be used.
+	 */
+	tscan->amd_nodes_per_pkg = nr_nodes;
+	tscan->amd_node_id = node_id;
+}
+
+static bool parse_8000_001e(struct topo_scan *tscan, bool has_0xb)
+{
+	struct {
+		// eax
+		u32	x2apic_id	: 32;
+		// ebx
+		u32	cuid		:  8,
+			threads_per_cu	:  8,
+			__rsvd0		: 16;
+		// ecx
+		u32	nodeid		:  8,
+			nodes_per_pkg	:  3,
+			__rsvd1		: 21;
+		// edx
+		u32	__rsvd2		: 32;
+	} leaf;
+
+	if (!boot_cpu_has(X86_FEATURE_TOPOEXT))
+		return false;
+
+	cpuid_leaf(0x8000001e, &leaf);
+
+	tscan->c->topo.initial_apicid = leaf.x2apic_id;
+
+	/*
+	 * If leaf 0xb is available, then SMT shift is set already. If not
+	 * take it from ecx.threads_per_cu and use topo_update_dom() -
+	 * topology_set_dom() would propagate and overwrite the already
+	 * propagated CORE level.
+	 */
+	if (!has_0xb) {
+		unsigned int nthreads = leaf.threads_per_cu + 1;
+
+		topology_update_dom(tscan, TOPO_SMT_DOMAIN, get_count_order(nthreads), nthreads);
+	}
+
+	store_node(tscan, leaf.nodes_per_pkg + 1, leaf.nodeid);
+
+	if (tscan->c->x86_vendor == X86_VENDOR_AMD) {
+		if (tscan->c->x86 == 0x15)
+			tscan->c->topo.cu_id = leaf.cuid;
+
+		cacheinfo_amd_init_llc_id(tscan->c, leaf.nodeid);
+	} else {
+		/*
+		 * Package ID is ApicId[6..] on certain Hygon CPUs. See
+		 * commit e0ceeae708ce for explanation. The topology info
+		 * is screwed up: The package shift is always 6 and the
+		 * node ID is bit [4:5].
+		 */
+		if (!boot_cpu_has(X86_FEATURE_HYPERVISOR) && tscan->c->x86_model <= 0x3) {
+			topology_set_dom(tscan, TOPO_CORE_DOMAIN, 6,
+					 tscan->dom_ncpus[TOPO_CORE_DOMAIN]);
+		}
+		cacheinfo_hygon_init_llc_id(tscan->c);
+	}
+	return true;
+}
+
+static bool parse_fam10h_node_id(struct topo_scan *tscan)
+{
+	struct {
+		union {
+			u64	node_id		:  3,
+				nodes_per_pkg	:  3,
+				unused		: 58;
+			u64	msr;
+		};
+	} nid;
+
+	if (!boot_cpu_has(X86_FEATURE_NODEID_MSR))
+		return false;
+
+	rdmsrl(MSR_FAM10H_NODE_ID, nid.msr);
+	store_node(tscan, nid.nodes_per_pkg + 1, nid.node_id);
+	tscan->c->topo.llc_id = nid.node_id;
+	return true;
+}
+
+static void legacy_set_llc(struct topo_scan *tscan)
+{
+	unsigned int apicid = tscan->c->topo.initial_apicid;
+
+	/* parse_8000_0008() set everything up except llc_id */
+	tscan->c->topo.llc_id = apicid >> tscan->dom_shifts[TOPO_CORE_DOMAIN];
+}
+
+static void parse_topology_amd(struct topo_scan *tscan)
+{
+	bool has_0xb = false;
+
+	/*
+	 * If the extended topology leaf 0x8000_001e is available
+	 * try to get SMT and CORE shift from leaf 0xb first, then
+	 * try to get the CORE shift from leaf 0x8000_0008.
+	 */
+	if (boot_cpu_has(X86_FEATURE_TOPOEXT))
+		has_0xb = cpu_parse_topology_ext(tscan);
+
+	if (!has_0xb && !parse_8000_0008(tscan))
+		return;
+
+	/* Prefer leaf 0x8000001e if available */
+	if (parse_8000_001e(tscan, has_0xb))
+		return;
+
+	/* Try the NODEID MSR */
+	if (parse_fam10h_node_id(tscan))
+		return;
+
+	legacy_set_llc(tscan);
+}
+
+void cpu_parse_topology_amd(struct topo_scan *tscan)
+{
+	tscan->amd_nodes_per_pkg = 1;
+	parse_topology_amd(tscan);
+
+	if (tscan->amd_nodes_per_pkg > 1)
+		set_cpu_cap(tscan->c, X86_FEATURE_AMD_DCM);
+}
+
+void cpu_topology_fixup_amd(struct topo_scan *tscan)
+{
+	struct cpuinfo_x86 *c = tscan->c;
+
+	/*
+	 * Adjust the core_id relative to the node when there is more than
+	 * one node.
+	 */
+	if (tscan->c->x86 < 0x17 && tscan->amd_nodes_per_pkg > 1)
+		c->topo.core_id %= tscan->dom_ncpus[TOPO_CORE_DOMAIN] / tscan->amd_nodes_per_pkg;
+}
--- a/arch/x86/kernel/cpu/topology_common.c
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -11,11 +11,13 @@
 
 struct x86_topology_system x86_topo_system __ro_after_init;
 
+unsigned int __amd_nodes_per_pkg __ro_after_init;
+EXPORT_SYMBOL_GPL(__amd_nodes_per_pkg);
+
 void topology_set_dom(struct topo_scan *tscan, enum x86_topology_domains dom,
 		      unsigned int shift, unsigned int ncpus)
 {
-	tscan->dom_shifts[dom] = shift;
-	tscan->dom_ncpus[dom] = ncpus;
+	topology_update_dom(tscan, dom, shift, ncpus);
 
 	/* Propagate to the upper levels */
 	for (dom++; dom < TOPO_MAX_DOMAIN; dom++) {
@@ -153,6 +155,13 @@ static void topo_set_ids(struct topo_sca
 	/* Package relative core ID */
 	c->topo.core_id = (apicid & topo_domain_mask(TOPO_PKG_DOMAIN)) >>
 		x86_topo_system.dom_shifts[TOPO_SMT_DOMAIN];
+
+	/* Temporary workaround */
+	if (tscan->amd_nodes_per_pkg)
+		c->topo.amd_node_id = c->topo.die_id = tscan->amd_node_id;
+
+	if (c->x86_vendor == X86_VENDOR_AMD)
+		cpu_topology_fixup_amd(tscan);
 }
 
 static void topo_set_max_cores(struct topo_scan *tscan)
@@ -237,4 +246,10 @@ void __init cpu_init_topology(struct cpu
 	 */
 	__max_die_per_package = tscan.dom_ncpus[TOPO_DIE_DOMAIN] /
 		tscan.dom_ncpus[TOPO_DIE_DOMAIN - 1];
+	/*
+	 * AMD systems have Nodes per package which cannot be mapped to
+	 * APIC ID.
+	 */
+	if (c->x86_vendor == X86_VENDOR_AMD || c->x86_vendor == X86_VENDOR_HYGON)
+		__amd_nodes_per_pkg = __max_die_per_package = tscan.amd_nodes_per_pkg;
 }


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 10/19] x86/smpboot: Teach it about topo.amd_node_id
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (8 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 09/19] x86/cpu: Provide an AMD/HYGON specific topology parser Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-02-06 15:48   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 11/19] x86/cpu: Use common topology code for AMD Thomas Gleixner
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

When AMD is switched over to the new topology parser, the match functions
need to look at the new topo.amd_node_id member, which then holds the node ID
information on AMD systems with the extended topology feature.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/kernel/smpboot.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
---
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -488,6 +488,7 @@ static bool match_smt(struct cpuinfo_x86
 
 		if (c->topo.pkg_id == o->topo.pkg_id &&
 		    c->topo.die_id == o->topo.die_id &&
+		    c->topo.amd_node_id == o->topo.amd_node_id &&
 		    per_cpu_llc_id(cpu1) == per_cpu_llc_id(cpu2)) {
 			if (c->topo.core_id == o->topo.core_id)
 				return topology_sane(c, o, "smt");
@@ -509,10 +510,13 @@ static bool match_smt(struct cpuinfo_x86
 
 static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
 {
-	if (c->topo.pkg_id == o->topo.pkg_id &&
-	    c->topo.die_id == o->topo.die_id)
-		return true;
-	return false;
+	if (c->topo.pkg_id != o->topo.pkg_id || c->topo.die_id != o->topo.die_id)
+		return false;
+
+	if (boot_cpu_has(X86_FEATURE_TOPOEXT) && topology_amd_nodes_per_pkg() > 1)
+		return c->topo.amd_node_id == o->topo.amd_node_id;
+
+	return true;
 }
 
 static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 11/19] x86/cpu: Use common topology code for AMD
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (9 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 10/19] x86/smpboot: Teach it about topo.amd_node_id Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-02-06 15:58   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 12/19] x86/cpu: Use common topology code for HYGON Thomas Gleixner
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

Switch it over to the new topology evaluation mechanism and remove the
random bits and pieces which are sprinkled all over the place.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/kernel/cpu/amd.c             |  146 ----------------------------------
 arch/x86/kernel/cpu/mce/inject.c      |    3 
 arch/x86/kernel/cpu/topology_common.c |    5 -
 3 files changed, 5 insertions(+), 149 deletions(-)
---
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -27,13 +27,6 @@
 
 #include "cpu.h"
 
-/*
- * nodes_per_socket: Stores the number of nodes per socket.
- * Refer to Fam15h Models 00-0fh BKDG - CPUID Fn8000_001E_ECX
- * Node Identifiers[10:8]
- */
-static u32 nodes_per_socket = 1;
-
 static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
 {
 	u32 gprs[8] = { 0 };
@@ -300,97 +293,6 @@ static int nearby_node(int apicid)
 }
 #endif
 
-/*
- * Fix up topo::core_id for pre-F17h systems to be in the
- * [0 .. cores_per_node - 1] range. Not really needed but
- * kept so as not to break existing setups.
- */
-static void legacy_fixup_core_id(struct cpuinfo_x86 *c)
-{
-	u32 cus_per_node;
-
-	if (c->x86 >= 0x17)
-		return;
-
-	cus_per_node = c->x86_max_cores / nodes_per_socket;
-	c->topo.core_id %= cus_per_node;
-}
-
-/*
- * Fixup core topology information for
- * (1) AMD multi-node processors
- *     Assumption: Number of cores in each internal node is the same.
- * (2) AMD processors supporting compute units
- */
-static void amd_get_topology(struct cpuinfo_x86 *c)
-{
-	/* get information required for multi-node processors */
-	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
-		int err;
-		u32 eax, ebx, ecx, edx;
-
-		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
-
-		c->topo.die_id  = ecx & 0xff;
-
-		if (c->x86 == 0x15)
-			c->topo.cu_id = ebx & 0xff;
-
-		if (c->x86 >= 0x17) {
-			c->topo.core_id = ebx & 0xff;
-
-			if (smp_num_siblings > 1)
-				c->x86_max_cores /= smp_num_siblings;
-		}
-
-		/*
-		 * In case leaf B is available, use it to derive
-		 * topology information.
-		 */
-		err = detect_extended_topology(c);
-		if (!err)
-			c->x86_coreid_bits = get_count_order(c->x86_max_cores);
-
-		cacheinfo_amd_init_llc_id(c, c->topo.die_id);
-
-	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
-		u64 value;
-
-		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		c->topo.die_id = value & 7;
-		c->topo.llc_id = c->topo.die_id;
-	} else
-		return;
-
-	if (nodes_per_socket > 1) {
-		set_cpu_cap(c, X86_FEATURE_AMD_DCM);
-		legacy_fixup_core_id(c);
-	}
-}
-
-/*
- * On a AMD dual core setup the lower bits of the APIC id distinguish the cores.
- * Assumes number of cores is a power of two.
- */
-static void amd_detect_cmp(struct cpuinfo_x86 *c)
-{
-	unsigned bits;
-
-	bits = c->x86_coreid_bits;
-	/* Low order bits define the core id (index of core in socket) */
-	c->topo.core_id = c->topo.initial_apicid & ((1 << bits)-1);
-	/* Convert the initial APIC ID into the socket ID */
-	c->topo.pkg_id = c->topo.initial_apicid >> bits;
-	/* use socket ID also for last level cache */
-	c->topo.llc_id = c->topo.die_id = c->topo.pkg_id;
-}
-
-u32 amd_get_nodes_per_socket(void)
-{
-	return nodes_per_socket;
-}
-EXPORT_SYMBOL_GPL(amd_get_nodes_per_socket);
-
 static void srat_detect_node(struct cpuinfo_x86 *c)
 {
 #ifdef CONFIG_NUMA
@@ -442,32 +344,6 @@ static void srat_detect_node(struct cpui
 #endif
 }
 
-static void early_init_amd_mc(struct cpuinfo_x86 *c)
-{
-#ifdef CONFIG_SMP
-	unsigned bits, ecx;
-
-	/* Multi core CPU? */
-	if (c->extended_cpuid_level < 0x80000008)
-		return;
-
-	ecx = cpuid_ecx(0x80000008);
-
-	c->x86_max_cores = (ecx & 0xff) + 1;
-
-	/* CPU telling us the core id bits shift? */
-	bits = (ecx >> 12) & 0xF;
-
-	/* Otherwise recompute */
-	if (bits == 0) {
-		while ((1 << bits) < c->x86_max_cores)
-			bits++;
-	}
-
-	c->x86_coreid_bits = bits;
-#endif
-}
-
 static void bsp_init_amd(struct cpuinfo_x86 *c)
 {
 	if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) {
@@ -500,18 +376,6 @@ static void bsp_init_amd(struct cpuinfo_
 	if (cpu_has(c, X86_FEATURE_MWAITX))
 		use_mwaitx_delay();
 
-	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
-		u32 ecx;
-
-		ecx = cpuid_ecx(0x8000001e);
-		__max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1;
-	} else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
-		u64 value;
-
-		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		__max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1;
-	}
-
 	if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&
 	    !boot_cpu_has(X86_FEATURE_VIRT_SSBD) &&
 	    c->x86 >= 0x15 && c->x86 <= 0x17) {
@@ -636,8 +500,6 @@ static void early_init_amd(struct cpuinf
 	u64 value;
 	u32 dummy;
 
-	early_init_amd_mc(c);
-
 	if (c->x86 >= 0xf)
 		set_cpu_cap(c, X86_FEATURE_K8);
 
@@ -717,9 +579,6 @@ static void early_init_amd(struct cpuinf
 		}
 	}
 
-	if (cpu_has(c, X86_FEATURE_TOPOEXT))
-		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
-
 	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
 		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
 			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
@@ -1058,9 +917,6 @@ static void init_amd(struct cpuinfo_x86
 	if (cpu_has(c, X86_FEATURE_FSRM))
 		set_cpu_cap(c, X86_FEATURE_FSRS);
 
-	/* get apicid instead of initial apic id from cpuid */
-	c->topo.apicid = read_apic_id();
-
 	/* K6s reports MCEs but don't actually have all the MSRs */
 	if (c->x86 < 6)
 		clear_cpu_cap(c, X86_FEATURE_MCE);
@@ -1094,8 +950,6 @@ static void init_amd(struct cpuinfo_x86
 
 	cpu_detect_cache_sizes(c);
 
-	amd_detect_cmp(c);
-	amd_get_topology(c);
 	srat_detect_node(c);
 
 	init_amd_cacheinfo(c);
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -433,8 +433,7 @@ static u32 get_nbc_for_node(int node_id)
 	struct cpuinfo_x86 *c = &boot_cpu_data;
 	u32 cores_per_node;
 
-	cores_per_node = (c->x86_max_cores * smp_num_siblings) / amd_get_nodes_per_socket();
-
+	cores_per_node = (c->x86_max_cores * smp_num_siblings) / topology_amd_nodes_per_pkg();
 	return cores_per_node * node_id;
 }
 
--- a/arch/x86/kernel/cpu/topology_common.c
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -72,7 +72,6 @@ bool topo_is_converted(struct cpuinfo_x8
 {
 	/* Temporary until everything is converted over. */
 	switch (boot_cpu_data.x86_vendor) {
-	case X86_VENDOR_AMD:
 	case X86_VENDOR_HYGON:
 		return false;
 	default:
@@ -133,6 +132,10 @@ static void parse_topology(struct topo_s
 	tscan->ebx1_nproc_shift = get_count_order(ebx.nproc);
 
 	switch (c->x86_vendor) {
+	case X86_VENDOR_AMD:
+		if (IS_ENABLED(CONFIG_CPU_SUP_AMD))
+			cpu_parse_topology_amd(tscan);
+		break;
 	case X86_VENDOR_CENTAUR:
 	case X86_VENDOR_ZHAOXIN:
 		parse_legacy(tscan);
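
A small worked example for the get_nbc_for_node() change above, with
hypothetical values and a made-up helper name, not part of the patch: with
x86_max_cores = 8, smp_num_siblings = 2 and topology_amd_nodes_per_pkg() = 2,
cores_per_node comes out as 8, so node 1 starts at CPU 8.

/* Illustration only: hardcoded stand-ins for the cpuinfo/topology values */
static u32 example_nbc_for_node(int node_id)
{
	u32 max_cores = 8, siblings = 2, nodes_per_pkg = 2;
	u32 cores_per_node = (max_cores * siblings) / nodes_per_pkg;	/* 8 */

	return cores_per_node * node_id;	/* node 1 -> CPU 8 */
}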


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 12/19] x86/cpu: Use common topology code for HYGON
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (10 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 11/19] x86/cpu: Use common topology code for AMD Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 13/19] x86/mm/numa: Use core domain size on AMD Thomas Gleixner
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

Switch it over to use the consolidated topology evaluation and remove the
temporary safeguards which are no longer needed.

No functional change intended.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/kernel/cpu/common.c          |    5 -
 arch/x86/kernel/cpu/cpu.h             |    1 
 arch/x86/kernel/cpu/hygon.c           |  129 ----------------------------------
 arch/x86/kernel/cpu/topology.h        |    1 
 arch/x86/kernel/cpu/topology_common.c |   22 +----
 5 files changed, 4 insertions(+), 154 deletions(-)
---
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1779,11 +1779,6 @@ static void identify_cpu(struct cpuinfo_
 	/* Clear/Set all flags overridden by options, after probe */
 	apply_forced_caps(c);
 
-#ifdef CONFIG_X86_64
-	if (!topo_is_converted(c))
-		c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
-#endif
-
 	/*
 	 * Set default APIC and TSC_DEADLINE MSR fencing flag. AMD and
 	 * Hygon will clear it in ->c_init() below.
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -76,7 +76,6 @@ extern void init_intel_cacheinfo(struct
 extern void init_amd_cacheinfo(struct cpuinfo_x86 *c);
 extern void init_hygon_cacheinfo(struct cpuinfo_x86 *c);
 
-extern int detect_extended_topology(struct cpuinfo_x86 *c);
 extern void check_null_seg_clears_base(struct cpuinfo_x86 *c);
 
 void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, u16 die_id);
--- a/arch/x86/kernel/cpu/hygon.c
+++ b/arch/x86/kernel/cpu/hygon.c
@@ -18,14 +18,6 @@
 
 #include "cpu.h"
 
-#define APICID_SOCKET_ID_BIT 6
-
-/*
- * nodes_per_socket: Stores the number of nodes per socket.
- * Refer to CPUID Fn8000_001E_ECX Node Identifiers[10:8]
- */
-static u32 nodes_per_socket = 1;
-
 #ifdef CONFIG_NUMA
 /*
  * To workaround broken NUMA config.  Read the comment in
@@ -49,80 +41,6 @@ static int nearby_node(int apicid)
 }
 #endif
 
-static void hygon_get_topology_early(struct cpuinfo_x86 *c)
-{
-	if (cpu_has(c, X86_FEATURE_TOPOEXT))
-		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
-}
-
-/*
- * Fixup core topology information for
- * (1) Hygon multi-node processors
- *     Assumption: Number of cores in each internal node is the same.
- * (2) Hygon processors supporting compute units
- */
-static void hygon_get_topology(struct cpuinfo_x86 *c)
-{
-	/* get information required for multi-node processors */
-	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
-		int err;
-		u32 eax, ebx, ecx, edx;
-
-		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
-
-		c->topo.die_id  = ecx & 0xff;
-
-		c->topo.core_id = ebx & 0xff;
-
-		if (smp_num_siblings > 1)
-			c->x86_max_cores /= smp_num_siblings;
-
-		/*
-		 * In case leaf B is available, use it to derive
-		 * topology information.
-		 */
-		err = detect_extended_topology(c);
-		if (!err)
-			c->x86_coreid_bits = get_count_order(c->x86_max_cores);
-
-		/*
-		 * Socket ID is ApicId[6] for the processors with model <= 0x3
-		 * when running on host.
-		 */
-		if (!boot_cpu_has(X86_FEATURE_HYPERVISOR) && c->x86_model <= 0x3)
-			c->topo.pkg_id = c->topo.apicid >> APICID_SOCKET_ID_BIT;
-
-		cacheinfo_hygon_init_llc_id(c);
-	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
-		u64 value;
-
-		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		c->topo.die_id = value & 7;
-		c->topo.llc_id = c->topo.die_id;
-	} else
-		return;
-
-	if (nodes_per_socket > 1)
-		set_cpu_cap(c, X86_FEATURE_AMD_DCM);
-}
-
-/*
- * On Hygon setup the lower bits of the APIC id distinguish the cores.
- * Assumes number of cores is a power of two.
- */
-static void hygon_detect_cmp(struct cpuinfo_x86 *c)
-{
-	unsigned int bits;
-
-	bits = c->x86_coreid_bits;
-	/* Low order bits define the core id (index of core in socket) */
-	c->topo.core_id = c->topo.initial_apicid & ((1 << bits)-1);
-	/* Convert the initial APIC ID into the socket ID */
-	c->topo.pkg_id = c->topo.initial_apicid >> bits;
-	/* Use package ID also for last level cache */
-	c->topo.llc_id = c->topo.die_id = c->topo.pkg_id;
-}
-
 static void srat_detect_node(struct cpuinfo_x86 *c)
 {
 #ifdef CONFIG_NUMA
@@ -173,32 +91,6 @@ static void srat_detect_node(struct cpui
 #endif
 }
 
-static void early_init_hygon_mc(struct cpuinfo_x86 *c)
-{
-#ifdef CONFIG_SMP
-	unsigned int bits, ecx;
-
-	/* Multi core CPU? */
-	if (c->extended_cpuid_level < 0x80000008)
-		return;
-
-	ecx = cpuid_ecx(0x80000008);
-
-	c->x86_max_cores = (ecx & 0xff) + 1;
-
-	/* CPU telling us the core id bits shift? */
-	bits = (ecx >> 12) & 0xF;
-
-	/* Otherwise recompute */
-	if (bits == 0) {
-		while ((1 << bits) < c->x86_max_cores)
-			bits++;
-	}
-
-	c->x86_coreid_bits = bits;
-#endif
-}
-
 static void bsp_init_hygon(struct cpuinfo_x86 *c)
 {
 	if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) {
@@ -212,18 +104,6 @@ static void bsp_init_hygon(struct cpuinf
 	if (cpu_has(c, X86_FEATURE_MWAITX))
 		use_mwaitx_delay();
 
-	if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
-		u32 ecx;
-
-		ecx = cpuid_ecx(0x8000001e);
-		__max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1;
-	} else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) {
-		u64 value;
-
-		rdmsrl(MSR_FAM10H_NODE_ID, value);
-		__max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1;
-	}
-
 	if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&
 	    !boot_cpu_has(X86_FEATURE_VIRT_SSBD)) {
 		/*
@@ -242,8 +122,6 @@ static void early_init_hygon(struct cpui
 {
 	u32 dummy;
 
-	early_init_hygon_mc(c);
-
 	set_cpu_cap(c, X86_FEATURE_K8);
 
 	rdmsr_safe(MSR_AMD64_PATCH_LEVEL, &c->microcode, &dummy);
@@ -284,8 +162,6 @@ static void early_init_hygon(struct cpui
 	 * we can set it unconditionally.
 	 */
 	set_cpu_cap(c, X86_FEATURE_VMMCALL);
-
-	hygon_get_topology_early(c);
 }
 
 static void init_hygon(struct cpuinfo_x86 *c)
@@ -302,9 +178,6 @@ static void init_hygon(struct cpuinfo_x8
 
 	set_cpu_cap(c, X86_FEATURE_REP_GOOD);
 
-	/* get apicid instead of initial apic id from cpuid */
-	c->topo.apicid = read_apic_id();
-
 	/*
 	 * XXX someone from Hygon needs to confirm this DTRT
 	 *
@@ -316,8 +189,6 @@ static void init_hygon(struct cpuinfo_x8
 
 	cpu_detect_cache_sizes(c);
 
-	hygon_detect_cmp(c);
-	hygon_get_topology(c);
 	srat_detect_node(c);
 
 	init_hygon_cacheinfo(c);
--- a/arch/x86/kernel/cpu/topology.h
+++ b/arch/x86/kernel/cpu/topology.h
@@ -15,7 +15,6 @@ struct topo_scan {
 	u16			amd_node_id;
 };
 
-bool topo_is_converted(struct cpuinfo_x86 *c);
 void cpu_init_topology(struct cpuinfo_x86 *c);
 void cpu_parse_topology(struct cpuinfo_x86 *c);
 void topology_set_dom(struct topo_scan *tscan, enum x86_topology_domains dom,
--- a/arch/x86/kernel/cpu/topology_common.c
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -68,18 +68,6 @@ static void parse_legacy(struct topo_sca
 	topology_set_dom(tscan, TOPO_CORE_DOMAIN, core_shift, cores);
 }
 
-bool topo_is_converted(struct cpuinfo_x86 *c)
-{
-	/* Temporary until everything is converted over. */
-	switch (boot_cpu_data.x86_vendor) {
-	case X86_VENDOR_HYGON:
-		return false;
-	default:
-		/* Let all UP systems use the below */
-		return true;
-	}
-}
-
 static bool fake_topology(struct topo_scan *tscan)
 {
 	/*
@@ -144,6 +132,10 @@ static void parse_topology(struct topo_s
 		if (!IS_ENABLED(CONFIG_CPU_SUP_INTEL) || !cpu_parse_topology_ext(tscan))
 			parse_legacy(tscan);
 		break;
+	case X86_VENDOR_HYGON:
+		if (IS_ENABLED(CONFIG_CPU_SUP_HYGON))
+			cpu_parse_topology_amd(tscan);
+		break;
 	}
 }
 
@@ -187,9 +179,6 @@ void cpu_parse_topology(struct cpuinfo_x
 
 	parse_topology(&tscan, false);
 
-	if (!topo_is_converted(c))
-		return;
-
 	for (dom = TOPO_SMT_DOMAIN; dom < TOPO_MAX_DOMAIN; dom++) {
 		if (tscan.dom_shifts[dom] == x86_topo_system.dom_shifts[dom])
 			continue;
@@ -218,9 +207,6 @@ void __init cpu_init_topology(struct cpu
 
 	parse_topology(&tscan, true);
 
-	if (!topo_is_converted(c))
-		return;
-
 	/* Copy the shift values and calculate the unit sizes. */
 	memcpy(x86_topo_system.dom_shifts, tscan.dom_shifts, sizeof(x86_topo_system.dom_shifts));
 


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 13/19] x86/mm/numa: Use core domain size on AMD
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (11 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 12/19] x86/cpu: Use common topology code for HYGON Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-02-12 15:56   ` Borislav Petkov
  2024-01-23 12:53 ` [patch v5 14/19] x86/cpu: Make topology_amd_node_id() use the actual node info Thomas Gleixner
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

cpuinfo::x86_coreid_bits is about to be phased out. Use the core domain
size from the topology information instead.

Add a comment explaining why the early MPTABLE parsing is required and
decrapify the loop which sets up the APIC ID to node map.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/include/asm/topology.h |    5 +++++
 arch/x86/mm/amdtopology.c       |   35 ++++++++++++++++-------------------
 2 files changed, 21 insertions(+), 19 deletions(-)
---
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -121,6 +121,11 @@ struct x86_topology_system {
 
 extern struct x86_topology_system x86_topo_system;
 
+static inline unsigned int topology_get_domain_size(enum x86_topology_domains dom)
+{
+	return x86_topo_system.dom_size[dom];
+}
+
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
 extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 
--- a/arch/x86/mm/amdtopology.c
+++ b/arch/x86/mm/amdtopology.c
@@ -54,13 +54,11 @@ static __init int find_northbridge(void)
 
 int __init amd_numa_init(void)
 {
-	u64 start = PFN_PHYS(0);
+	unsigned int numnodes, cores, apicid;
+	u64 prevbase, start = PFN_PHYS(0);
 	u64 end = PFN_PHYS(max_pfn);
-	unsigned numnodes;
-	u64 prevbase;
-	int i, j, nb;
 	u32 nodeid, reg;
-	unsigned int bits, cores, apicid_base;
+	int i, j, nb;
 
 	if (!early_pci_allowed())
 		return -EINVAL;
@@ -158,26 +156,25 @@ int __init amd_numa_init(void)
 		return -ENOENT;
 
 	/*
-	 * We seem to have valid NUMA configuration.  Map apicids to nodes
-	 * using the coreid bits from early_identify_cpu.
+	 * We seem to have valid NUMA configuration. Map apicids to nodes
+	 * using the size of the core domain in the APIC space.
 	 */
-	bits = boot_cpu_data.x86_coreid_bits;
-	cores = 1 << bits;
-	apicid_base = 0;
+	cores = topology_get_domain_size(TOPO_CORE_DOMAIN);
 
 	/*
-	 * get boot-time SMP configuration:
+	 * Scan MPTABLE to map the local APIC and ensure that the boot CPU
+	 * APIC ID is valid. This is required because on pre ACPI/SRAT
+	 * systems IO-APICs are mapped before the boot CPU.
 	 */
 	early_get_smp_config();
 
-	if (boot_cpu_physical_apicid > 0) {
-		pr_info("BSP APIC ID: %02x\n", boot_cpu_physical_apicid);
-		apicid_base = boot_cpu_physical_apicid;
+	apicid = boot_cpu_physical_apicid;
+	if (apicid > 0)
+		pr_info("BSP APIC ID: %02x\n", apicid);
+
+	for_each_node_mask(i, numa_nodes_parsed) {
+		for (j = 0; j < cores; j++, apicid++)
+			set_apicid_to_node(apicid, i);
 	}
-
-	for_each_node_mask(i, numa_nodes_parsed)
-		for (j = apicid_base; j < cores + apicid_base; j++)
-			set_apicid_to_node((i << bits) + j, i);
-
 	return 0;
 }
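
A rough illustration of the new mapping loop with made-up numbers and a
made-up helper name, not part of the patch: with
topology_get_domain_size(TOPO_CORE_DOMAIN) == 16, a BSP APIC ID of 0 and two
parsed NUMA nodes, APIC IDs 0..15 map to node 0 and 16..31 map to node 1.

#include <asm/numa.h>

/* Illustration only: hardcoded core domain size and node count */
static void __init example_apicid_to_node_map(void)
{
	unsigned int apicid = 0, cores = 16, node, j;

	for (node = 0; node < 2; node++) {
		for (j = 0; j < cores; j++, apicid++)
			set_apicid_to_node(apicid, node);
	}
}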


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 14/19] x86/cpu: Make topology_amd_node_id() use the actual node info
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (12 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 13/19] x86/mm/numa: Use core domain size on AMD Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 15/19] x86/cpu: Remove topology.c Thomas Gleixner
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

Now that everything is converted, switch it over and remove the intermediate
operation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/include/asm/topology.h       |    4 ++--
 arch/x86/kernel/cpu/topology_common.c |    7 ++-----
 2 files changed, 4 insertions(+), 7 deletions(-)
---
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -136,7 +136,7 @@ extern const struct cpumask *cpu_cluster
 #define topology_core_id(cpu)			(cpu_data(cpu).topo.core_id)
 #define topology_ppin(cpu)			(cpu_data(cpu).ppin)
 
-#define topology_amd_node_id(cpu)		(cpu_data(cpu).topo.die_id)
+#define topology_amd_node_id(cpu)		(cpu_data(cpu).topo.amd_node_id)
 
 extern unsigned int __max_die_per_package;
 
@@ -172,7 +172,7 @@ extern unsigned int __amd_nodes_per_pkg;
 
 static inline unsigned int topology_amd_nodes_per_pkg(void)
 {
-	return __max_die_per_package;
+	return __amd_nodes_per_pkg;
 }
 
 extern struct cpumask __cpu_primary_thread_mask;
--- a/arch/x86/kernel/cpu/topology_common.c
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -151,9 +151,7 @@ static void topo_set_ids(struct topo_sca
 	c->topo.core_id = (apicid & topo_domain_mask(TOPO_PKG_DOMAIN)) >>
 		x86_topo_system.dom_shifts[TOPO_SMT_DOMAIN];
 
-	/* Temporary workaround */
-	if (tscan->amd_nodes_per_pkg)
-		c->topo.amd_node_id = c->topo.die_id = tscan->amd_node_id;
+	c->topo.amd_node_id = tscan->amd_node_id;
 
 	if (c->x86_vendor == X86_VENDOR_AMD)
 		cpu_topology_fixup_amd(tscan);
@@ -239,6 +237,5 @@ void __init cpu_init_topology(struct cpu
 	 * AMD systems have Nodes per package which cannot be mapped to
 	 * APIC ID.
 	 */
-	if (c->x86_vendor == X86_VENDOR_AMD || c->x86_vendor == X86_VENDOR_HYGON)
-		__amd_nodes_per_pkg = __max_die_per_package = tscan.amd_nodes_per_pkg;
+	__amd_nodes_per_pkg = tscan.amd_nodes_per_pkg;
 }


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 15/19] x86/cpu: Remove topology.c
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (13 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 14/19] x86/cpu: Make topology_amd_node_id() use the actual node info Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 16/19] x86/cpu: Remove x86_coreid_bits Thomas Gleixner
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

No more users. Stick it into the ugly code museum.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/kernel/cpu/Makefile   |    2 
 arch/x86/kernel/cpu/topology.c |  164 -----------------------------------------
 2 files changed, 1 insertion(+), 165 deletions(-)
 delete mode 100644 arch/x86/kernel/cpu/topology.c
---
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -18,7 +18,7 @@ KMSAN_SANITIZE_common.o := n
 KCSAN_SANITIZE_common.o := n
 
 obj-y			:= cacheinfo.o scattered.o
-obj-y			+= topology_common.o topology_ext.o topology_amd.o topology.o
+obj-y			+= topology_common.o topology_ext.o topology_amd.o
 obj-y			+= common.o
 obj-y			+= rdrand.o
 obj-y			+= match.o
--- a/arch/x86/kernel/cpu/topology.c
+++ /dev/null
@@ -1,164 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Check for extended topology enumeration cpuid leaf 0xb and if it
- * exists, use it for populating initial_apicid and cpu topology
- * detection.
- */
-
-#include <linux/cpu.h>
-#include <asm/apic.h>
-#include <asm/memtype.h>
-#include <asm/processor.h>
-
-#include "cpu.h"
-
-/* leaf 0xb SMT level */
-#define SMT_LEVEL	0
-
-/* extended topology sub-leaf types */
-#define INVALID_TYPE	0
-#define SMT_TYPE	1
-#define CORE_TYPE	2
-#define DIE_TYPE	5
-
-#define LEAFB_SUBTYPE(ecx)		(((ecx) >> 8) & 0xff)
-#define BITS_SHIFT_NEXT_LEVEL(eax)	((eax) & 0x1f)
-#define LEVEL_MAX_SIBLINGS(ebx)		((ebx) & 0xffff)
-
-#ifdef CONFIG_SMP
-/*
- * Check if given CPUID extended topology "leaf" is implemented
- */
-static int check_extended_topology_leaf(int leaf)
-{
-	unsigned int eax, ebx, ecx, edx;
-
-	cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
-
-	if (ebx == 0 || (LEAFB_SUBTYPE(ecx) != SMT_TYPE))
-		return -1;
-
-	return 0;
-}
-/*
- * Return best CPUID Extended Topology Leaf supported
- */
-static int detect_extended_topology_leaf(struct cpuinfo_x86 *c)
-{
-	if (c->cpuid_level >= 0x1f) {
-		if (check_extended_topology_leaf(0x1f) == 0)
-			return 0x1f;
-	}
-
-	if (c->cpuid_level >= 0xb) {
-		if (check_extended_topology_leaf(0xb) == 0)
-			return 0xb;
-	}
-
-	return -1;
-}
-#endif
-
-int detect_extended_topology_early(struct cpuinfo_x86 *c)
-{
-#ifdef CONFIG_SMP
-	unsigned int eax, ebx, ecx, edx;
-	int leaf;
-
-	leaf = detect_extended_topology_leaf(c);
-	if (leaf < 0)
-		return -1;
-
-	set_cpu_cap(c, X86_FEATURE_XTOPOLOGY);
-
-	cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
-	/*
-	 * initial apic id, which also represents 32-bit extended x2apic id.
-	 */
-	c->topo.initial_apicid = edx;
-	smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
-#endif
-	return 0;
-}
-
-/*
- * Check for extended topology enumeration cpuid leaf, and if it
- * exists, use it for populating initial_apicid and cpu topology
- * detection.
- */
-int detect_extended_topology(struct cpuinfo_x86 *c)
-{
-#ifdef CONFIG_SMP
-	unsigned int eax, ebx, ecx, edx, sub_index;
-	unsigned int ht_mask_width, core_plus_mask_width, die_plus_mask_width;
-	unsigned int core_select_mask, core_level_siblings;
-	unsigned int die_select_mask, die_level_siblings;
-	unsigned int pkg_mask_width;
-	bool die_level_present = false;
-	int leaf;
-
-	leaf = detect_extended_topology_leaf(c);
-	if (leaf < 0)
-		return -1;
-
-	/*
-	 * Populate HT related information from sub-leaf level 0.
-	 */
-	cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
-	c->topo.initial_apicid = edx;
-	core_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
-	smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
-	core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
-	die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
-	pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
-
-	sub_index = 1;
-	while (true) {
-		cpuid_count(leaf, sub_index, &eax, &ebx, &ecx, &edx);
-
-		/*
-		 * Check for the Core type in the implemented sub leaves.
-		 */
-		if (LEAFB_SUBTYPE(ecx) == CORE_TYPE) {
-			core_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
-			core_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
-			die_level_siblings = core_level_siblings;
-			die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
-		}
-		if (LEAFB_SUBTYPE(ecx) == DIE_TYPE) {
-			die_level_present = true;
-			die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
-			die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
-		}
-
-		if (LEAFB_SUBTYPE(ecx) != INVALID_TYPE)
-			pkg_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
-		else
-			break;
-
-		sub_index++;
-	}
-
-	core_select_mask = (~(-1 << pkg_mask_width)) >> ht_mask_width;
-	die_select_mask = (~(-1 << die_plus_mask_width)) >>
-				core_plus_mask_width;
-
-	c->topo.core_id = apic->phys_pkg_id(c->topo.initial_apicid,
-				ht_mask_width) & core_select_mask;
-
-	if (die_level_present) {
-		c->topo.die_id = apic->phys_pkg_id(c->topo.initial_apicid,
-					core_plus_mask_width) & die_select_mask;
-	}
-
-	c->topo.pkg_id = apic->phys_pkg_id(c->topo.initial_apicid, pkg_mask_width);
-	/*
-	 * Reinit the apicid, now that we have extended initial_apicid.
-	 */
-	c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
-
-	c->x86_max_cores = (core_level_siblings / smp_num_siblings);
-	__max_die_per_package = (die_level_siblings / core_level_siblings);
-#endif
-	return 0;
-}


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 16/19] x86/cpu: Remove x86_coreid_bits
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (14 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 15/19] x86/cpu: Remove topology.c Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 17/19] x86/apic: Remove unused phys_pkg_id() callback Thomas Gleixner
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

No more users.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/include/asm/processor.h |    2 --
 arch/x86/kernel/cpu/common.c     |    1 -
 2 files changed, 3 deletions(-)
---
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -122,8 +122,6 @@ struct cpuinfo_x86 {
 #endif
 	__u8			x86_virt_bits;
 	__u8			x86_phys_bits;
-	/* CPUID returned core id bits: */
-	__u8			x86_coreid_bits;
 	/* Max extended CPUID function supported: */
 	__u32			extended_cpuid_level;
 	/* Maximum supported CPUID level, -1=no CPUID: */
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1752,7 +1752,6 @@ static void identify_cpu(struct cpuinfo_
 	c->x86_vendor_id[0] = '\0'; /* Unset */
 	c->x86_model_id[0] = '\0';  /* Unset */
 	c->x86_max_cores = 1;
-	c->x86_coreid_bits = 0;
 #ifdef CONFIG_X86_64
 	c->x86_clflush_size = 64;
 	c->x86_phys_bits = 36;


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 17/19] x86/apic: Remove unused phys_pkg_id() callback
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (15 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 16/19] x86/cpu: Remove x86_coreid_bits Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 18/19] x86/xen/smp_pv: Remove cpudata fiddling Thomas Gleixner
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

Now that the core code does not use this monstrosity anymore, it's time to
put it to rest.

The only real purpose was to read the APIC ID on UV and VSMP systems for
the actual evaluation. That's what the core code does now.

For the actual shift operation itself there is truly no APIC callback
required.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/include/asm/apic.h           |    1 -
 arch/x86/kernel/apic/apic_flat_64.c   |    7 -------
 arch/x86/kernel/apic/apic_noop.c      |    3 ---
 arch/x86/kernel/apic/apic_numachip.c  |    7 -------
 arch/x86/kernel/apic/bigsmp_32.c      |    6 ------
 arch/x86/kernel/apic/local.h          |    1 -
 arch/x86/kernel/apic/probe_32.c       |    6 ------
 arch/x86/kernel/apic/x2apic_cluster.c |    1 -
 arch/x86/kernel/apic/x2apic_phys.c    |    6 ------
 arch/x86/kernel/apic/x2apic_uv_x.c    |   11 -----------
 arch/x86/kernel/vsmp_64.c             |   13 -------------
 arch/x86/xen/apic.c                   |    6 ------
 12 files changed, 68 deletions(-)
---
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -295,7 +295,6 @@ struct apic {
 	void	(*init_apic_ldr)(void);
 	void	(*ioapic_phys_id_map)(physid_mask_t *phys_map, physid_mask_t *retmap);
 	u32	(*cpu_present_to_apicid)(int mps_cpu);
-	u32	(*phys_pkg_id)(u32 cpuid_apic, int index_msb);
 
 	u32	(*get_apic_id)(u32 id);
 	u32	(*set_apic_id)(u32 apicid);
--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -66,11 +66,6 @@ static u32 set_apic_id(u32 id)
 	return (id & 0xFF) << 24;
 }
 
-static u32 flat_phys_pkg_id(u32 initial_apic_id, int index_msb)
-{
-	return initial_apic_id >> index_msb;
-}
-
 static int flat_probe(void)
 {
 	return 1;
@@ -88,7 +83,6 @@ static struct apic apic_flat __ro_after_
 
 	.init_apic_ldr			= default_init_apic_ldr,
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
-	.phys_pkg_id			= flat_phys_pkg_id,
 
 	.max_apic_id			= 0xFE,
 	.get_apic_id			= flat_get_apic_id,
@@ -158,7 +152,6 @@ static struct apic apic_physflat __ro_af
 	.disable_esr			= 0,
 
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
-	.phys_pkg_id			= flat_phys_pkg_id,
 
 	.max_apic_id			= 0xFE,
 	.get_apic_id			= flat_get_apic_id,
--- a/arch/x86/kernel/apic/apic_noop.c
+++ b/arch/x86/kernel/apic/apic_noop.c
@@ -29,7 +29,6 @@ static void noop_send_IPI_self(int vecto
 static void noop_apic_icr_write(u32 low, u32 id) { }
 static int noop_wakeup_secondary_cpu(u32 apicid, unsigned long start_eip) { return -1; }
 static u64 noop_apic_icr_read(void) { return 0; }
-static u32 noop_phys_pkg_id(u32 cpuid_apic, int index_msb) { return 0; }
 static u32 noop_get_apic_id(u32 apicid) { return 0; }
 static void noop_apic_eoi(void) { }
 
@@ -55,8 +54,6 @@ struct apic apic_noop __ro_after_init =
 	.ioapic_phys_id_map		= default_ioapic_phys_id_map,
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
 
-	.phys_pkg_id			= noop_phys_pkg_id,
-
 	.max_apic_id			= 0xFE,
 	.get_apic_id			= noop_get_apic_id,
 
--- a/arch/x86/kernel/apic/apic_numachip.c
+++ b/arch/x86/kernel/apic/apic_numachip.c
@@ -56,11 +56,6 @@ static u32 numachip2_set_apic_id(u32 id)
 	return id << 24;
 }
 
-static u32 numachip_phys_pkg_id(u32 initial_apic_id, int index_msb)
-{
-	return initial_apic_id >> index_msb;
-}
-
 static void numachip1_apic_icr_write(int apicid, unsigned int val)
 {
 	write_lcsr(CSR_G3_EXT_IRQ_GEN, (apicid << 16) | val);
@@ -227,7 +222,6 @@ static const struct apic apic_numachip1
 	.disable_esr			= 0,
 
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
-	.phys_pkg_id			= numachip_phys_pkg_id,
 
 	.max_apic_id			= UINT_MAX,
 	.get_apic_id			= numachip1_get_apic_id,
@@ -263,7 +257,6 @@ static const struct apic apic_numachip2
 	.disable_esr			= 0,
 
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
-	.phys_pkg_id			= numachip_phys_pkg_id,
 
 	.max_apic_id			= UINT_MAX,
 	.get_apic_id			= numachip2_get_apic_id,
--- a/arch/x86/kernel/apic/bigsmp_32.c
+++ b/arch/x86/kernel/apic/bigsmp_32.c
@@ -29,11 +29,6 @@ static void bigsmp_ioapic_phys_id_map(ph
 	physids_promote(0xFFL, retmap);
 }
 
-static u32 bigsmp_phys_pkg_id(u32 cpuid_apic, int index_msb)
-{
-	return cpuid_apic >> index_msb;
-}
-
 static void bigsmp_send_IPI_allbutself(int vector)
 {
 	default_send_IPI_mask_allbutself_phys(cpu_online_mask, vector);
@@ -87,7 +82,6 @@ static struct apic apic_bigsmp __ro_afte
 	.check_apicid_used		= bigsmp_check_apicid_used,
 	.ioapic_phys_id_map		= bigsmp_ioapic_phys_id_map,
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
-	.phys_pkg_id			= bigsmp_phys_pkg_id,
 
 	.max_apic_id			= 0xFE,
 	.get_apic_id			= bigsmp_get_apic_id,
--- a/arch/x86/kernel/apic/local.h
+++ b/arch/x86/kernel/apic/local.h
@@ -17,7 +17,6 @@
 void __x2apic_send_IPI_dest(unsigned int apicid, int vector, unsigned int dest);
 u32 x2apic_get_apic_id(u32 id);
 u32 x2apic_set_apic_id(u32 id);
-u32 x2apic_phys_pkg_id(u32 initial_apicid, int index_msb);
 
 void x2apic_send_IPI_all(int vector);
 void x2apic_send_IPI_allbutself(int vector);
--- a/arch/x86/kernel/apic/probe_32.c
+++ b/arch/x86/kernel/apic/probe_32.c
@@ -18,11 +18,6 @@
 
 #include "local.h"
 
-static u32 default_phys_pkg_id(u32 cpuid_apic, int index_msb)
-{
-	return cpuid_apic >> index_msb;
-}
-
 static u32 default_get_apic_id(u32 x)
 {
 	unsigned int ver = GET_APIC_VERSION(apic_read(APIC_LVR));
@@ -53,7 +48,6 @@ static struct apic apic_default __ro_aft
 	.init_apic_ldr			= default_init_apic_ldr,
 	.ioapic_phys_id_map		= default_ioapic_phys_id_map,
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
-	.phys_pkg_id			= default_phys_pkg_id,
 
 	.max_apic_id			= 0xFE,
 	.get_apic_id			= default_get_apic_id,
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -235,7 +235,6 @@ static struct apic apic_x2apic_cluster _
 	.init_apic_ldr			= init_x2apic_ldr,
 	.ioapic_phys_id_map		= NULL,
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
-	.phys_pkg_id			= x2apic_phys_pkg_id,
 
 	.max_apic_id			= UINT_MAX,
 	.x2apic_set_max_apicid		= true,
--- a/arch/x86/kernel/apic/x2apic_phys.c
+++ b/arch/x86/kernel/apic/x2apic_phys.c
@@ -134,11 +134,6 @@ u32 x2apic_set_apic_id(u32 id)
 	return id;
 }
 
-u32 x2apic_phys_pkg_id(u32 initial_apicid, int index_msb)
-{
-	return initial_apicid >> index_msb;
-}
-
 static struct apic apic_x2apic_phys __ro_after_init = {
 
 	.name				= "physical x2apic",
@@ -150,7 +145,6 @@ static struct apic apic_x2apic_phys __ro
 	.disable_esr			= 0,
 
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
-	.phys_pkg_id			= x2apic_phys_pkg_id,
 
 	.max_apic_id			= UINT_MAX,
 	.x2apic_set_max_apicid		= true,
--- a/arch/x86/kernel/apic/x2apic_uv_x.c
+++ b/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -784,16 +784,6 @@ static u32 set_apic_id(u32 id)
 	return id;
 }
 
-static unsigned int uv_read_apic_id(void)
-{
-	return x2apic_get_apic_id(apic_read(APIC_ID));
-}
-
-static u32 uv_phys_pkg_id(u32 initial_apicid, int index_msb)
-{
-	return uv_read_apic_id() >> index_msb;
-}
-
 static int uv_probe(void)
 {
 	return apic == &apic_x2apic_uv_x;
@@ -810,7 +800,6 @@ static struct apic apic_x2apic_uv_x __ro
 	.disable_esr			= 0,
 
 	.cpu_present_to_apicid		= default_cpu_present_to_apicid,
-	.phys_pkg_id			= uv_phys_pkg_id,
 
 	.max_apic_id			= UINT_MAX,
 	.get_apic_id			= x2apic_get_apic_id,
--- a/arch/x86/kernel/vsmp_64.c
+++ b/arch/x86/kernel/vsmp_64.c
@@ -127,25 +127,12 @@ static void __init vsmp_cap_cpus(void)
 #endif
 }
 
-static u32 apicid_phys_pkg_id(u32 initial_apic_id, int index_msb)
-{
-	return read_apic_id() >> index_msb;
-}
-
-static void vsmp_apic_post_init(void)
-{
-	/* need to update phys_pkg_id */
-	apic->phys_pkg_id = apicid_phys_pkg_id;
-}
-
 void __init vsmp_init(void)
 {
 	detect_vsmp_box();
 	if (!is_vsmp_box())
 		return;
 
-	x86_platform.apic_post_init = vsmp_apic_post_init;
-
 	vsmp_cap_cpus();
 
 	set_vsmp_ctl();
--- a/arch/x86/xen/apic.c
+++ b/arch/x86/xen/apic.c
@@ -110,11 +110,6 @@ static int xen_madt_oem_check(char *oem_
 	return xen_pv_domain();
 }
 
-static u32 xen_phys_pkg_id(u32 initial_apic_id, int index_msb)
-{
-	return initial_apic_id >> index_msb;
-}
-
 static u32 xen_cpu_present_to_apicid(int cpu)
 {
 	if (cpu_present(cpu))
@@ -133,7 +128,6 @@ static struct apic xen_pv_apic __ro_afte
 	.disable_esr			= 0,
 
 	.cpu_present_to_apicid		= xen_cpu_present_to_apicid,
-	.phys_pkg_id			= xen_phys_pkg_id, /* detect_ht */
 
 	.max_apic_id			= UINT_MAX,
 	.get_apic_id			= xen_get_apic_id,


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 18/19] x86/xen/smp_pv: Remove cpudata fiddling
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (16 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 17/19] x86/apic: Remove unused phys_pkg_id() callback Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-23 12:53 ` [patch v5 19/19] x86/apic/uv: Remove the private leaf 0xb parser Thomas Gleixner
  2024-01-31  7:40 ` [patch v5 00/19] x86/cpu: Rework topology evaluation Zhang, Rui
  19 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

The new topology CPUID parser already installs a fake topology for XEN/PV,
which ends up with cpuinfo::x86_max_cores = 1.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/xen/smp_pv.c |    3 ---
 1 file changed, 3 deletions(-)
---
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -73,7 +73,6 @@ static void cpu_bringup(void)
 	}
 	cpu = smp_processor_id();
 	smp_store_cpu_info(cpu);
-	cpu_data(cpu).x86_max_cores = 1;
 	set_cpu_sibling_map(cpu);
 
 	speculative_store_bypass_ht_init();
@@ -224,8 +223,6 @@ static void __init xen_pv_smp_prepare_cp
 
 	smp_prepare_cpus_common();
 
-	cpu_data(0).x86_max_cores = 1;
-
 	speculative_store_bypass_ht_init();
 
 	xen_pmu_init(0);


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [patch v5 19/19] x86/apic/uv: Remove the private leaf 0xb parser
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (17 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 18/19] x86/xen/smp_pv: Remove cpudata fiddling Thomas Gleixner
@ 2024-01-23 12:53 ` Thomas Gleixner
  2024-01-31  7:40 ` [patch v5 00/19] x86/cpu: Rework topology evaluation Zhang, Rui
  19 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-01-23 12:53 UTC (permalink / raw)
  To: LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, Sohil Mehta, K Prateek Nayak,
	Kan Liang, Zhang Rui, Paul E. McKenney, Feng Tang,
	Andy Shevchenko, Michael Kelley, Peter Zijlstra (Intel)

From: Thomas Gleixner <tglx@linutronix.de>

The package shift has already been evaluated by the early CPU init.

Put the mindless copy right next to the original leaf 0xb parser.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Tested-by: Sohil Mehta <sohil.mehta@intel.com>
Tested-by: Michael Kelley <mhklinux@outlook.com>


---
 arch/x86/include/asm/topology.h    |    5 +++
 arch/x86/kernel/apic/x2apic_uv_x.c |   52 ++++++-------------------------------
 2 files changed, 14 insertions(+), 43 deletions(-)
---
--- a/arch/x86/include/asm/topology.h
+++ b/arch/x86/include/asm/topology.h
@@ -126,6 +126,11 @@ static inline unsigned int topology_get_
 	return x86_topo_system.dom_size[dom];
 }
 
+static inline unsigned int topology_get_domain_shift(enum x86_topology_domains dom)
+{
+	return dom == TOPO_SMT_DOMAIN ? 0 : x86_topo_system.dom_shifts[dom - 1];
+}
+
 extern const struct cpumask *cpu_coregroup_mask(int cpu);
 extern const struct cpumask *cpu_clustergroup_mask(int cpu);
 
--- a/arch/x86/kernel/apic/x2apic_uv_x.c
+++ b/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -241,54 +241,20 @@ static void __init uv_tsc_check_sync(voi
 	is_uv(UV3) ? sname.s3.field :		\
 	undef)
 
-/* [Copied from arch/x86/kernel/cpu/topology.c:detect_extended_topology()] */
-
-#define SMT_LEVEL			0	/* Leaf 0xb SMT level */
-#define INVALID_TYPE			0	/* Leaf 0xb sub-leaf types */
-#define SMT_TYPE			1
-#define CORE_TYPE			2
-#define LEAFB_SUBTYPE(ecx)		(((ecx) >> 8) & 0xff)
-#define BITS_SHIFT_NEXT_LEVEL(eax)	((eax) & 0x1f)
-
-static void set_x2apic_bits(void)
-{
-	unsigned int eax, ebx, ecx, edx, sub_index;
-	unsigned int sid_shift;
-
-	cpuid(0, &eax, &ebx, &ecx, &edx);
-	if (eax < 0xb) {
-		pr_info("UV: CPU does not have CPUID.11\n");
-		return;
-	}
-
-	cpuid_count(0xb, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
-	if (ebx == 0 || (LEAFB_SUBTYPE(ecx) != SMT_TYPE)) {
-		pr_info("UV: CPUID.11 not implemented\n");
-		return;
-	}
-
-	sid_shift = BITS_SHIFT_NEXT_LEVEL(eax);
-	sub_index = 1;
-	do {
-		cpuid_count(0xb, sub_index, &eax, &ebx, &ecx, &edx);
-		if (LEAFB_SUBTYPE(ecx) == CORE_TYPE) {
-			sid_shift = BITS_SHIFT_NEXT_LEVEL(eax);
-			break;
-		}
-		sub_index++;
-	} while (LEAFB_SUBTYPE(ecx) != INVALID_TYPE);
-
-	uv_cpuid.apicid_shift	= 0;
-	uv_cpuid.apicid_mask	= (~(-1 << sid_shift));
-	uv_cpuid.socketid_shift = sid_shift;
-}
-
 static void __init early_get_apic_socketid_shift(void)
 {
+	unsigned int sid_shift = topology_get_domain_shift(TOPO_PKG_DOMAIN);
+
 	if (is_uv2_hub() || is_uv3_hub())
 		uvh_apicid.v = uv_early_read_mmr(UVH_APICID);
 
-	set_x2apic_bits();
+	if (sid_shift) {
+		uv_cpuid.apicid_shift	= 0;
+		uv_cpuid.apicid_mask	= (~(-1 << sid_shift));
+		uv_cpuid.socketid_shift = sid_shift;
+	} else {
+		pr_info("UV: CPU does not have valid CPUID.11\n");
+	}
 
 	pr_info("UV: apicid_shift:%d apicid_mask:0x%x\n", uv_cpuid.apicid_shift, uv_cpuid.apicid_mask);
 	pr_info("UV: socketid_shift:%d pnode_mask:0x%x\n", uv_cpuid.socketid_shift, uv_cpuid.pnode_mask);


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 01/19] x86/cpu: Provide cpuid_read() et al.
  2024-01-23 12:53 ` [patch v5 01/19] x86/cpu: Provide cpuid_read() et al Thomas Gleixner
@ 2024-01-24 12:25   ` Borislav Petkov
  2024-01-24 20:02     ` Borislav Petkov
  0 siblings, 1 reply; 45+ messages in thread
From: Borislav Petkov @ 2024-01-24 12:25 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:30PM +0100, Thomas Gleixner wrote:
> +static inline void __cpuid_read(unsigned int leaf, unsigned int subleaf, u32 *regs)
> +{
> +	regs[CPUID_EAX] = leaf;
> +	regs[CPUID_ECX] = subleaf;
> +	__cpuid(regs, regs + 1, regs + 2, regs + 3);

You have defines for the regs - might as well use them:

	__cpuid(regs, regs + CPUID_EBX, regs + CPUID_ECX, regs + CPUID_EDX);

> +}
> +
> +#define cpuid_subleaf(leaf, subleaf, regs) {		\
> +	BUILD_BUG_ON(sizeof(*(regs)) != 16);		\
> +	__cpuid_read(leaf, subleaf, (u32 *)(regs));	\
> +}
> +
> +#define cpuid_leaf(leaf, regs) {			\
> +	BUILD_BUG_ON(sizeof(*(regs)) != 16);		\
> +	__cpuid_read(leaf, 0, (u32 *)(regs));		\
> +}
> +
> +static inline void __cpuid_read_reg(unsigned int leaf, unsigned int subleaf,
> +				    enum cpuid_regs_idx regidx, u32 *reg)
> +{
> +	u32 regs[4];
> +
> +	__cpuid_read(leaf, subleaf, regs);
> +	*reg = regs[regidx];

Why not do

	return regs[regidx];

instead?
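
For reference, a minimal sketch of the return-based variant suggested here
(purely illustrative, reusing __cpuid_read() and enum cpuid_regs_idx from the
patch above; the follow-up mails explain why the output-pointer form was kept):

static inline u32 __cpuid_read_reg(unsigned int leaf, unsigned int subleaf,
				   enum cpuid_regs_idx regidx)
{
	u32 regs[4];

	__cpuid_read(leaf, subleaf, regs);
	return regs[regidx];
}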

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 01/19] x86/cpu: Provide cpuid_read() et al.
  2024-01-24 12:25   ` Borislav Petkov
@ 2024-01-24 20:02     ` Borislav Petkov
  2024-02-12 13:57       ` Thomas Gleixner
  0 siblings, 1 reply; 45+ messages in thread
From: Borislav Petkov @ 2024-01-24 20:02 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Wed, Jan 24, 2024 at 01:25:12PM +0100, Borislav Petkov wrote:
> > +static inline void __cpuid_read_reg(unsigned int leaf, unsigned int subleaf,
> > +				    enum cpuid_regs_idx regidx, u32 *reg)
> > +{
> > +	u32 regs[4];
> > +
> > +	__cpuid_read(leaf, subleaf, regs);
> > +	*reg = regs[regidx];
> 
> Why not do
> 
> 	return regs[regidx];
> 
> instead?

Or do you really want to be able to use anonymous structs with bitfields
in them and then convert them to a u32 * when passing in to
cpuid_leaf_reg() etc in order to save yourself all the masking and
shifting and read out the bitfields directly?

I'm looking at the parse_topology() use case.

Looks like it...
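
A rough sketch of that usage pattern, with a made-up struct layout just to
show the mechanics (the real layouts appear in the later topology patches):

	struct {
		// eax
		u32	shift	:  5,	// hypothetical: bits to strip from the APIC ID
			__rsvd0	: 27;	// remainder of eax
		// ebx, ecx, edx are not needed for this example
		u32	ebx, ecx, edx;
	} leaf;

	cpuid_leaf(0xb, &leaf);		// BUILD_BUG_ON() checks sizeof(leaf) == 16
	pr_info("shift: %u\n", leaf.shift);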

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 03/19] x86/cpu: Add legacy topology parser
  2024-01-23 12:53 ` [patch v5 03/19] x86/cpu: Add legacy topology parser Thomas Gleixner
@ 2024-01-24 20:12   ` Borislav Petkov
  0 siblings, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-01-24 20:12 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:34PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> The legacy topology detection via CPUID leaf 4, which provides the number
> of cores in the package and CPUID leaf 1 which provides the number of
> logical CPUs in case that FEATURE_HT is enabled and the CMP_LEGACY feature
> is not set, is shared for Intel, Centaur amd Zhaoxin CPUs.
					   ^^^

x86 maintainer Freudian slip. :-P

Happens to me too.

> Lift the code from common.c without the early detection hack and provide it
> as a common fallback mechanism.
> 
> Will be utilized in later changes.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Juergen Gross <jgross@suse.com>
> Tested-by: Sohil Mehta <sohil.mehta@intel.com>
> Tested-by: Michael Kelley <mhklinux@outlook.com>
> 
> 
> ---
>  arch/x86/kernel/cpu/common.c          |    3 ++
>  arch/x86/kernel/cpu/topology.h        |    3 ++
>  arch/x86/kernel/cpu/topology_common.c |   46 +++++++++++++++++++++++++++++++++-
>  3 files changed, 51 insertions(+), 1 deletion(-)
> ---
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -891,6 +891,9 @@ void detect_ht(struct cpuinfo_x86 *c)
>  #ifdef CONFIG_SMP
>  	int index_msb, core_bits;
>  
> +	if (topo_is_converted(c))
> +		return;
> +
>  	if (detect_ht_early(c) < 0)
>  		return;
>  
> --- a/arch/x86/kernel/cpu/topology.h
> +++ b/arch/x86/kernel/cpu/topology.h
> @@ -6,6 +6,9 @@ struct topo_scan {
>  	struct cpuinfo_x86	*c;
>  	unsigned int		dom_shifts[TOPO_MAX_DOMAIN];
>  	unsigned int		dom_ncpus[TOPO_MAX_DOMAIN];
> +
> +	// Legacy CPUID[1]:EBX[23:16] number of logical processors

Can we pretty please use the good 'ol multi-line comment style and not
turn tip into a mess with a mixture between single-line and multi-line
comments?

Thanks.

> +	unsigned int		ebx1_nproc_shift;
>  };
>  
>  bool topo_is_converted(struct cpuinfo_x86 *c);


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 04/19] x86/cpu: Use common topology code for Centaur and Zhaoxin
  2024-01-23 12:53 ` [patch v5 04/19] x86/cpu: Use common topology code for Centaur and Zhaoxin Thomas Gleixner
@ 2024-01-30 19:09   ` Borislav Petkov
  0 siblings, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-01-30 19:09 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:35PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Centaur and Zhaoxin CPUs use only the legacy SMP detection. Remove the
> invocations from their 32bit path and exempt them from the call 64bit.

"... and exclude them from the 64-bit call path."

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser
  2024-01-23 12:53 ` [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser Thomas Gleixner
@ 2024-01-30 19:31   ` Borislav Petkov
  2024-02-12 14:17     ` Thomas Gleixner
  2024-02-13 14:30     ` [tip: x86/misc] Documentation/maintainer-tip: Add C++ tail comments exception tip-bot2 for Borislav Petkov (AMD)
  0 siblings, 2 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-01-30 19:31 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:39PM +0100, Thomas Gleixner wrote:
> +static inline bool topo_subleaf(struct topo_scan *tscan, u32 leaf, u32 subleaf,

"parse_topo_subleaf"?

With a verb in the name...

> +				unsigned int *last_dom)
> +{
> +	unsigned int dom, maxtype;
> +	const unsigned int *map;
> +	struct {
> +		// eax

Can we please not use those yucky // comments together with the
multiline ones?

> +		u32	x2apic_shift	:  5, // Number of bits to shift APIC ID right
> +					      // for the topology ID at the next level
> +					: 27; // Reserved
> +		// ebx
> +		u32	num_processors	: 16, // Number of processors at current level
> +					: 16; // Reserved
> +		// ecx
> +		u32	level		:  8, // Current topology level. Same as sub leaf number
> +			type		:  8, // Level type. If 0, invalid
> +					: 16; // Reserved
> +		// edx
> +		u32	x2apic_id	: 32; // X2APIC ID of the current logical processor
> +	} sl;

...

> +static bool parse_topology_leaf(struct topo_scan *tscan, u32 leaf)
> +{
> +	unsigned int last_dom;
> +	u32 subleaf;
> +
> +	/* Read all available subleafs and populate the levels */
> +	for (subleaf = 0, last_dom = 0; topo_subleaf(tscan, leaf, subleaf, &last_dom); subleaf++);
> +
> +	/* If subleaf 0 failed to parse, give up */
> +	if (!subleaf)
> +		return false;
> +
> +	/*
> +	 * There are machines in the wild which have shift 0 in the subleaf
> +	 * 0, but advertise 2 logical processors at that level. They are
> +	 * truly SMT.
> +	 */
> +	if (!tscan->dom_shifts[TOPO_SMT_DOMAIN] && tscan->dom_ncpus[TOPO_SMT_DOMAIN] > 1) {
> +		unsigned int sft = get_count_order(tscan->dom_ncpus[TOPO_SMT_DOMAIN]);
> +
> +		pr_warn_once(FW_BUG "CPUID leaf 0x%x subleaf 0 has shift level 0 but %u CPUs\n",
> +			     leaf, tscan->dom_ncpus[TOPO_SMT_DOMAIN]);

Do you really wanna warn about that? Hoping that someone would do
something about it while there's time...?

> +		topology_update_dom(tscan, TOPO_SMT_DOMAIN, sft, tscan->dom_ncpus[TOPO_SMT_DOMAIN]);
> +	}
> +
> +	set_cpu_cap(tscan->c, X86_FEATURE_XTOPOLOGY);
> +	return true;
> +}
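
To put numbers on the quirk handling (illustrative only): on such a machine
subleaf 0 reports shift 0 but two logical processors, so sft becomes
get_count_order(2) == 1 and the call above turns into

	topology_update_dom(tscan, TOPO_SMT_DOMAIN, 1, 2);

which restores the single APIC ID bit that real SMT needs.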

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 00/19] x86/cpu: Rework topology evaluation
  2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
                   ` (18 preceding siblings ...)
  2024-01-23 12:53 ` [patch v5 19/19] x86/apic/uv: Remove the private leaf 0xb parser Thomas Gleixner
@ 2024-01-31  7:40 ` Zhang, Rui
  19 siblings, 0 replies; 45+ messages in thread
From: Zhang, Rui @ 2024-01-31  7:40 UTC (permalink / raw)
  To: tglx, linux-kernel
  Cc: arjan, mhklinux, andrew.cooper3, ray.huang, thomas.lendacky,
	Wang, Wendy, Sivanich, Dimitri, Tang, Feng, kan.liang, Mehta,
	Sohil, peterz, paulmck, kprateek.nayak, jgross, andy, x86

Hi, Thomas,

Wendy and I have tested all three patch sets on a couple of platforms;
apart from the issue already raised, no other problems were observed.

 Tested-by: Zhang Rui <rui.zhang@intel.com>
 Tested-by: Wang Wendy <wendy.wang@intel.com>

BTW, one behavior change in this patch series is that die_id becomes
platform-unique instead of package-unique.
To my understanding, this won't break any kernel user, but it leaves
some redundant code where both the package ID and the die ID are
compared, for example in match_smt():

	if (c->topo.pkg_id == o->topo.pkg_id &&
	    c->topo.die_id == o->topo.die_id &&
	    c->topo.core_id == o->topo.core_id) {
		return topology_sane(c, o, "smt");
	}
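
If die_id is indeed globally unique, a sketch of the reduced check could look
like this (just to illustrate the point, not a proposed patch):

	if (c->topo.die_id == o->topo.die_id &&
	    c->topo.core_id == o->topo.core_id)
		return topology_sane(c, o, "smt");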

thanks,
rui


On Tue, 2024-01-23 at 13:53 +0100, Thomas Gleixner wrote:
> This is a follow up on V4 of this work:
> 
>   https://lore.kernel.org/all/20230814085006.593997112@linutronix.de
> 
> and contains only the not yet applied part which reworks the CPUID
> parsing. This is also preparatory work for the general overhaul of
> APIC ID
> enumeration and management.
> 
> Changes vs. V4:
> 
>   - Add DIEGRP level explicitly
> 
> This applies on Linus tree and is available from git:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git topo-
> cpuid-v5
> 
> Thanks,
> 
>         tglx
> ---
>  arch/x86/kernel/cpu/topology.c          |  167 ---------------------
> -
>  b/arch/x86/events/amd/core.c            |    2 
>  b/arch/x86/include/asm/apic.h           |    1 
>  b/arch/x86/include/asm/cpuid.h          |   36 ++++
>  b/arch/x86/include/asm/processor.h      |    5 
>  b/arch/x86/include/asm/topology.h       |   39 +++++
>  b/arch/x86/kernel/amd_nb.c              |    4 
>  b/arch/x86/kernel/apic/apic_flat_64.c   |    7 
>  b/arch/x86/kernel/apic/apic_noop.c      |    3 
>  b/arch/x86/kernel/apic/apic_numachip.c  |    7 
>  b/arch/x86/kernel/apic/bigsmp_32.c      |    6 
>  b/arch/x86/kernel/apic/local.h          |    1 
>  b/arch/x86/kernel/apic/probe_32.c       |    6 
>  b/arch/x86/kernel/apic/x2apic_cluster.c |    1 
>  b/arch/x86/kernel/apic/x2apic_phys.c    |    6 
>  b/arch/x86/kernel/apic/x2apic_uv_x.c    |   63 +-------
>  b/arch/x86/kernel/cpu/Makefile          |    3 
>  b/arch/x86/kernel/cpu/amd.c             |  146 -------------------
>  b/arch/x86/kernel/cpu/cacheinfo.c       |    6 
>  b/arch/x86/kernel/cpu/centaur.c         |    4 
>  b/arch/x86/kernel/cpu/common.c          |   91 +-----------
>  b/arch/x86/kernel/cpu/cpu.h             |   13 -
>  b/arch/x86/kernel/cpu/debugfs.c         |   40 +++++
>  b/arch/x86/kernel/cpu/hygon.c           |  129 -----------------
>  b/arch/x86/kernel/cpu/intel.c           |   25 ---
>  b/arch/x86/kernel/cpu/mce/amd.c         |    4 
>  b/arch/x86/kernel/cpu/mce/inject.c      |    7 
>  b/arch/x86/kernel/cpu/topology.h        |   56 +++++++
>  b/arch/x86/kernel/cpu/topology_amd.c    |  182
> ++++++++++++++++++++++++
>  b/arch/x86/kernel/cpu/topology_common.c |  241
> ++++++++++++++++++++++++++++++++
>  b/arch/x86/kernel/cpu/topology_ext.c    |  130 +++++++++++++++++
>  b/arch/x86/kernel/cpu/zhaoxin.c         |    4 
>  b/arch/x86/kernel/smpboot.c             |   12 +
>  b/arch/x86/kernel/vsmp_64.c             |   13 -
>  b/arch/x86/mm/amdtopology.c             |   35 ++--
>  b/arch/x86/xen/apic.c                   |    6 
>  b/arch/x86/xen/smp_pv.c                 |    3 
>  b/drivers/edac/amd64_edac.c             |    4 
>  b/drivers/edac/mce_amd.c                |    4 
>  39 files changed, 792 insertions(+), 720 deletions(-)
> 
> 


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 07/19] x86/cpu: Use common topology code for Intel
  2024-01-23 12:53 ` [patch v5 07/19] x86/cpu: Use common topology code for Intel Thomas Gleixner
@ 2024-02-01 15:07   ` Borislav Petkov
  0 siblings, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-02-01 15:07 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:40PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> Intel CPUs use either topology leaf 0xb/0x1f evaluation or the legacy
> SMP/HT evaluation based on CPUID leaf 0x1/0x4.
> 
> Move it over to the consolidated topology code and remove the random
> topology hacks which are sprinkled into the Intel and the common code.
> 
> No functional change intended.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Juergen Gross <jgross@suse.com>
> Tested-by: Sohil Mehta <sohil.mehta@intel.com>
> Tested-by: Michael Kelley <mhklinux@outlook.com>
> 
> 
> ---
>  arch/x86/kernel/cpu/common.c          |   65 ----------------------------------
>  arch/x86/kernel/cpu/cpu.h             |    4 --
>  arch/x86/kernel/cpu/intel.c           |   25 -------------
>  arch/x86/kernel/cpu/topology_common.c |    5 ++
>  4 files changed, 4 insertions(+), 95 deletions(-)

Right:

arch/x86/kernel/cpu/topology.c:62:5: warning: no previous prototype for ‘detect_extended_topology_early’ [-Wmissing-prototypes]
   62 | int detect_extended_topology_early(struct cpuinfo_x86 *c)
      |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

That one is already unused after this one - might zap it here.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 08/19] x86/cpu/amd: Provide a separate accessor for Node ID
  2024-01-23 12:53 ` [patch v5 08/19] x86/cpu/amd: Provide a separate accessor for Node ID Thomas Gleixner
@ 2024-02-01 15:19   ` Borislav Petkov
  0 siblings, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-02-01 15:19 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:42PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
> 
> AMD (ab)uses topology_die_id() to store the Node ID information and
> topology_max_dies_per_pkg to store the number of nodes per package.

topology_max_die_per_package()

is what I can find.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 09/19] x86/cpu: Provide an AMD/HYGON specific topology parser
  2024-01-23 12:53 ` [patch v5 09/19] x86/cpu: Provide an AMD/HYGON specific topology parser Thomas Gleixner
@ 2024-02-01 15:55   ` Borislav Petkov
  2024-02-02 12:30   ` Borislav Petkov
  1 sibling, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-02-01 15:55 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:43PM +0100, Thomas Gleixner wrote:
> --- a/arch/x86/kernel/cpu/topology.h
> +++ b/arch/x86/kernel/cpu/topology.h
> @@ -9,6 +9,10 @@ struct topo_scan {
>  
>  	// Legacy CPUID[1]:EBX[23:16] number of logical processors

/* comments pls.

>  	unsigned int		ebx1_nproc_shift;
> +
> +	// AMD specific node ID which cannot be mapped into APIC space.
> +	u16			amd_nodes_per_pkg;
> +	u16			amd_node_id;
>  };

...

> +static bool parse_8000_001e(struct topo_scan *tscan, bool has_0xb)
> +{
> +	struct {
> +		// eax
> +		u32	x2apic_id	: 32;

The docs call this ExtendedApicId, not x2apic_id.

> +		// ebx
> +		u32	cuid		:  8,
> +			threads_per_cu	:  8,
> +			__rsvd0		: 16;
> +		// ecx
> +		u32	nodeid		:  8,
> +			nodes_per_pkg	:  3,
> +			__rsvd1		: 21;
> +		// edx
> +		u32	__rsvd2		: 32;
> +	} leaf;
> +
> +	if (!boot_cpu_has(X86_FEATURE_TOPOEXT))

s/boot_cpu_has/cpu_feature_enabled/g

> +		return false;
> +
> +	cpuid_leaf(0x8000001e, &leaf);

...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 02/19] x86/cpu: Provide cpu_init/parse_topology()
  2024-01-23 12:53 ` [patch v5 02/19] x86/cpu: Provide cpu_init/parse_topology() Thomas Gleixner
@ 2024-02-01 22:16   ` Sohil Mehta
  0 siblings, 0 replies; 45+ messages in thread
From: Sohil Mehta @ 2024-02-01 22:16 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven, Huang Rui,
	Juergen Gross, Dimitri Sivanich, K Prateek Nayak, Kan Liang,
	Zhang Rui, Paul E. McKenney, Feng Tang, Andy Shevchenko,
	Michael Kelley, Peter Zijlstra (Intel)


> --- /dev/null
> +++ b/arch/x86/kernel/cpu/topology_common.c

> +static void parse_topology(struct topo_scan *tscan, bool early)
> +{
> +	const struct cpuinfo_topology topo_defaults = {
> +		.cu_id			= 0xff,
> +		.llc_id			= BAD_APICID,
> +		.l2c_id			= BAD_APICID,
> +	};
> +	struct cpuinfo_x86 *c = tscan->c;
> +	struct {
> +		u32	unused0		: 16,
> +			nproc		:  8,
> +			apicid		:  8;
> +	} ebx;
> +
> +	c->topo = topo_defaults;
> +
> +	if (fake_topology(tscan))
> +	    return;
> +

Need a tab here instead of 4 spaces.



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 09/19] x86/cpu: Provide an AMD/HYGON specific topology parser
  2024-01-23 12:53 ` [patch v5 09/19] x86/cpu: Provide an AMD/HYGON specific topology parser Thomas Gleixner
  2024-02-01 15:55   ` Borislav Petkov
@ 2024-02-02 12:30   ` Borislav Petkov
  1 sibling, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-02-02 12:30 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:43PM +0100, Thomas Gleixner wrote:
> +static bool parse_8000_0008(struct topo_scan *tscan)
> +{
> +	struct {
> +		u32	ncores		:  8,

Yeah, so there was some confusion about what this field actually means. It is
documented correctly in the latest APM:

"NT: number of physical threads - 1. The number of threads in the
processor is NT+1 (e.g., if NT = 0, then there is one thread). See
“Legacy Method” on page 645."

> +			__rsvd0		:  4,
> +			apicidsize	:  4,
> +			perftscsize	:  2,
> +			__rsvd1		: 14;
> +	} ecx;
> +	unsigned int sft;
> +
> +	if (tscan->c->extended_cpuid_level < 0x80000008)
> +		return false;
> +
> +	cpuid_leaf_reg(0x80000008, CPUID_ECX, &ecx);
> +
> +	/* If the APIC ID size is 0, then get the shift value from ecx.ncores */
> +	sft = ecx.apicidsize;
> +	if (!sft)
> +		sft = get_count_order(ecx.ncores + 1);
> +
> +	topology_set_dom(tscan, TOPO_CORE_DOMAIN, sft, ecx.ncores + 1);

So yeah, this should be TOPO_SMT_DOMAIN.

Thx.
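
A worked example of the fallback path, with made-up numbers: apicidsize == 0
and ncores == 15 (i.e. 16 threads) gives sft = get_count_order(16) == 4, so
the shift covers APIC ID bits [3:0]; per the remark above, that count would
then be accounted to TOPO_SMT_DOMAIN rather than TOPO_CORE_DOMAIN.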

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 10/19] x86/smpboot: Teach it about topo.amd_node_id
  2024-01-23 12:53 ` [patch v5 10/19] x86/smpboot: Teach it about topo.amd_node_id Thomas Gleixner
@ 2024-02-06 15:48   ` Borislav Petkov
  0 siblings, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-02-06 15:48 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:45PM +0100, Thomas Gleixner wrote:
>  static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
>  {
> -	if (c->topo.pkg_id == o->topo.pkg_id &&
> -	    c->topo.die_id == o->topo.die_id)
> -		return true;
> -	return false;
> +	if (c->topo.pkg_id != o->topo.pkg_id || c->topo.die_id != o->topo.die_id)
> +		return false;
> +
> +	if (boot_cpu_has(X86_FEATURE_TOPOEXT) && topology_amd_nodes_per_pkg() > 1)

check_for_deprecated_apis: WARNING: arch/x86/kernel/smpboot.c:516: Do not use boot_cpu_has() - use cpu_feature_enabled() instead.
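
I.e. the line would presumably become (sketch of the suggested change, keeping
the rest of the condition from the hunk above):

	if (cpu_feature_enabled(X86_FEATURE_TOPOEXT) && topology_amd_nodes_per_pkg() > 1)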

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 11/19] x86/cpu: Use common topology code for AMD
  2024-01-23 12:53 ` [patch v5 11/19] x86/cpu: Use common topology code for AMD Thomas Gleixner
@ 2024-02-06 15:58   ` Borislav Petkov
  2024-02-12 14:50     ` Thomas Gleixner
  0 siblings, 1 reply; 45+ messages in thread
From: Borislav Petkov @ 2024-02-06 15:58 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:47PM +0100, Thomas Gleixner wrote:
> --- a/arch/x86/kernel/cpu/mce/inject.c
> +++ b/arch/x86/kernel/cpu/mce/inject.c
> @@ -433,8 +433,7 @@ static u32 get_nbc_for_node(int node_id)
>  	struct cpuinfo_x86 *c = &boot_cpu_data;
>  	u32 cores_per_node;
>  
> -	cores_per_node = (c->x86_max_cores * smp_num_siblings) / amd_get_nodes_per_socket();
> -
> +	cores_per_node = (c->x86_max_cores * smp_num_siblings) / topology_amd_nodes_per_pkg();
>  	return cores_per_node * node_id;
>  }
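
Plugging in illustrative numbers (not from any particular part): with
c->x86_max_cores == 8, smp_num_siblings == 2 and two nodes per package,
cores_per_node == 8 and get_nbc_for_node(1) returns 8, the node base core
for node 1.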

One more hunk depending on what goes in when and in what order, to fix
a build issue from the RAS tree:

ERROR: modpost: "amd_get_nodes_per_socket" [drivers/ras/amd/atl/amd_atl.ko] undefined!
make[2]: *** [scripts/Makefile.modpost:145: Module.symvers] Error 1
make[1]: *** [/mnt/kernel/kernel/2nd/linux/Makefile:1873: modpost] Error 2
make: *** [Makefile:240: __sub-make] Error 2

---

diff --git a/drivers/ras/amd/atl/umc.c b/drivers/ras/amd/atl/umc.c
index 7e310d1dfcfc..283812bd8497 100644
--- a/drivers/ras/amd/atl/umc.c
+++ b/drivers/ras/amd/atl/umc.c
@@ -264,7 +264,7 @@ static u8 get_die_id(struct atl_err *err)
 	 * For CPUs, this is the AMD Node ID modulo the number
 	 * of AMD Nodes per socket.
 	 */
-	return topology_die_id(err->cpu) % amd_get_nodes_per_socket();
+	return topology_die_id(err->cpu) % topology_amd_nodes_per_pkg();
 }
 
 #define UMC_CHANNEL_NUM	GENMASK(31, 20)

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [patch v5 01/19] x86/cpu: Provide cpuid_read() et al.
  2024-01-24 20:02     ` Borislav Petkov
@ 2024-02-12 13:57       ` Thomas Gleixner
  0 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-02-12 13:57 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Wed, Jan 24 2024 at 21:02, Borislav Petkov wrote:
> On Wed, Jan 24, 2024 at 01:25:12PM +0100, Borislav Petkov wrote:
>> > +static inline void __cpuid_read_reg(unsigned int leaf, unsigned int subleaf,
>> > +				    enum cpuid_regs_idx regidx, u32 *reg)
>> > +{
>> > +	u32 regs[4];
>> > +
>> > +	__cpuid_read(leaf, subleaf, regs);
>> > +	*reg = regs[regidx];
>> 
>> Why not do
>> 
>> 	return regs[regidx];
>> 
>> instead?
>
> Or do you really want to be able to use anonymous structs with bitfields
> in them and then convert them to a u32 * when passing in to
> cpuid_leaf_reg() etc in order to save yourself all the masking and
> shifting and read out the bitfields directly?
>
> I'm looking at the parse_topology() use case.
>
> Looks like it...

Yes, that's the idea.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser
  2024-01-30 19:31   ` Borislav Petkov
@ 2024-02-12 14:17     ` Thomas Gleixner
  2024-02-12 15:00       ` Borislav Petkov
  2024-02-12 15:03       ` Thomas Gleixner
  2024-02-13 14:30     ` [tip: x86/misc] Documentation/maintainer-tip: Add C++ tail comments exception tip-bot2 for Borislav Petkov (AMD)
  1 sibling, 2 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-02-12 14:17 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 30 2024 at 20:31, Borislav Petkov wrote:
> On Tue, Jan 23, 2024 at 01:53:39PM +0100, Thomas Gleixner wrote:
>> +static inline bool topo_subleaf(struct topo_scan *tscan, u32 leaf, u32 subleaf,
>
> "parse_topo_subleaf"?
>
> With a verb in the name...
>
>> +				unsigned int *last_dom)
>> +{
>> +	unsigned int dom, maxtype;
>> +	const unsigned int *map;
>> +	struct {
>> +		// eax
>
> Can we please not use those yucky // comments together with the
> multiline ones?

TBH, the // comment style is really better for struct definitions. It's
denser and easier to parse.

		// eax
		u32	x2apic_shift	:  5, // Number of bits to shift APIC ID right
					      // for the topology ID at the next level
					: 27; // Reserved
		// ebx
		u32	num_processors	: 16, // Number of processors at current level
					: 16; // Reserved

versus:

		/* eax */
		u32	x2apic_shift	:  5, /*
                                               * Number of bits to shift APIC ID right
					       * for the topology ID at	the next level
                                               */
					: 27; /* Reserved */

		/* ebx */
		u32	num_processors	: 16, /* Number of processors at current level */
					: 16; /* Reserved */

Especially x2apic_shift is horrible and the comments of EBX are visually
impaired while with the C++ comments x2apic_shift looks natural and the
EBX comments are just open to the right and therefore simpler.

>> +	if (!tscan->dom_shifts[TOPO_SMT_DOMAIN] && tscan->dom_ncpus[TOPO_SMT_DOMAIN] > 1) {
>> +		unsigned int sft = get_count_order(tscan->dom_ncpus[TOPO_SMT_DOMAIN]);
>> +
>> +		pr_warn_once(FW_BUG "CPUID leaf 0x%x subleaf 0 has shift level 0 but %u CPUs\n",
>> +			     leaf, tscan->dom_ncpus[TOPO_SMT_DOMAIN]);
>
> Do you really wanna warn about that? Hoping that someone would do
> something about it while there's time...?

If it's caught in early testing, this should be fixed, no?

Thanks,

        tglx

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 11/19] x86/cpu: Use common topology code for AMD
  2024-02-06 15:58   ` Borislav Petkov
@ 2024-02-12 14:50     ` Thomas Gleixner
  2024-02-12 15:06       ` Borislav Petkov
  0 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-02-12 14:50 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Feb 06 2024 at 16:58, Borislav Petkov wrote:
> On Tue, Jan 23, 2024 at 01:53:47PM +0100, Thomas Gleixner wrote:
>> --- a/arch/x86/kernel/cpu/mce/inject.c
>> +++ b/arch/x86/kernel/cpu/mce/inject.c
>> @@ -433,8 +433,7 @@ static u32 get_nbc_for_node(int node_id)
>>  	struct cpuinfo_x86 *c = &boot_cpu_data;
>>  	u32 cores_per_node;
>>  
>> -	cores_per_node = (c->x86_max_cores * smp_num_siblings) / amd_get_nodes_per_socket();
>> -
>> +	cores_per_node = (c->x86_max_cores * smp_num_siblings) / topology_amd_nodes_per_pkg();
>>  	return cores_per_node * node_id;
>>  }
>
> One more hunk depending on what goes in when and in what order, to fix
> a build issue from the RAS tree:
>
> ERROR: modpost: "amd_get_nodes_per_socket" [drivers/ras/amd/atl/amd_atl.ko] undefined!
> make[2]: *** [scripts/Makefile.modpost:145: Module.symvers] Error 1
> make[1]: *** [/mnt/kernel/kernel/2nd/linux/Makefile:1873: modpost] Error 2
> make: *** [Makefile:240: __sub-make] Error 2

Hrm. That is unfortunate, but we really don't want to mix this with the
RAS tree. So this needs a fixup in next and in the pull requests.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser
  2024-02-12 14:17     ` Thomas Gleixner
@ 2024-02-12 15:00       ` Borislav Petkov
  2024-02-12 15:08         ` Thomas Gleixner
  2024-02-12 15:03       ` Thomas Gleixner
  1 sibling, 1 reply; 45+ messages in thread
From: Borislav Petkov @ 2024-02-12 15:00 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Mon, Feb 12, 2024 at 03:17:45PM +0100, Thomas Gleixner wrote:
> Especially x2apic_shift is horrible and the comments of EBX are visually
> impaired while with the C++ comments x2apic_shift looks natural and the
> EBX comments are just open to the right and therefore simpler.

I'd say, put comments *above* the member versus on the side. We don't
like side comments, if you remember. :-)

And, for example, the commenting in arch/x86/include/asm/fpu/types.h is
not half as bad and works real nice for struct definitions, I'd say.

But if you want to make that into a rule to have C++, side comments for
struct members I guess I'll get accustomed to it eventually.

> If it's caught in early testing, this should be fixed, no?

Hope dies last. :)

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser
  2024-02-12 14:17     ` Thomas Gleixner
  2024-02-12 15:00       ` Borislav Petkov
@ 2024-02-12 15:03       ` Thomas Gleixner
  2024-02-12 15:05         ` Borislav Petkov
  1 sibling, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-02-12 15:03 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Mon, Feb 12 2024 at 15:17, Thomas Gleixner wrote:
> On Tue, Jan 30 2024 at 20:31, Borislav Petkov wrote:
> TBH, the // comment style is really better for struct definitions. It's
> denser and easier to parse.
>
> 		// eax
> 		u32	x2apic_shift	:  5, // Number of bits to shift APIC ID right
> 					      // for the topology ID at the next level
> 					: 27; // Reserved
> 		// ebx
> 		u32	num_processors	: 16, // Number of processors at current level
> 					: 16; // Reserved
>
> versus:
>
> 		/* eax */
> 		u32	x2apic_shift	:  5, /*
>                                                * Number of bits to shift APIC ID right
> 					       * for the topology ID at	the next level
>                                                */
> 					: 27; /* Reserved */
>
> 		/* ebx */
> 		u32	num_processors	: 16, /* Number of processors at current level */
> 					: 16; /* Reserved */
>
> Especially x2apic_shift is horrible and the comments of EBX are visually
> impaired while with the C++ comments x2apic_shift looks natural and the
> EBX comments are just open to the right and therefore simpler.

Aside of that it would make the struct generator in the CPUID data base
more complex.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser
  2024-02-12 15:03       ` Thomas Gleixner
@ 2024-02-12 15:05         ` Borislav Petkov
  0 siblings, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-02-12 15:05 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Mon, Feb 12, 2024 at 04:03:41PM +0100, Thomas Gleixner wrote:
> Aside of that it would make the struct generator in the CPUID data base
> more complex.

Hmm, that's a valid point... that thing being XML is already complex.
:-P

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 11/19] x86/cpu: Use common topology code for AMD
  2024-02-12 14:50     ` Thomas Gleixner
@ 2024-02-12 15:06       ` Borislav Petkov
  0 siblings, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-02-12 15:06 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Mon, Feb 12, 2024 at 03:50:04PM +0100, Thomas Gleixner wrote:
> Hrm. That is unfortunate, but we really don't want to mix the RAS
> tree. So this needs a fixup in next and in the pull requests.

Sure, lemme know how you wanna do the patch tetris and I'll take care of
it.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser
  2024-02-12 15:00       ` Borislav Petkov
@ 2024-02-12 15:08         ` Thomas Gleixner
  2024-02-12 15:43           ` Borislav Petkov
  0 siblings, 1 reply; 45+ messages in thread
From: Thomas Gleixner @ 2024-02-12 15:08 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Mon, Feb 12 2024 at 16:00, Borislav Petkov wrote:
> On Mon, Feb 12, 2024 at 03:17:45PM +0100, Thomas Gleixner wrote:
>> Especially x2apic_shift is horrible and the comments of EBX are visually
>> impaired while with the C++ comments x2apic_shift looks natural and the
>> EBX comments are just open to the right and therefore simpler.
>
> I'd say, put comments *above* the member versus on the side. We don't
> like side comments, if you remember. :-)

In code, no. For struct definitions which are strictly tabular, tail
comments are actually nice as they are more compact and take less space
than the comment-above-the-member variant.

		// eax
		u32	x2apic_shift	:  5, // Number of bits to shift APIC ID right
					      // for the topology ID at the next level
					: 27; // Reserved
		// ebx
		u32	num_processors	: 16, // Number of processors at current level
					: 16; // Reserved

versus:

		/* eax */
                	/*
                         * Number of bits to shift APIC ID right for the topology ID
	                 * at the next level
                         */
		u32	x2apic_shift	:  5,
                	/* Reserved */
					: 27;

		/* ebx */
                	 /* Number of processors at current level */
		u32	num_processors	: 16,
                	 /* Reserved */
					: 16;

This really makes my eyes bleed.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser
  2024-02-12 15:08         ` Thomas Gleixner
@ 2024-02-12 15:43           ` Borislav Petkov
  2024-02-12 23:02             ` Thomas Gleixner
  0 siblings, 1 reply; 45+ messages in thread
From: Borislav Petkov @ 2024-02-12 15:43 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Mon, Feb 12, 2024 at 04:08:31PM +0100, Thomas Gleixner wrote:
> This really makes my eyes bleed.

From: Borislav Petkov (AMD) <bp@alien8.de>
Date:   Mon Feb 12 16:41:42 2024 +0100

Documentation/maintainer-tip: Add C++ tail comments exception

Document when C++-style, tail comments should be used.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>

diff --git a/Documentation/process/maintainer-tip.rst b/Documentation/process/maintainer-tip.rst
index 799359231b7f..497bb39727c8 100644
--- a/Documentation/process/maintainer-tip.rst
+++ b/Documentation/process/maintainer-tip.rst
@@ -480,7 +480,7 @@ Multi-line comments::
 	 * Larger multi-line comments should be split into paragraphs.
 	 */
 
-No tail comments:
+No tail comments (see below):
 
   Please refrain from using tail comments. Tail comments disturb the
   reading flow in almost all contexts, but especially in code::
@@ -501,6 +501,34 @@ No tail comments:
 	/* This magic initialization needs a comment. Maybe not? */
 	seed = MAGIC_CONSTANT;
 
+  Use C++ style, tail comments when documenting structs in headers to
+  achieve a more compact layout and better readability::
+
+        // eax
+        u32     x2apic_shift    :  5, // Number of bits to shift APIC ID right
+                                      // for the topology ID at the next level
+                                : 27; // Reserved
+        // ebx
+        u32     num_processors  : 16, // Number of processors at current level
+                                : 16; // Reserved
+
+  versus::
+
+	/* eax */
+	        /*
+	         * Number of bits to shift APIC ID right for the topology ID
+	         * at the next level
+	         */
+         u32     x2apic_shift    :  5,
+		 /* Reserved */
+				 : 27;
+
+	/* ebx */
+		/* Number of processors at current level */
+	u32     num_processors  : 16,
+		/* Reserved */
+				: 16;
+
 Comment the important things:
 
   Comments should be added where the operation is not obvious. Documenting



-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [patch v5 13/19] x86/mm/numa: Use core domain size on AMD
  2024-01-23 12:53 ` [patch v5 13/19] x86/mm/numa: Use core domain size on AMD Thomas Gleixner
@ 2024-02-12 15:56   ` Borislav Petkov
  0 siblings, 0 replies; 45+ messages in thread
From: Borislav Petkov @ 2024-02-12 15:56 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Tue, Jan 23, 2024 at 01:53:50PM +0100, Thomas Gleixner wrote:
> @@ -158,26 +156,25 @@ int __init amd_numa_init(void)
>  		return -ENOENT;
>  
>  	/*
> -	 * We seem to have valid NUMA configuration.  Map apicids to nodes
> -	 * using the coreid bits from early_identify_cpu.
> +	 * We seem to have valid NUMA configuration. Map apicids to nodes
> +	 * using the size of the core domain in the APIC space.

Since you're touching the comments:

	/*
	 * Valid NUMA configuration detected. Map APICIDs to nodes...

>  	 */
> -	bits = boot_cpu_data.x86_coreid_bits;
> -	cores = 1 << bits;
> -	apicid_base = 0;
> +	cores = topology_get_domain_size(TOPO_CORE_DOMAIN);

num_cores ...

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser
  2024-02-12 15:43           ` Borislav Petkov
@ 2024-02-12 23:02             ` Thomas Gleixner
  0 siblings, 0 replies; 45+ messages in thread
From: Thomas Gleixner @ 2024-02-12 23:02 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, x86, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
	Huang Rui, Juergen Gross, Dimitri Sivanich, Sohil Mehta,
	K Prateek Nayak, Kan Liang, Zhang Rui, Paul E. McKenney,
	Feng Tang, Andy Shevchenko, Michael Kelley,
	Peter Zijlstra (Intel)

On Mon, Feb 12 2024 at 16:43, Borislav Petkov wrote:
> On Mon, Feb 12, 2024 at 04:08:31PM +0100, Thomas Gleixner wrote:
>> This really makes my eyes bleed.
>
> From: Borislav Petkov (AMD) <bp@alien8.de>
> Date:   Mon Feb 12 16:41:42 2024 +0100
>
> Documentation/maintainer-tip: Add C++ tail comments exception
>
> Document when C++-style, tail comments should be used.
>
> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [tip: x86/misc] Documentation/maintainer-tip: Add C++ tail comments exception
  2024-01-30 19:31   ` Borislav Petkov
  2024-02-12 14:17     ` Thomas Gleixner
@ 2024-02-13 14:30     ` tip-bot2 for Borislav Petkov (AMD)
  1 sibling, 0 replies; 45+ messages in thread
From: tip-bot2 for Borislav Petkov (AMD) @ 2024-02-13 14:30 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Borislav Petkov (AMD), Thomas Gleixner, x86, linux-kernel

The following commit has been merged into the x86/misc branch of tip:

Commit-ID:     7dd0a21ccb5a937ca9f798afad34de4ba030f8d4
Gitweb:        https://git.kernel.org/tip/7dd0a21ccb5a937ca9f798afad34de4ba030f8d4
Author:        Borislav Petkov (AMD) <bp@alien8.de>
AuthorDate:    Mon, 12 Feb 2024 16:41:42 +01:00
Committer:     Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Tue, 13 Feb 2024 13:19:40 +01:00

Documentation/maintainer-tip: Add C++ tail comments exception

Document when C++-style, tail comments should be used.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20240130193102.GEZblOdor_bzoVhT0f@fat_crate.local
---
 Documentation/process/maintainer-tip.rst | 30 ++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/Documentation/process/maintainer-tip.rst b/Documentation/process/maintainer-tip.rst
index 7993592..497bb39 100644
--- a/Documentation/process/maintainer-tip.rst
+++ b/Documentation/process/maintainer-tip.rst
@@ -480,7 +480,7 @@ Multi-line comments::
 	 * Larger multi-line comments should be split into paragraphs.
 	 */
 
-No tail comments:
+No tail comments (see below):
 
   Please refrain from using tail comments. Tail comments disturb the
   reading flow in almost all contexts, but especially in code::
@@ -501,6 +501,34 @@ No tail comments:
 	/* This magic initialization needs a comment. Maybe not? */
 	seed = MAGIC_CONSTANT;
 
+  Use C++ style, tail comments when documenting structs in headers to
+  achieve a more compact layout and better readability::
+
+        // eax
+        u32     x2apic_shift    :  5, // Number of bits to shift APIC ID right
+                                      // for the topology ID at the next level
+                                : 27; // Reserved
+        // ebx
+        u32     num_processors  : 16, // Number of processors at current level
+                                : 16; // Reserved
+
+  versus::
+
+	/* eax */
+	        /*
+	         * Number of bits to shift APIC ID right for the topology ID
+	         * at the next level
+	         */
+         u32     x2apic_shift    :  5,
+		 /* Reserved */
+				 : 27;
+
+	/* ebx */
+		/* Number of processors at current level */
+	u32     num_processors  : 16,
+		/* Reserved */
+				: 16;
+
 Comment the important things:
 
   Comments should be added where the operation is not obvious. Documenting

^ permalink raw reply related	[flat|nested] 45+ messages in thread

end of thread, other threads:[~2024-02-13 14:30 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-01-23 12:53 [patch v5 00/19] x86/cpu: Rework topology evaluation Thomas Gleixner
2024-01-23 12:53 ` [patch v5 01/19] x86/cpu: Provide cpuid_read() et al Thomas Gleixner
2024-01-24 12:25   ` Borislav Petkov
2024-01-24 20:02     ` Borislav Petkov
2024-02-12 13:57       ` Thomas Gleixner
2024-01-23 12:53 ` [patch v5 02/19] x86/cpu: Provide cpu_init/parse_topology() Thomas Gleixner
2024-02-01 22:16   ` Sohil Mehta
2024-01-23 12:53 ` [patch v5 03/19] x86/cpu: Add legacy topology parser Thomas Gleixner
2024-01-24 20:12   ` Borislav Petkov
2024-01-23 12:53 ` [patch v5 04/19] x86/cpu: Use common topology code for Centaur and Zhaoxin Thomas Gleixner
2024-01-30 19:09   ` Borislav Petkov
2024-01-23 12:53 ` [patch v5 05/19] x86/cpu: Move __max_die_per_package to common.c Thomas Gleixner
2024-01-23 12:53 ` [patch v5 06/19] x86/cpu: Provide a sane leaf 0xb/0x1f parser Thomas Gleixner
2024-01-30 19:31   ` Borislav Petkov
2024-02-12 14:17     ` Thomas Gleixner
2024-02-12 15:00       ` Borislav Petkov
2024-02-12 15:08         ` Thomas Gleixner
2024-02-12 15:43           ` Borislav Petkov
2024-02-12 23:02             ` Thomas Gleixner
2024-02-12 15:03       ` Thomas Gleixner
2024-02-12 15:05         ` Borislav Petkov
2024-02-13 14:30     ` [tip: x86/misc] Documentation/maintainer-tip: Add C++ tail comments exception tip-bot2 for Borislav Petkov (AMD)
2024-01-23 12:53 ` [patch v5 07/19] x86/cpu: Use common topology code for Intel Thomas Gleixner
2024-02-01 15:07   ` Borislav Petkov
2024-01-23 12:53 ` [patch v5 08/19] x86/cpu/amd: Provide a separate accessor for Node ID Thomas Gleixner
2024-02-01 15:19   ` Borislav Petkov
2024-01-23 12:53 ` [patch v5 09/19] x86/cpu: Provide an AMD/HYGON specific topology parser Thomas Gleixner
2024-02-01 15:55   ` Borislav Petkov
2024-02-02 12:30   ` Borislav Petkov
2024-01-23 12:53 ` [patch v5 10/19] x86/smpboot: Teach it about topo.amd_node_id Thomas Gleixner
2024-02-06 15:48   ` Borislav Petkov
2024-01-23 12:53 ` [patch v5 11/19] x86/cpu: Use common topology code for AMD Thomas Gleixner
2024-02-06 15:58   ` Borislav Petkov
2024-02-12 14:50     ` Thomas Gleixner
2024-02-12 15:06       ` Borislav Petkov
2024-01-23 12:53 ` [patch v5 12/19] x86/cpu: Use common topology code for HYGON Thomas Gleixner
2024-01-23 12:53 ` [patch v5 13/19] x86/mm/numa: Use core domain size on AMD Thomas Gleixner
2024-02-12 15:56   ` Borislav Petkov
2024-01-23 12:53 ` [patch v5 14/19] x86/cpu: Make topology_amd_node_id() use the actual node info Thomas Gleixner
2024-01-23 12:53 ` [patch v5 15/19] x86/cpu: Remove topology.c Thomas Gleixner
2024-01-23 12:53 ` [patch v5 16/19] x86/cpu: Remove x86_coreid_bits Thomas Gleixner
2024-01-23 12:53 ` [patch v5 17/19] x86/apic: Remove unused phys_pkg_id() callback Thomas Gleixner
2024-01-23 12:53 ` [patch v5 18/19] x86/xen/smp_pv: Remove cpudata fiddling Thomas Gleixner
2024-01-23 12:53 ` [patch v5 19/19] x86/apic/uv: Remove the private leaf 0xb parser Thomas Gleixner
2024-01-31  7:40 ` [patch v5 00/19] x86/cpu: Rework topology evaluation Zhang, Rui
