* [PATCH v3 00/10] x86: make pat and mtrr independent from each other
@ 2022-09-08  8:49 Juergen Gross
  2022-09-08  8:49 ` [PATCH v3 01/10] x86/mtrr: add comment for set_mtrr_state() serialization Juergen Gross
                   ` (9 more replies)
  0 siblings, 10 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel, linux-pm
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Rafael J. Wysocki, Pavel Machek,
	Andy Lutomirski, Peter Zijlstra

Today PAT can't be used without MTRR being available, unless MTRR is
at least configured via CONFIG_MTRR and the system is running as a Xen
PV guest. In that case PAT is automatically available via the
hypervisor, but the PAT MSR can't be modified by the kernel and MTRR
is disabled.

The same applies to a kernel built without MTRR support: it won't
allow the PAT MSR to be used, even though there is no technical reason
for that, other than the fact that setting up PAT the same way on all
CPUs (a requirement of the processor's cache management) relies on
some MTRR-specific code.

Fix all of that by:

- moving the function needed by PAT from MTRR specific code one level
  up
- reworking the init sequences of MTRR and PAT to be more similar to
  each other without calling PAT from MTRR code
- removing the dependency of PAT on MTRR

While working on that I discovered two minor bugs in MTRR code, which
are fixed, too.
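
For orientation, this is the common cache initialization path at the
end of the series, pieced together from the hunks in patches 5, 9 and
10 below (an abridged sketch, not verbatim kernel code):

/* arch/x86/kernel/cpu/cacheinfo.c, end state of the series (abridged) */
void __init cache_bp_init(void)
{
	mtrr_bp_init();		/* MTRR setup, no longer calling into PAT */
	pat_bp_init();		/* PAT setup, independent of MTRR */

	if (cache_generic)	/* CACHE_GENERIC_MTRR and/or CACHE_GENERIC_PAT */
		cache_cpu_init();
}

static void cache_cpu_init(void)
{
	unsigned long flags;

	local_irq_save(flags);
	cache_disable();

	if (cache_generic & CACHE_GENERIC_MTRR)
		mtrr_generic_set_state();
	if (cache_generic & CACHE_GENERIC_PAT)
		pat_cpu_init();

	cache_enable();
	local_irq_restore(flags);
}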

Changes in V3:
- replace patch 1 by just adding a comment

Changes in V2:
- complete rework of the patches based on comments by Boris Petkov
- added several patches to the series

Juergen Gross (10):
  x86/mtrr: add comment for set_mtrr_state() serialization
  x86/mtrr: remove unused cyrix_set_all() function
  x86/mtrr: replace use_intel() with a local flag
  x86: move some code out of arch/x86/kernel/cpu/mtrr
  x86/mtrr: split generic_set_all()
  x86/mtrr: remove set_all callback from struct mtrr_ops
  x86/mtrr: simplify mtrr_bp_init()
  x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  x86/mtrr: add a stop_machine() handler calling only cache_cpu_init()
  x86: decouple pat and mtrr handling

 arch/x86/include/asm/cacheinfo.h   |  14 +++
 arch/x86/include/asm/memtype.h     |   5 +-
 arch/x86/include/asm/mtrr.h        |  12 +--
 arch/x86/kernel/cpu/cacheinfo.c    | 159 +++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/common.c       |   3 +-
 arch/x86/kernel/cpu/mtrr/cyrix.c   |  34 ------
 arch/x86/kernel/cpu/mtrr/generic.c | 107 ++-----------------
 arch/x86/kernel/cpu/mtrr/mtrr.c    | 158 +++++-----------------------
 arch/x86/kernel/cpu/mtrr/mtrr.h    |   5 -
 arch/x86/kernel/setup.c            |  14 +--
 arch/x86/kernel/smpboot.c          |   9 +-
 arch/x86/mm/pat/memtype.c          | 127 +++++++----------------
 arch/x86/power/cpu.c               |   3 +-
 13 files changed, 265 insertions(+), 385 deletions(-)

-- 
2.35.3



* [PATCH v3 01/10] x86/mtrr: add comment for set_mtrr_state() serialization
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  2022-09-08  8:49 ` [PATCH v3 02/10] x86/mtrr: remove unused cyrix_set_all() function Juergen Gross
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

Add a comment explaining how set_mtrr_state() needs to be serialized.
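
Conceptually, the call sequence being serialized looks like this
(function and lock names taken from the existing generic.c code; an
illustrative sketch only):

	local_irq_save(flags);
	prepare_set();		/* takes set_atomicity_lock, disables caching */
	set_mtrr_state();	/* may read-modify-write the global deftype_lo */
	post_set();		/* reenables caching, drops the lock */
	local_irq_restore(flags);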

Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch instead of old patch 1
---
 arch/x86/kernel/cpu/mtrr/generic.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 558108296f3c..cd64eab02393 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -684,7 +684,10 @@ static u32 deftype_lo, deftype_hi;
 /**
  * set_mtrr_state - Set the MTRR state for this CPU.
  *
- * NOTE: The CPU must already be in a safe state for MTRR changes.
+ * NOTE: The CPU must already be in a safe state for MTRR changes, including
+ *       measures that only a single CPU can be active in set_mtrr_state() in
+ *       order to not be subject to races for usage of deftype_lo (this is
+ *       accomplished by taking set_atomicity_lock).
  * RETURNS: 0 if no changes made, else a mask indicating what was changed.
  */
 static unsigned long set_mtrr_state(void)
-- 
2.35.3



* [PATCH v3 02/10] x86/mtrr: remove unused cyrix_set_all() function
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
  2022-09-08  8:49 ` [PATCH v3 01/10] x86/mtrr: add comment for set_mtrr_state() serialization Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  2022-09-08  8:49 ` [PATCH v3 03/10] x86/mtrr: replace use_intel() with a local flag Juergen Gross
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

The Cyrix-specific MTRR function cyrix_set_all() will never be
called: the struct mtrr_ops set_all() callback is only invoked in the
use_intel() case, which requires the use_intel_if member of struct
mtrr_ops to be set, and that isn't the case for Cyrix.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/x86/kernel/cpu/mtrr/cyrix.c | 34 --------------------------------
 1 file changed, 34 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/cyrix.c b/arch/x86/kernel/cpu/mtrr/cyrix.c
index ca670919b561..c77d3b0a5bf2 100644
--- a/arch/x86/kernel/cpu/mtrr/cyrix.c
+++ b/arch/x86/kernel/cpu/mtrr/cyrix.c
@@ -234,42 +234,8 @@ static void cyrix_set_arr(unsigned int reg, unsigned long base,
 	post_set();
 }
 
-typedef struct {
-	unsigned long	base;
-	unsigned long	size;
-	mtrr_type	type;
-} arr_state_t;
-
-static arr_state_t arr_state[8] = {
-	{0UL, 0UL, 0UL}, {0UL, 0UL, 0UL}, {0UL, 0UL, 0UL}, {0UL, 0UL, 0UL},
-	{0UL, 0UL, 0UL}, {0UL, 0UL, 0UL}, {0UL, 0UL, 0UL}, {0UL, 0UL, 0UL}
-};
-
-static unsigned char ccr_state[7] = { 0, 0, 0, 0, 0, 0, 0 };
-
-static void cyrix_set_all(void)
-{
-	int i;
-
-	prepare_set();
-
-	/* the CCRs are not contiguous */
-	for (i = 0; i < 4; i++)
-		setCx86(CX86_CCR0 + i, ccr_state[i]);
-	for (; i < 7; i++)
-		setCx86(CX86_CCR4 + i, ccr_state[i]);
-
-	for (i = 0; i < 8; i++) {
-		cyrix_set_arr(i, arr_state[i].base,
-			      arr_state[i].size, arr_state[i].type);
-	}
-
-	post_set();
-}
-
 static const struct mtrr_ops cyrix_mtrr_ops = {
 	.vendor            = X86_VENDOR_CYRIX,
-	.set_all	   = cyrix_set_all,
 	.set               = cyrix_set_arr,
 	.get               = cyrix_get_arr,
 	.get_free_region   = cyrix_get_free_region,
-- 
2.35.3



* [PATCH v3 03/10] x86/mtrr: replace use_intel() with a local flag
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
  2022-09-08  8:49 ` [PATCH v3 01/10] x86/mtrr: add comment for set_mtrr_state() serialization Juergen Gross
  2022-09-08  8:49 ` [PATCH v3 02/10] x86/mtrr: remove unused cyrix_set_all() function Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  2022-09-11 10:16   ` Borislav Petkov
  2022-09-08  8:49 ` [PATCH v3 04/10] x86: move some code out of arch/x86/kernel/cpu/mtrr Juergen Gross
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

In the MTRR code use_intel() is used in only one source file, and the
relevant use_intel_if member of struct mtrr_ops is set only in
generic_mtrr_ops.

Replace use_intel() with a single flag in cacheinfo.c that is set when
generic_mtrr_ops is assigned to mtrr_if. This allows dropping
use_intel_if from mtrr_ops while preparing to support PAT without
MTRR. As another preparation for the PAT/MTRR decoupling, use one bit
for MTRR control and one for PAT control. For now both bits are set
together; this can be changed later.

As the new flag will be set only if mtrr_enabled is set, the test for
mtrr_enabled can be dropped in some places.

At the same time drop the local mtrr_enabled() function and rename
the __mtrr_enabled flag to mtrr_enabled.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/x86/include/asm/cacheinfo.h   |  5 +++
 arch/x86/kernel/cpu/cacheinfo.c    |  3 ++
 arch/x86/kernel/cpu/mtrr/generic.c |  1 -
 arch/x86/kernel/cpu/mtrr/mtrr.c    | 58 ++++++++++++++----------------
 arch/x86/kernel/cpu/mtrr/mtrr.h    |  2 --
 5 files changed, 35 insertions(+), 34 deletions(-)

diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
index 86b2e0dcc4bf..1aeafa9888f7 100644
--- a/arch/x86/include/asm/cacheinfo.h
+++ b/arch/x86/include/asm/cacheinfo.h
@@ -2,6 +2,11 @@
 #ifndef _ASM_X86_CACHEINFO_H
 #define _ASM_X86_CACHEINFO_H
 
+/* Kernel controls MTRR and/or PAT MSRs. */
+extern unsigned int cache_generic;
+#define CACHE_GENERIC_MTRR 0x01
+#define CACHE_GENERIC_PAT  0x02
+
 void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu);
 void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
 
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 66556833d7af..3b05d3ade7a6 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -35,6 +35,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
 /* Shared L2 cache maps */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
 
+/* Kernel controls MTRR and/or PAT MSRs. */
+unsigned int cache_generic;
+
 struct _cache_table {
 	unsigned char descriptor;
 	char cache_type;
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index cd64eab02393..81742870ecc5 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -917,7 +917,6 @@ int positive_have_wrcomb(void)
  * Generic structure...
  */
 const struct mtrr_ops generic_mtrr_ops = {
-	.use_intel_if		= 1,
 	.set_all		= generic_set_all,
 	.get			= generic_get_mtrr,
 	.get_free_region	= generic_get_free_region,
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 2746cac9d8a9..7d7d5bd30219 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -46,6 +46,7 @@
 #include <linux/syscore_ops.h>
 #include <linux/rcupdate.h>
 
+#include <asm/cacheinfo.h>
 #include <asm/cpufeature.h>
 #include <asm/e820/api.h>
 #include <asm/mtrr.h>
@@ -58,12 +59,7 @@
 #define MTRR_TO_PHYS_WC_OFFSET 1000
 
 u32 num_var_ranges;
-static bool __mtrr_enabled;
-
-static bool mtrr_enabled(void)
-{
-	return __mtrr_enabled;
-}
+static bool mtrr_enabled;
 
 unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
 static DEFINE_MUTEX(mtrr_mutex);
@@ -119,11 +115,11 @@ static int have_wrcomb(void)
 }
 
 /*  This function returns the number of variable MTRRs  */
-static void __init set_num_var_ranges(void)
+static void __init set_num_var_ranges(bool use_generic)
 {
 	unsigned long config = 0, dummy;
 
-	if (use_intel())
+	if (use_generic)
 		rdmsr(MSR_MTRRcap, config, dummy);
 	else if (is_cpu(AMD) || is_cpu(HYGON))
 		config = 2;
@@ -303,7 +299,7 @@ int mtrr_add_page(unsigned long base, unsigned long size,
 	int i, replace, error;
 	mtrr_type ltype;
 
-	if (!mtrr_enabled())
+	if (!mtrr_enabled)
 		return -ENXIO;
 
 	error = mtrr_if->validate_add_page(base, size, type);
@@ -451,7 +447,7 @@ static int mtrr_check(unsigned long base, unsigned long size)
 int mtrr_add(unsigned long base, unsigned long size, unsigned int type,
 	     bool increment)
 {
-	if (!mtrr_enabled())
+	if (!mtrr_enabled)
 		return -ENODEV;
 	if (mtrr_check(base, size))
 		return -EINVAL;
@@ -480,7 +476,7 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
 	unsigned long lbase, lsize;
 	int error = -EINVAL;
 
-	if (!mtrr_enabled())
+	if (!mtrr_enabled)
 		return -ENODEV;
 
 	max = num_var_ranges;
@@ -540,7 +536,7 @@ int mtrr_del_page(int reg, unsigned long base, unsigned long size)
  */
 int mtrr_del(int reg, unsigned long base, unsigned long size)
 {
-	if (!mtrr_enabled())
+	if (!mtrr_enabled)
 		return -ENODEV;
 	if (mtrr_check(base, size))
 		return -EINVAL;
@@ -566,7 +562,7 @@ int arch_phys_wc_add(unsigned long base, unsigned long size)
 {
 	int ret;
 
-	if (pat_enabled() || !mtrr_enabled())
+	if (pat_enabled() || !mtrr_enabled)
 		return 0;  /* Success!  (We don't need to do anything.) */
 
 	ret = mtrr_add(base, size, MTRR_TYPE_WRCOMB, true);
@@ -686,6 +682,7 @@ int __initdata changed_by_mtrr_cleanup;
  */
 void __init mtrr_bp_init(void)
 {
+	bool use_generic = false;
 	u32 phys_addr;
 
 	init_ifs();
@@ -694,6 +691,7 @@ void __init mtrr_bp_init(void)
 
 	if (boot_cpu_has(X86_FEATURE_MTRR)) {
 		mtrr_if = &generic_mtrr_ops;
+		use_generic = true;
 		size_or_mask = SIZE_OR_MASK_BITS(36);
 		size_and_mask = 0x00f00000;
 		phys_addr = 36;
@@ -755,15 +753,18 @@ void __init mtrr_bp_init(void)
 	}
 
 	if (mtrr_if) {
-		__mtrr_enabled = true;
-		set_num_var_ranges();
+		mtrr_enabled = true;
+		set_num_var_ranges(use_generic);
 		init_table();
-		if (use_intel()) {
+		if (use_generic) {
 			/* BIOS may override */
-			__mtrr_enabled = get_mtrr_state();
+			mtrr_enabled = get_mtrr_state();
 
-			if (mtrr_enabled())
+			if (mtrr_enabled) {
 				mtrr_bp_pat_init();
+				cache_generic |= CACHE_GENERIC_MTRR |
+						 CACHE_GENERIC_PAT;
+			}
 
 			if (mtrr_cleanup(phys_addr)) {
 				changed_by_mtrr_cleanup = 1;
@@ -772,7 +773,7 @@ void __init mtrr_bp_init(void)
 		}
 	}
 
-	if (!mtrr_enabled()) {
+	if (!mtrr_enabled) {
 		pr_info("Disabled\n");
 
 		/*
@@ -786,10 +787,7 @@ void __init mtrr_bp_init(void)
 
 void mtrr_ap_init(void)
 {
-	if (!mtrr_enabled())
-		return;
-
-	if (!use_intel() || mtrr_aps_delayed_init)
+	if (!cache_generic || mtrr_aps_delayed_init)
 		return;
 
 	/*
@@ -816,7 +814,7 @@ void mtrr_save_state(void)
 {
 	int first_cpu;
 
-	if (!mtrr_enabled())
+	if (!mtrr_enabled)
 		return;
 
 	first_cpu = cpumask_first(cpu_online_mask);
@@ -825,9 +823,7 @@ void mtrr_save_state(void)
 
 void set_mtrr_aps_delayed_init(void)
 {
-	if (!mtrr_enabled())
-		return;
-	if (!use_intel())
+	if (!cache_generic)
 		return;
 
 	mtrr_aps_delayed_init = true;
@@ -838,7 +834,7 @@ void set_mtrr_aps_delayed_init(void)
  */
 void mtrr_aps_init(void)
 {
-	if (!use_intel() || !mtrr_enabled())
+	if (!cache_generic)
 		return;
 
 	/*
@@ -855,7 +851,7 @@ void mtrr_aps_init(void)
 
 void mtrr_bp_restore(void)
 {
-	if (!use_intel() || !mtrr_enabled())
+	if (!cache_generic)
 		return;
 
 	mtrr_if->set_all();
@@ -863,10 +859,10 @@ void mtrr_bp_restore(void)
 
 static int __init mtrr_init_finialize(void)
 {
-	if (!mtrr_enabled())
+	if (!mtrr_enabled)
 		return 0;
 
-	if (use_intel()) {
+	if (cache_generic & CACHE_GENERIC_MTRR) {
 		if (!changed_by_mtrr_cleanup)
 			mtrr_state_warn();
 		return 0;
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
index 2ac99e561181..88b1c4b6174a 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
@@ -14,7 +14,6 @@ extern unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
 
 struct mtrr_ops {
 	u32	vendor;
-	u32	use_intel_if;
 	void	(*set)(unsigned int reg, unsigned long base,
 		       unsigned long size, mtrr_type type);
 	void	(*set_all)(void);
@@ -61,7 +60,6 @@ extern u64 size_or_mask, size_and_mask;
 extern const struct mtrr_ops *mtrr_if;
 
 #define is_cpu(vnd)	(mtrr_if && mtrr_if->vendor == X86_VENDOR_##vnd)
-#define use_intel()	(mtrr_if && mtrr_if->use_intel_if == 1)
 
 extern unsigned int num_var_ranges;
 extern u64 mtrr_tom2;
-- 
2.35.3



* [PATCH v3 04/10] x86: move some code out of arch/x86/kernel/cpu/mtrr
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
                   ` (2 preceding siblings ...)
  2022-09-08  8:49 ` [PATCH v3 03/10] x86/mtrr: replace use_intel() with a local flag Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  2022-09-11 11:02   ` Borislav Petkov
  2022-09-08  8:49 ` [PATCH v3 05/10] x86/mtrr: split generic_set_all() Juergen Gross
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

Prepare for making PAT and MTRR support independent of each other by
moving some code needed by both out of the MTRR-specific sources.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- move code from cpu/common.c to cpu/cacheinfo.c (Boris Petkov)
---
 arch/x86/include/asm/cacheinfo.h   |  3 ++
 arch/x86/include/asm/mtrr.h        |  4 ++
 arch/x86/kernel/cpu/cacheinfo.c    | 77 ++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/mtrr/generic.c | 81 ++++--------------------------
 4 files changed, 93 insertions(+), 72 deletions(-)

diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
index 1aeafa9888f7..313a6920d0f9 100644
--- a/arch/x86/include/asm/cacheinfo.h
+++ b/arch/x86/include/asm/cacheinfo.h
@@ -10,4 +10,7 @@ extern unsigned int cache_generic;
 void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu);
 void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
 
+void cache_disable(void);
+void cache_enable(void);
+
 #endif /* _ASM_X86_CACHEINFO_H */
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 76d726074c16..12a16caed395 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -48,6 +48,8 @@ extern void mtrr_aps_init(void);
 extern void mtrr_bp_restore(void);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 extern int amd_special_default_mtrr(void);
+void mtrr_disable(void);
+void mtrr_enable(void);
 #  else
 static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
 {
@@ -87,6 +89,8 @@ static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
 #define set_mtrr_aps_delayed_init() do {} while (0)
 #define mtrr_aps_init() do {} while (0)
 #define mtrr_bp_restore() do {} while (0)
+#define mtrr_disable() do {} while (0)
+#define mtrr_enable() do {} while (0)
 #  endif
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 3b05d3ade7a6..47e2c72fa8a4 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -20,6 +20,8 @@
 #include <asm/cacheinfo.h>
 #include <asm/amd_nb.h>
 #include <asm/smp.h>
+#include <asm/mtrr.h>
+#include <asm/tlbflush.h>
 
 #include "cpu.h"
 
@@ -1043,3 +1045,78 @@ int populate_cache_leaves(unsigned int cpu)
 
 	return 0;
 }
+
+/*
+ * Disable and enable caches. Needed for changing MTRRs and the PAT MSR.
+ *
+ * Since we are disabling the cache don't allow any interrupts,
+ * they would run extremely slow and would only increase the pain.
+ *
+ * The caller must ensure that local interrupts are disabled and
+ * are reenabled after cache_enable() has been called.
+ */
+static unsigned long saved_cr4;
+static DEFINE_RAW_SPINLOCK(cache_disable_lock);
+
+void cache_disable(void) __acquires(cache_disable_lock)
+{
+	unsigned long cr0;
+
+	/*
+	 * Note that this is not ideal
+	 * since the cache is only flushed/disabled for this CPU while the
+	 * MTRRs are changed, but changing this requires more invasive
+	 * changes to the way the kernel boots
+	 */
+
+	raw_spin_lock(&cache_disable_lock);
+
+	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
+	cr0 = read_cr0() | X86_CR0_CD;
+	write_cr0(cr0);
+
+	/*
+	 * Cache flushing is the most time-consuming step when programming
+	 * the MTRRs. Fortunately, as per the Intel Software Development
+	 * Manual, we can skip it if the processor supports cache self-
+	 * snooping.
+	 */
+	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
+		wbinvd();
+
+	/* Save value of CR4 and clear Page Global Enable (bit 7) */
+	if (boot_cpu_has(X86_FEATURE_PGE)) {
+		saved_cr4 = __read_cr4();
+		__write_cr4(saved_cr4 & ~X86_CR4_PGE);
+	}
+
+	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
+	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
+	flush_tlb_local();
+
+	if (boot_cpu_has(X86_FEATURE_MTRR))
+		mtrr_disable();
+
+	/* Again, only flush caches if we have to. */
+	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
+		wbinvd();
+}
+
+void cache_enable(void) __releases(cache_disable_lock)
+{
+	/* Flush TLBs (no need to flush caches - they are disabled) */
+	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
+	flush_tlb_local();
+
+	if (boot_cpu_has(X86_FEATURE_MTRR))
+		mtrr_enable();
+
+	/* Enable caches */
+	write_cr0(read_cr0() & ~X86_CR0_CD);
+
+	/* Restore value of CR4 */
+	if (boot_cpu_has(X86_FEATURE_PGE))
+		__write_cr4(saved_cr4);
+
+	raw_spin_unlock(&cache_disable_lock);
+}
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 81742870ecc5..5ed397f03a87 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -10,6 +10,7 @@
 #include <linux/mm.h>
 
 #include <asm/processor-flags.h>
+#include <asm/cacheinfo.h>
 #include <asm/cpufeature.h>
 #include <asm/tlbflush.h>
 #include <asm/mtrr.h>
@@ -396,9 +397,6 @@ print_fixed(unsigned base, unsigned step, const mtrr_type *types)
 	}
 }
 
-static void prepare_set(void);
-static void post_set(void);
-
 static void __init print_mtrr_state(void)
 {
 	unsigned int i;
@@ -450,11 +448,11 @@ void __init mtrr_bp_pat_init(void)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	prepare_set();
+	cache_disable();
 
 	pat_init();
 
-	post_set();
+	cache_enable();
 	local_irq_restore(flags);
 }
 
@@ -718,80 +716,19 @@ static unsigned long set_mtrr_state(void)
 	return change_mask;
 }
 
-
-static unsigned long cr4;
-static DEFINE_RAW_SPINLOCK(set_atomicity_lock);
-
-/*
- * Since we are disabling the cache don't allow any interrupts,
- * they would run extremely slow and would only increase the pain.
- *
- * The caller must ensure that local interrupts are disabled and
- * are reenabled after post_set() has been called.
- */
-static void prepare_set(void) __acquires(set_atomicity_lock)
+void mtrr_disable(void)
 {
-	unsigned long cr0;
-
-	/*
-	 * Note that this is not ideal
-	 * since the cache is only flushed/disabled for this CPU while the
-	 * MTRRs are changed, but changing this requires more invasive
-	 * changes to the way the kernel boots
-	 */
-
-	raw_spin_lock(&set_atomicity_lock);
-
-	/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
-	cr0 = read_cr0() | X86_CR0_CD;
-	write_cr0(cr0);
-
-	/*
-	 * Cache flushing is the most time-consuming step when programming
-	 * the MTRRs. Fortunately, as per the Intel Software Development
-	 * Manual, we can skip it if the processor supports cache self-
-	 * snooping.
-	 */
-	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
-		wbinvd();
-
-	/* Save value of CR4 and clear Page Global Enable (bit 7) */
-	if (boot_cpu_has(X86_FEATURE_PGE)) {
-		cr4 = __read_cr4();
-		__write_cr4(cr4 & ~X86_CR4_PGE);
-	}
-
-	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-	flush_tlb_local();
-
 	/* Save MTRR state */
 	rdmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
 
 	/* Disable MTRRs, and set the default type to uncached */
 	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & ~0xcff, deftype_hi);
-
-	/* Again, only flush caches if we have to. */
-	if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
-		wbinvd();
 }
 
-static void post_set(void) __releases(set_atomicity_lock)
+void mtrr_enable(void)
 {
-	/* Flush TLBs (no need to flush caches - they are disabled) */
-	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
-	flush_tlb_local();
-
 	/* Intel (P6) standard MTRRs */
 	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
-
-	/* Enable caches */
-	write_cr0(read_cr0() & ~X86_CR0_CD);
-
-	/* Restore value of CR4 */
-	if (boot_cpu_has(X86_FEATURE_PGE))
-		__write_cr4(cr4);
-	raw_spin_unlock(&set_atomicity_lock);
 }
 
 static void generic_set_all(void)
@@ -800,7 +737,7 @@ static void generic_set_all(void)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	prepare_set();
+	cache_disable();
 
 	/* Actually set the state */
 	mask = set_mtrr_state();
@@ -808,7 +745,7 @@ static void generic_set_all(void)
 	/* also set PAT */
 	pat_init();
 
-	post_set();
+	cache_enable();
 	local_irq_restore(flags);
 
 	/* Use the atomic bitops to update the global mask */
@@ -839,7 +776,7 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
 	vr = &mtrr_state.var_ranges[reg];
 
 	local_irq_save(flags);
-	prepare_set();
+	cache_disable();
 
 	if (size == 0) {
 		/*
@@ -858,7 +795,7 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
 		mtrr_wrmsr(MTRRphysMask_MSR(reg), vr->mask_lo, vr->mask_hi);
 	}
 
-	post_set();
+	cache_enable();
 	local_irq_restore(flags);
 }
 
-- 
2.35.3



* [PATCH v3 05/10] x86/mtrr: split generic_set_all()
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
                   ` (3 preceding siblings ...)
  2022-09-08  8:49 ` [PATCH v3 04/10] x86: move some code out of arch/x86/kernel/cpu/mtrr Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  2022-09-19 19:25   ` Borislav Petkov
  2022-09-08  8:49 ` [PATCH v3 06/10] x86/mtrr: remove set_all callback from struct mtrr_ops Juergen Gross
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

Split generic_set_all() into multiple parts, moving the main function
body into cacheinfo.c and renaming it to cache_cpu_init().

This prepares supporting PAT without needing MTRR support. The
MTRR-specific parts are moved into a dedicated small function called
by cache_cpu_init(), and the PAT- and MTRR-specific functions are
called conditionally based on the cache_generic bit settings.

The setting of smp_changes_mask is merged into the (new) function
mtrr_generic_set_state() used to call set_mtrr_state(). It was
probably split out in ancient times because atomic operations while
running uncached might be quite expensive, but OTOH only systems with
a broken BIOS should ever need to set any bit in smp_changes_mask, so
penalizing those systems with a few extra microseconds during boot
shouldn't be a real issue.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/x86/include/asm/cacheinfo.h   |  1 +
 arch/x86/include/asm/mtrr.h        |  2 ++
 arch/x86/kernel/cpu/cacheinfo.c    | 19 +++++++++++++++++++
 arch/x86/kernel/cpu/mtrr/generic.c | 15 ++-------------
 4 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
index 313a6920d0f9..563d9cb5fcf5 100644
--- a/arch/x86/include/asm/cacheinfo.h
+++ b/arch/x86/include/asm/cacheinfo.h
@@ -12,5 +12,6 @@ void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
 
 void cache_disable(void);
 void cache_enable(void);
+void cache_cpu_init(void);
 
 #endif /* _ASM_X86_CACHEINFO_H */
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 12a16caed395..986249a2b9b6 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -50,6 +50,7 @@ extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 extern int amd_special_default_mtrr(void);
 void mtrr_disable(void);
 void mtrr_enable(void);
+void mtrr_generic_set_state(void);
 #  else
 static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
 {
@@ -91,6 +92,7 @@ static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
 #define mtrr_bp_restore() do {} while (0)
 #define mtrr_disable() do {} while (0)
 #define mtrr_enable() do {} while (0)
+#define mtrr_generic_set_state() do {} while (0)
 #  endif
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 47e2c72fa8a4..36378604ec61 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -1120,3 +1120,22 @@ void cache_enable(void) __releases(cache_disable_lock)
 
 	raw_spin_unlock(&cache_disable_lock);
 }
+
+void cache_cpu_init(void)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	cache_disable();
+
+	/* Set MTRR state. */
+	if (cache_generic & CACHE_GENERIC_MTRR)
+		mtrr_generic_set_state();
+
+	/* Set PAT. */
+	if (cache_generic & CACHE_GENERIC_PAT)
+		pat_init();
+
+	cache_enable();
+	local_irq_restore(flags);
+}
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 5ed397f03a87..fc7b2d952737 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -731,30 +731,19 @@ void mtrr_enable(void)
 	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
 }
 
-static void generic_set_all(void)
+void mtrr_generic_set_state(void)
 {
 	unsigned long mask, count;
-	unsigned long flags;
-
-	local_irq_save(flags);
-	cache_disable();
 
 	/* Actually set the state */
 	mask = set_mtrr_state();
 
-	/* also set PAT */
-	pat_init();
-
-	cache_enable();
-	local_irq_restore(flags);
-
 	/* Use the atomic bitops to update the global mask */
 	for (count = 0; count < sizeof(mask) * 8; ++count) {
 		if (mask & 0x01)
 			set_bit(count, &smp_changes_mask);
 		mask >>= 1;
 	}
-
 }
 
 /**
@@ -854,7 +843,7 @@ int positive_have_wrcomb(void)
  * Generic structure...
  */
 const struct mtrr_ops generic_mtrr_ops = {
-	.set_all		= generic_set_all,
+	.set_all		= cache_cpu_init,
 	.get			= generic_get_mtrr,
 	.get_free_region	= generic_get_free_region,
 	.set			= generic_set_mtrr,
-- 
2.35.3



* [PATCH v3 06/10] x86/mtrr: remove set_all callback from struct mtrr_ops
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
                   ` (4 preceding siblings ...)
  2022-09-08  8:49 ` [PATCH v3 05/10] x86/mtrr: split generic_set_all() Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  2022-09-08  8:49 ` [PATCH v3 07/10] x86/mtrr: simplify mtrr_bp_init() Juergen Gross
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

Instead of using an indirect call via mtrr_if->set_all, just call the
only possible target, cache_cpu_init(), directly. This enables
removing the set_all callback from struct mtrr_ops.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/kernel/cpu/mtrr/generic.c |  1 -
 arch/x86/kernel/cpu/mtrr/mtrr.c    | 10 +++++-----
 arch/x86/kernel/cpu/mtrr/mtrr.h    |  2 --
 3 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index fc7b2d952737..5f83ee865def 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -843,7 +843,6 @@ int positive_have_wrcomb(void)
  * Generic structure...
  */
 const struct mtrr_ops generic_mtrr_ops = {
-	.set_all		= cache_cpu_init,
 	.get			= generic_get_mtrr,
 	.get_free_region	= generic_get_free_region,
 	.set			= generic_set_mtrr,
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 7d7d5bd30219..9609a0d235f8 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -165,15 +165,15 @@ static int mtrr_rendezvous_handler(void *info)
 	 * saved, and we want to replicate that across all the cpus that come
 	 * online (either at the end of boot or resume or during a runtime cpu
 	 * online). If we're doing that, @reg is set to something special and on
-	 * all the cpu's we do mtrr_if->set_all() (On the logical cpu that
+	 * all the cpu's we do cache_cpu_init() (On the logical cpu that
 	 * started the boot/resume sequence, this might be a duplicate
-	 * set_all()).
+	 * cache_cpu_init()).
 	 */
 	if (data->smp_reg != ~0U) {
 		mtrr_if->set(data->smp_reg, data->smp_base,
 			     data->smp_size, data->smp_type);
 	} else if (mtrr_aps_delayed_init || !cpu_online(smp_processor_id())) {
-		mtrr_if->set_all();
+		cache_cpu_init();
 	}
 	return 0;
 }
@@ -768,7 +768,7 @@ void __init mtrr_bp_init(void)
 
 			if (mtrr_cleanup(phys_addr)) {
 				changed_by_mtrr_cleanup = 1;
-				mtrr_if->set_all();
+				cache_cpu_init();
 			}
 		}
 	}
@@ -854,7 +854,7 @@ void mtrr_bp_restore(void)
 	if (!cache_generic)
 		return;
 
-	mtrr_if->set_all();
+	cache_cpu_init();
 }
 
 static int __init mtrr_init_finialize(void)
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
index 88b1c4b6174a..3b1883185185 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
@@ -16,8 +16,6 @@ struct mtrr_ops {
 	u32	vendor;
 	void	(*set)(unsigned int reg, unsigned long base,
 		       unsigned long size, mtrr_type type);
-	void	(*set_all)(void);
-
 	void	(*get)(unsigned int reg, unsigned long *base,
 		       unsigned long *size, mtrr_type *type);
 	int	(*get_free_region)(unsigned long base, unsigned long size,
-- 
2.35.3



* [PATCH v3 07/10] x86/mtrr: simplify mtrr_bp_init()
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
                   ` (5 preceding siblings ...)
  2022-09-08  8:49 ` [PATCH v3 06/10] x86/mtrr: remove set_all callback from struct mtrr_ops Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  2022-09-26 18:40   ` Borislav Petkov
  2022-09-08  8:49 ` [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init Juergen Gross
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

In case the generic cache interface is being used (Intel CPUs or a
64-bit system), the initialization sequence of the boot CPU is more
complicated than necessary:

- check if MTRR is enabled; if so, call mtrr_bp_pat_init(), which will
  disable caching, set the PAT MSR, and reenable caching

- call mtrr_cleanup(); in case that changed anything, call
  cache_cpu_init(), doing the same caching disable/enable dance as
  above, but this time setting the (modified) MTRR state (even if MTRR
  was disabled) AND setting the PAT MSR (again, even with MTRR
  disabled)

The sequence can be simplified a lot while removing potential
inconsistencies:

- check if MTRR is enabled; if so, call mtrr_cleanup() and then
  cache_cpu_init()

This ensures that:

- caching is no longer disabled and reenabled more than once

- MTRRs and/or the PAT MSR are not set on the boot processor during
  MTRR cleanups even if MTRRs are meant to be disabled

With that mtrr_bp_pat_init() can be removed.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/x86/kernel/cpu/mtrr/generic.c | 14 --------------
 arch/x86/kernel/cpu/mtrr/mtrr.c    |  7 ++-----
 arch/x86/kernel/cpu/mtrr/mtrr.h    |  1 -
 3 files changed, 2 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 5f83ee865def..b15634e5ad44 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -442,20 +442,6 @@ static void __init print_mtrr_state(void)
 		pr_debug("TOM2: %016llx aka %lldM\n", mtrr_tom2, mtrr_tom2>>20);
 }
 
-/* PAT setup for BP. We need to go through sync steps here */
-void __init mtrr_bp_pat_init(void)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-	cache_disable();
-
-	pat_init();
-
-	cache_enable();
-	local_irq_restore(flags);
-}
-
 /* Grab all of the MTRR state for this CPU into *state */
 bool __init get_mtrr_state(void)
 {
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 9609a0d235f8..956838bb4481 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -761,13 +761,10 @@ void __init mtrr_bp_init(void)
 			mtrr_enabled = get_mtrr_state();
 
 			if (mtrr_enabled) {
-				mtrr_bp_pat_init();
 				cache_generic |= CACHE_GENERIC_MTRR |
 						 CACHE_GENERIC_PAT;
-			}
-
-			if (mtrr_cleanup(phys_addr)) {
-				changed_by_mtrr_cleanup = 1;
+				changed_by_mtrr_cleanup =
+					mtrr_cleanup(phys_addr);
 				cache_cpu_init();
 			}
 		}
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
index 3b1883185185..c98928ceee6a 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
@@ -50,7 +50,6 @@ void set_mtrr_prepare_save(struct set_mtrr_context *ctxt);
 void fill_mtrr_var_range(unsigned int index,
 		u32 base_lo, u32 base_hi, u32 mask_lo, u32 mask_hi);
 bool get_mtrr_state(void);
-void mtrr_bp_pat_init(void);
 
 extern void __init set_mtrr_ops(const struct mtrr_ops *ops);
 
-- 
2.35.3



* [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
                   ` (6 preceding siblings ...)
  2022-09-08  8:49 ` [PATCH v3 07/10] x86/mtrr: simplify mtrr_bp_init() Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  2022-09-26 21:11   ` Borislav Petkov
  2022-09-08  8:49 ` [PATCH v3 09/10] x86/mtrr: add a stop_machine() handler calling only cache_cpu_init() Juergen Gross
  2022-09-08  8:49 ` [PATCH v3 10/10] x86: decouple pat and mtrr handling Juergen Gross
  9 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin

In order to prepare for decoupling MTRR and PAT, replace the
MTRR-specific mtrr_aps_delayed_init flag with a more generic
cache_aps_delayed_init one.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 arch/x86/include/asm/cacheinfo.h |  2 ++
 arch/x86/include/asm/mtrr.h      |  2 --
 arch/x86/kernel/cpu/cacheinfo.c  |  2 ++
 arch/x86/kernel/cpu/mtrr/mtrr.c  | 17 ++++-------------
 arch/x86/kernel/smpboot.c        |  5 +++--
 5 files changed, 11 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
index 563d9cb5fcf5..e80ed3c523c8 100644
--- a/arch/x86/include/asm/cacheinfo.h
+++ b/arch/x86/include/asm/cacheinfo.h
@@ -7,6 +7,8 @@ extern unsigned int cache_generic;
 #define CACHE_GENERIC_MTRR 0x01
 #define CACHE_GENERIC_PAT  0x02
 
+extern bool cache_aps_delayed_init;
+
 void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu);
 void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
 
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 986249a2b9b6..5d31219c8529 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -43,7 +43,6 @@ extern int mtrr_del(int reg, unsigned long base, unsigned long size);
 extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
 extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
 extern void mtrr_ap_init(void);
-extern void set_mtrr_aps_delayed_init(void);
 extern void mtrr_aps_init(void);
 extern void mtrr_bp_restore(void);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
@@ -87,7 +86,6 @@ static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
 {
 }
 #define mtrr_ap_init() do {} while (0)
-#define set_mtrr_aps_delayed_init() do {} while (0)
 #define mtrr_aps_init() do {} while (0)
 #define mtrr_bp_restore() do {} while (0)
 #define mtrr_disable() do {} while (0)
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 36378604ec61..c6e7c93e45e8 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -1139,3 +1139,5 @@ void cache_cpu_init(void)
 	cache_enable();
 	local_irq_restore(flags);
 }
+
+bool cache_aps_delayed_init;
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 956838bb4481..a47d46035240 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -65,7 +65,6 @@ unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
 static DEFINE_MUTEX(mtrr_mutex);
 
 u64 size_or_mask, size_and_mask;
-static bool mtrr_aps_delayed_init;
 
 static const struct mtrr_ops *mtrr_ops[X86_VENDOR_NUM] __ro_after_init;
 
@@ -172,7 +171,7 @@ static int mtrr_rendezvous_handler(void *info)
 	if (data->smp_reg != ~0U) {
 		mtrr_if->set(data->smp_reg, data->smp_base,
 			     data->smp_size, data->smp_type);
-	} else if (mtrr_aps_delayed_init || !cpu_online(smp_processor_id())) {
+	} else if (cache_aps_delayed_init || !cpu_online(smp_processor_id())) {
 		cache_cpu_init();
 	}
 	return 0;
@@ -784,7 +783,7 @@ void __init mtrr_bp_init(void)
 
 void mtrr_ap_init(void)
 {
-	if (!cache_generic || mtrr_aps_delayed_init)
+	if (!cache_generic || cache_aps_delayed_init)
 		return;
 
 	/*
@@ -818,14 +817,6 @@ void mtrr_save_state(void)
 	smp_call_function_single(first_cpu, mtrr_save_fixed_ranges, NULL, 1);
 }
 
-void set_mtrr_aps_delayed_init(void)
-{
-	if (!cache_generic)
-		return;
-
-	mtrr_aps_delayed_init = true;
-}
-
 /*
  * Delayed MTRR initialization for all AP's
  */
@@ -839,11 +830,11 @@ void mtrr_aps_init(void)
 	 * by doing set_mtrr_aps_delayed_init(), prior to this point. If not,
 	 * then we are done.
 	 */
-	if (!mtrr_aps_delayed_init)
+	if (!cache_aps_delayed_init)
 		return;
 
 	set_mtrr(~0U, 0, 0, 0);
-	mtrr_aps_delayed_init = false;
+	cache_aps_delayed_init = false;
 }
 
 void mtrr_bp_restore(void)
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index f24227bc3220..ef7bce21cbe8 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -58,6 +58,7 @@
 #include <linux/overflow.h>
 
 #include <asm/acpi.h>
+#include <asm/cacheinfo.h>
 #include <asm/desc.h>
 #include <asm/nmi.h>
 #include <asm/irq.h>
@@ -1428,7 +1429,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 
 	uv_system_init();
 
-	set_mtrr_aps_delayed_init();
+	cache_aps_delayed_init = true;
 
 	smp_quirk_init_udelay();
 
@@ -1439,7 +1440,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 
 void arch_thaw_secondary_cpus_begin(void)
 {
-	set_mtrr_aps_delayed_init();
+	cache_aps_delayed_init = true;
 }
 
 void arch_thaw_secondary_cpus_end(void)
-- 
2.35.3



* [PATCH v3 09/10] x86/mtrr: add a stop_machine() handler calling only cache_cpu_init()
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
                   ` (7 preceding siblings ...)
  2022-09-08  8:49 ` [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  2022-09-08  8:49 ` [PATCH v3 10/10] x86: decouple pat and mtrr handling Juergen Gross
  9 siblings, 0 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel, linux-pm
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Rafael J. Wysocki, Pavel Machek

Instead of having a stop_machine() handler for either a specific MTRR
register or all state at once, add a handler just for calling
cache_cpu_init() if appropriate.

Add functions for calling stop_machine() with this handler as well.

Add a generic replacement for mtrr_bp_restore() and a wrapper for
mtrr_bp_init().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- completely new replacement of former patch 2
---
 arch/x86/include/asm/cacheinfo.h |  5 +-
 arch/x86/include/asm/mtrr.h      |  4 --
 arch/x86/kernel/cpu/cacheinfo.c  | 59 +++++++++++++++++++++-
 arch/x86/kernel/cpu/common.c     |  3 +-
 arch/x86/kernel/cpu/mtrr/mtrr.c  | 87 +-------------------------------
 arch/x86/kernel/setup.c          |  3 +-
 arch/x86/kernel/smpboot.c        |  4 +-
 arch/x86/power/cpu.c             |  3 +-
 8 files changed, 72 insertions(+), 96 deletions(-)

diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
index e80ed3c523c8..a122a1aad936 100644
--- a/arch/x86/include/asm/cacheinfo.h
+++ b/arch/x86/include/asm/cacheinfo.h
@@ -14,6 +14,9 @@ void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
 
 void cache_disable(void);
 void cache_enable(void);
-void cache_cpu_init(void);
+void cache_bp_init(void);
+void cache_bp_restore(void);
+void cache_ap_init(void);
+void cache_aps_init(void);
 
 #endif /* _ASM_X86_CACHEINFO_H */
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 5d31219c8529..ec73d1e5bafb 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -42,8 +42,6 @@ extern int mtrr_add_page(unsigned long base, unsigned long size,
 extern int mtrr_del(int reg, unsigned long base, unsigned long size);
 extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
 extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
-extern void mtrr_ap_init(void);
-extern void mtrr_aps_init(void);
 extern void mtrr_bp_restore(void);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 extern int amd_special_default_mtrr(void);
@@ -85,8 +83,6 @@ static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
 static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
 {
 }
-#define mtrr_ap_init() do {} while (0)
-#define mtrr_aps_init() do {} while (0)
 #define mtrr_bp_restore() do {} while (0)
 #define mtrr_disable() do {} while (0)
 #define mtrr_enable() do {} while (0)
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index c6e7c93e45e8..4946f93eb16f 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -15,6 +15,7 @@
 #include <linux/capability.h>
 #include <linux/sysfs.h>
 #include <linux/pci.h>
+#include <linux/stop_machine.h>
 
 #include <asm/cpufeature.h>
 #include <asm/cacheinfo.h>
@@ -1121,7 +1122,7 @@ void cache_enable(void) __releases(cache_disable_lock)
 	raw_spin_unlock(&cache_disable_lock);
 }
 
-void cache_cpu_init(void)
+static void cache_cpu_init(void)
 {
 	unsigned long flags;
 
@@ -1141,3 +1142,59 @@ void cache_cpu_init(void)
 }
 
 bool cache_aps_delayed_init;
+
+static int cache_rendezvous_handler(void *unused)
+{
+	if (cache_aps_delayed_init || !cpu_online(smp_processor_id()))
+		cache_cpu_init();
+
+	return 0;
+}
+
+void __init cache_bp_init(void)
+{
+	mtrr_bp_init();
+
+	if (cache_generic)
+		cache_cpu_init();
+}
+
+void cache_bp_restore(void)
+{
+	if (cache_generic)
+		cache_cpu_init();
+}
+
+void cache_ap_init(void)
+{
+	if (!cache_generic || cache_aps_delayed_init)
+		return;
+
+	/*
+	 * Ideally we should hold mtrr_mutex here to avoid mtrr entries
+	 * changed, but this routine will be called in cpu boot time,
+	 * holding the lock breaks it.
+	 *
+	 * This routine is called in two cases:
+	 *
+	 *   1. very early time of software resume, when there absolutely
+	 *      isn't mtrr entry changes;
+	 *
+	 *   2. cpu hotadd time. We let mtrr_add/del_page hold cpuhotplug
+	 *      lock to prevent mtrr entry changes
+	 */
+	stop_machine_from_inactive_cpu(cache_rendezvous_handler, NULL,
+				       cpu_callout_mask);
+}
+
+/*
+ * Delayed cache initialization for all AP's
+ */
+void cache_aps_init(void)
+{
+	if (!cache_generic || !cache_aps_delayed_init)
+		return;
+
+	stop_machine(cache_rendezvous_handler, NULL, cpu_online_mask);
+	cache_aps_delayed_init = false;
+}
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 3e508f239098..fd058b547f8d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -52,6 +52,7 @@
 #include <asm/cpu.h>
 #include <asm/mce.h>
 #include <asm/msr.h>
+#include <asm/cacheinfo.h>
 #include <asm/memtype.h>
 #include <asm/microcode.h>
 #include <asm/microcode_intel.h>
@@ -1948,7 +1949,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
 #ifdef CONFIG_X86_32
 	enable_sep_cpu();
 #endif
-	mtrr_ap_init();
+	cache_ap_init();
 	validate_apic_and_package_id(c);
 	x86_spec_ctrl_setup_ap();
 	update_srbds_msr();
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index a47d46035240..5e8be11d1873 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -70,9 +70,6 @@ static const struct mtrr_ops *mtrr_ops[X86_VENDOR_NUM] __ro_after_init;
 
 const struct mtrr_ops *mtrr_if;
 
-static void set_mtrr(unsigned int reg, unsigned long base,
-		     unsigned long size, mtrr_type type);
-
 void __init set_mtrr_ops(const struct mtrr_ops *ops)
 {
 	if (ops->vendor && ops->vendor < X86_VENDOR_NUM)
@@ -155,25 +152,8 @@ static int mtrr_rendezvous_handler(void *info)
 {
 	struct set_mtrr_data *data = info;
 
-	/*
-	 * We use this same function to initialize the mtrrs during boot,
-	 * resume, runtime cpu online and on an explicit request to set a
-	 * specific MTRR.
-	 *
-	 * During boot or suspend, the state of the boot cpu's mtrrs has been
-	 * saved, and we want to replicate that across all the cpus that come
-	 * online (either at the end of boot or resume or during a runtime cpu
-	 * online). If we're doing that, @reg is set to something special and on
-	 * all the cpu's we do cache_cpu_init() (On the logical cpu that
-	 * started the boot/resume sequence, this might be a duplicate
-	 * cache_cpu_init()).
-	 */
-	if (data->smp_reg != ~0U) {
-		mtrr_if->set(data->smp_reg, data->smp_base,
-			     data->smp_size, data->smp_type);
-	} else if (cache_aps_delayed_init || !cpu_online(smp_processor_id())) {
-		cache_cpu_init();
-	}
+	mtrr_if->set(data->smp_reg, data->smp_base,
+		     data->smp_size, data->smp_type);
 	return 0;
 }
 
@@ -243,19 +223,6 @@ static void set_mtrr_cpuslocked(unsigned int reg, unsigned long base,
 	stop_machine_cpuslocked(mtrr_rendezvous_handler, &data, cpu_online_mask);
 }
 
-static void set_mtrr_from_inactive_cpu(unsigned int reg, unsigned long base,
-				      unsigned long size, mtrr_type type)
-{
-	struct set_mtrr_data data = { .smp_reg = reg,
-				      .smp_base = base,
-				      .smp_size = size,
-				      .smp_type = type
-				    };
-
-	stop_machine_from_inactive_cpu(mtrr_rendezvous_handler, &data,
-				       cpu_callout_mask);
-}
-
 /**
  * mtrr_add_page - Add a memory type region
  * @base: Physical base address of region in pages (in units of 4 kB!)
@@ -764,7 +731,6 @@ void __init mtrr_bp_init(void)
 						 CACHE_GENERIC_PAT;
 				changed_by_mtrr_cleanup =
 					mtrr_cleanup(phys_addr);
-				cache_cpu_init();
 			}
 		}
 	}
@@ -781,27 +747,6 @@ void __init mtrr_bp_init(void)
 	}
 }
 
-void mtrr_ap_init(void)
-{
-	if (!cache_generic || cache_aps_delayed_init)
-		return;
-
-	/*
-	 * Ideally we should hold mtrr_mutex here to avoid mtrr entries
-	 * changed, but this routine will be called in cpu boot time,
-	 * holding the lock breaks it.
-	 *
-	 * This routine is called in two cases:
-	 *
-	 *   1. very early time of software resume, when there absolutely
-	 *      isn't mtrr entry changes;
-	 *
-	 *   2. cpu hotadd time. We let mtrr_add/del_page hold cpuhotplug
-	 *      lock to prevent mtrr entry changes
-	 */
-	set_mtrr_from_inactive_cpu(~0U, 0, 0, 0);
-}
-
 /**
  * mtrr_save_state - Save current fixed-range MTRR state of the first
  *	cpu in cpu_online_mask.
@@ -817,34 +762,6 @@ void mtrr_save_state(void)
 	smp_call_function_single(first_cpu, mtrr_save_fixed_ranges, NULL, 1);
 }
 
-/*
- * Delayed MTRR initialization for all AP's
- */
-void mtrr_aps_init(void)
-{
-	if (!cache_generic)
-		return;
-
-	/*
-	 * Check if someone has requested the delay of AP MTRR initialization,
-	 * by doing set_mtrr_aps_delayed_init(), prior to this point. If not,
-	 * then we are done.
-	 */
-	if (!cache_aps_delayed_init)
-		return;
-
-	set_mtrr(~0U, 0, 0, 0);
-	cache_aps_delayed_init = false;
-}
-
-void mtrr_bp_restore(void)
-{
-	if (!cache_generic)
-		return;
-
-	cache_cpu_init();
-}
-
 static int __init mtrr_init_finialize(void)
 {
 	if (!mtrr_enabled)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 216fee7144ee..e0e185ee0229 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -34,6 +34,7 @@
 #include <asm/numa.h>
 #include <asm/bios_ebda.h>
 #include <asm/bugs.h>
+#include <asm/cacheinfo.h>
 #include <asm/cpu.h>
 #include <asm/efi.h>
 #include <asm/gart.h>
@@ -1075,7 +1076,7 @@ void __init setup_arch(char **cmdline_p)
 
 	/* update e820 for memory not covered by WB MTRRs */
 	if (IS_ENABLED(CONFIG_MTRR))
-		mtrr_bp_init();
+		cache_bp_init();
 	else
 		pat_disable("PAT support disabled because CONFIG_MTRR is disabled in the kernel.");
 
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index ef7bce21cbe8..ff793f436904 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1445,7 +1445,7 @@ void arch_thaw_secondary_cpus_begin(void)
 
 void arch_thaw_secondary_cpus_end(void)
 {
-	mtrr_aps_init();
+	cache_aps_init();
 }
 
 /*
@@ -1488,7 +1488,7 @@ void __init native_smp_cpus_done(unsigned int max_cpus)
 
 	nmi_selftest();
 	impress_friends();
-	mtrr_aps_init();
+	cache_aps_init();
 }
 
 static int __initdata setup_possible_cpus = -1;
diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index bb176c72891c..754221c9a1c3 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -23,6 +23,7 @@
 #include <asm/fpu/api.h>
 #include <asm/debugreg.h>
 #include <asm/cpu.h>
+#include <asm/cacheinfo.h>
 #include <asm/mmu_context.h>
 #include <asm/cpu_device_id.h>
 #include <asm/microcode.h>
@@ -261,7 +262,7 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
 	do_fpu_end();
 	tsc_verify_tsc_adjust(true);
 	x86_platform.restore_sched_clock_state();
-	mtrr_bp_restore();
+	cache_bp_restore();
 	perf_restore_debug_store();
 
 	c = &cpu_data(smp_processor_id());
-- 
2.35.3



* [PATCH v3 10/10] x86: decouple pat and mtrr handling
  2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
                   ` (8 preceding siblings ...)
  2022-09-08  8:49 ` [PATCH v3 09/10] x86/mtrr: add a stop_machine() handler calling only cache_cpu_init() Juergen Gross
@ 2022-09-08  8:49 ` Juergen Gross
  9 siblings, 0 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-08  8:49 UTC (permalink / raw)
  To: xen-devel, x86, linux-kernel
  Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, H. Peter Anvin, Andy Lutomirski, Peter Zijlstra

Today PAT is usable only with MTRR being active, with some nasty
tweaks to make PAT usable when running as a Xen PV guest, which
doesn't support MTRR.

The reason for this coupling is that both PAT MSR changes and MTRR
changes require a similar sequence, and so full PAT support was added
using the already available MTRR handling.

Xen PV PAT handling can work without MTRR, as it just needs to consume
the PAT MSR setting done by the hypervisor, without the ability or the
need to change it. The coupling has in turn resulted in a convoluted
initialization sequence and in wrong decisions regarding cache mode
availability due to misleading PAT availability flags.

Fix all of that by allowing PAT to be used without MTRR and by
reworking the current PAT initialization sequence to better match the
newly introduced generic cache initialization.

This removes the need for the recently added pat_force_disabled flag,
so remove the remnants of the patch that added it.
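
Since the memtype.c hunk is cut off below in this archive, here is a
rough sketch of the per-CPU half of the rework described above
(pat_msr_val, as a name for the PAT value selected once on the boot
CPU, is an assumption, not taken verbatim from the patch):

void pat_cpu_init(void)
{
	/*
	 * Sketch only: write the PAT value selected on the boot CPU.
	 * cache_cpu_init() brackets this with cache_disable() and
	 * cache_enable(), which handle cache and TLB flushing.
	 */
	wrmsrl(MSR_IA32_CR_PAT, pat_msr_val);	/* pat_msr_val: assumed name */
}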

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- former patch 3 completely reworked
---
 arch/x86/include/asm/memtype.h  |   5 +-
 arch/x86/kernel/cpu/cacheinfo.c |   3 +-
 arch/x86/kernel/cpu/mtrr/mtrr.c |  13 +---
 arch/x86/kernel/setup.c         |  13 +---
 arch/x86/mm/pat/memtype.c       | 127 ++++++++++----------------------
 5 files changed, 45 insertions(+), 116 deletions(-)

diff --git a/arch/x86/include/asm/memtype.h b/arch/x86/include/asm/memtype.h
index 9ca760e430b9..113b2fa51849 100644
--- a/arch/x86/include/asm/memtype.h
+++ b/arch/x86/include/asm/memtype.h
@@ -6,9 +6,8 @@
 #include <asm/pgtable_types.h>
 
 extern bool pat_enabled(void);
-extern void pat_disable(const char *reason);
-extern void pat_init(void);
-extern void init_cache_modes(void);
+extern void pat_bp_init(void);
+extern void pat_cpu_init(void);
 
 extern int memtype_reserve(u64 start, u64 end,
 		enum page_cache_mode req_pcm, enum page_cache_mode *ret_pcm);
diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
index 4946f93eb16f..08130919d55d 100644
--- a/arch/x86/kernel/cpu/cacheinfo.c
+++ b/arch/x86/kernel/cpu/cacheinfo.c
@@ -1135,7 +1135,7 @@ static void cache_cpu_init(void)
 
 	/* Set PAT. */
 	if (cache_generic & CACHE_GENERIC_PAT)
-		pat_init();
+		pat_cpu_init();
 
 	cache_enable();
 	local_irq_restore(flags);
@@ -1154,6 +1154,7 @@ static int cache_rendezvous_handler(void *unused)
 void __init cache_bp_init(void)
 {
 	mtrr_bp_init();
+	pat_bp_init();
 
 	if (cache_generic)
 		cache_cpu_init();
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 5e8be11d1873..921f425fe7b3 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -727,24 +727,15 @@ void __init mtrr_bp_init(void)
 			mtrr_enabled = get_mtrr_state();
 
 			if (mtrr_enabled) {
-				cache_generic |= CACHE_GENERIC_MTRR |
-						 CACHE_GENERIC_PAT;
+				cache_generic |= CACHE_GENERIC_MTRR;
 				changed_by_mtrr_cleanup =
 					mtrr_cleanup(phys_addr);
 			}
 		}
 	}
 
-	if (!mtrr_enabled) {
+	if (!mtrr_enabled)
 		pr_info("Disabled\n");
-
-		/*
-		 * PAT initialization relies on MTRR's rendezvous handler.
-		 * Skip PAT init until the handler can initialize both
-		 * features independently.
-		 */
-		pat_disable("MTRRs disabled, skipping PAT initialization too.");
-	}
 }
 
 /**
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index e0e185ee0229..aacaa96f0195 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1075,23 +1075,12 @@ void __init setup_arch(char **cmdline_p)
 	max_pfn = e820__end_of_ram_pfn();
 
 	/* update e820 for memory not covered by WB MTRRs */
-	if (IS_ENABLED(CONFIG_MTRR))
-		cache_bp_init();
-	else
-		pat_disable("PAT support disabled because CONFIG_MTRR is disabled in the kernel.");
-
+	cache_bp_init();
 	if (mtrr_trim_uncached_memory(max_pfn))
 		max_pfn = e820__end_of_ram_pfn();
 
 	max_possible_pfn = max_pfn;
 
-	/*
-	 * This call is required when the CPU does not support PAT. If
-	 * mtrr_bp_init() invoked it already via pat_init() the call has no
-	 * effect.
-	 */
-	init_cache_modes();
-
 	/*
 	 * Define random base addresses for memory sections after max_pfn is
 	 * defined and before each memory section base is used.
diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index 66a209f7eb86..1f62fc40e6df 100644
--- a/arch/x86/mm/pat/memtype.c
+++ b/arch/x86/mm/pat/memtype.c
@@ -43,6 +43,7 @@
 #include <linux/rbtree.h>
 
 #include <asm/cacheflush.h>
+#include <asm/cacheinfo.h>
 #include <asm/processor.h>
 #include <asm/tlbflush.h>
 #include <asm/x86_init.h>
@@ -60,41 +61,34 @@
 #undef pr_fmt
 #define pr_fmt(fmt) "" fmt
 
-static bool __read_mostly pat_bp_initialized;
 static bool __read_mostly pat_disabled = !IS_ENABLED(CONFIG_X86_PAT);
-static bool __initdata pat_force_disabled = !IS_ENABLED(CONFIG_X86_PAT);
-static bool __read_mostly pat_bp_enabled;
-static bool __read_mostly pat_cm_initialized;
+static u64 __read_mostly pat_msr_val;
 
 /*
  * PAT support is enabled by default, but can be disabled for
  * various user-requested or hardware-forced reasons:
  */
-void pat_disable(const char *msg_reason)
+static void __init pat_disable(const char *msg_reason)
 {
 	if (pat_disabled)
 		return;
 
-	if (pat_bp_initialized) {
-		WARN_ONCE(1, "x86/PAT: PAT cannot be disabled after initialization\n");
-		return;
-	}
-
 	pat_disabled = true;
 	pr_info("x86/PAT: %s\n", msg_reason);
+
+	cache_generic &= ~CACHE_GENERIC_PAT;
 }
 
 static int __init nopat(char *str)
 {
 	pat_disable("PAT support disabled via boot option.");
-	pat_force_disabled = true;
 	return 0;
 }
 early_param("nopat", nopat);
 
 bool pat_enabled(void)
 {
-	return pat_bp_enabled;
+	return !pat_disabled;
 }
 EXPORT_SYMBOL_GPL(pat_enabled);
 
@@ -192,7 +186,8 @@ enum {
 
 #define CM(c) (_PAGE_CACHE_MODE_ ## c)
 
-static enum page_cache_mode pat_get_cache_mode(unsigned pat_val, char *msg)
+static enum page_cache_mode __init pat_get_cache_mode(unsigned int pat_val,
+						      char *msg)
 {
 	enum page_cache_mode cache;
 	char *cache_mode;
@@ -219,14 +214,12 @@ static enum page_cache_mode pat_get_cache_mode(unsigned pat_val, char *msg)
  * configuration.
  * Using lower indices is preferred, so we start with highest index.
  */
-static void __init_cache_modes(u64 pat)
+static void __init init_cache_modes(u64 pat)
 {
 	enum page_cache_mode cache;
 	char pat_msg[33];
 	int i;
 
-	WARN_ON_ONCE(pat_cm_initialized);
-
 	pat_msg[32] = 0;
 	for (i = 7; i >= 0; i--) {
 		cache = pat_get_cache_mode((pat >> (i * 8)) & 7,
@@ -234,34 +227,11 @@ static void __init_cache_modes(u64 pat)
 		update_cache_mode_entry(i, cache);
 	}
 	pr_info("x86/PAT: Configuration [0-7]: %s\n", pat_msg);
-
-	pat_cm_initialized = true;
 }
 
 #define PAT(x, y)	((u64)PAT_ ## y << ((x)*8))
 
-static void pat_bp_init(u64 pat)
-{
-	u64 tmp_pat;
-
-	if (!boot_cpu_has(X86_FEATURE_PAT)) {
-		pat_disable("PAT not supported by the CPU.");
-		return;
-	}
-
-	rdmsrl(MSR_IA32_CR_PAT, tmp_pat);
-	if (!tmp_pat) {
-		pat_disable("PAT support disabled by the firmware.");
-		return;
-	}
-
-	wrmsrl(MSR_IA32_CR_PAT, pat);
-	pat_bp_enabled = true;
-
-	__init_cache_modes(pat);
-}
-
-static void pat_ap_init(u64 pat)
+void pat_cpu_init(void)
 {
 	if (!boot_cpu_has(X86_FEATURE_PAT)) {
 		/*
@@ -271,30 +241,35 @@ static void pat_ap_init(u64 pat)
 		panic("x86/PAT: PAT enabled, but not supported by secondary CPU\n");
 	}
 
-	wrmsrl(MSR_IA32_CR_PAT, pat);
+	wrmsrl(MSR_IA32_CR_PAT, pat_msr_val);
 }
 
-void __init init_cache_modes(void)
+/**
+ * pat_bp_init - Initialize the PAT MSR value and PAT table
+ *
+ * This function initializes PAT MSR value and PAT table with an OS-defined
+ * value to enable additional cache attributes, WC, WT and WP.
+ *
+ * This function prepares for pat_cpu_init() being called via
+ * cache_cpu_init() on all CPUs.
+ */
+void __init pat_bp_init(void)
 {
+	struct cpuinfo_x86 *c = &boot_cpu_data;
 	u64 pat = 0;
 
-	if (pat_cm_initialized)
-		return;
+#ifndef CONFIG_X86_PAT
+	pr_info_once("x86/PAT: PAT support disabled because CONFIG_X86_PAT is disabled in the kernel.\n");
+#endif
 
-	if (boot_cpu_has(X86_FEATURE_PAT)) {
-		/*
-		 * CPU supports PAT. Set PAT table to be consistent with
-		 * PAT MSR. This case supports "nopat" boot option, and
-		 * virtual machine environments which support PAT without
-		 * MTRRs. In specific, Xen has unique setup to PAT MSR.
-		 *
-		 * If PAT MSR returns 0, it is considered invalid and emulates
-		 * as No PAT.
-		 */
+	if (!boot_cpu_has(X86_FEATURE_PAT))
+		pat_disable("PAT not supported by the CPU.");
+	else
 		rdmsrl(MSR_IA32_CR_PAT, pat);
-	}
 
 	if (!pat) {
+		pat_disable("PAT support disabled by the firmware.");
+
 		/*
 		 * No PAT. Emulate the PAT table that corresponds to the two
 		 * cache bits, PWT (Write Through) and PCD (Cache Disable).
@@ -315,38 +290,14 @@ void __init init_cache_modes(void)
 		 */
 		pat = PAT(0, WB) | PAT(1, WT) | PAT(2, UC_MINUS) | PAT(3, UC) |
 		      PAT(4, WB) | PAT(5, WT) | PAT(6, UC_MINUS) | PAT(7, UC);
-	} else if (!pat_force_disabled && cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) {
-		/*
-		 * Clearly PAT is enabled underneath. Allow pat_enabled() to
-		 * reflect this.
-		 */
-		pat_bp_enabled = true;
 	}
 
-	__init_cache_modes(pat);
-}
-
-/**
- * pat_init - Initialize the PAT MSR and PAT table on the current CPU
- *
- * This function initializes PAT MSR and PAT table with an OS-defined value
- * to enable additional cache attributes, WC, WT and WP.
- *
- * This function must be called on all CPUs using the specific sequence of
- * operations defined in Intel SDM. mtrr_rendezvous_handler() provides this
- * procedure for PAT.
- */
-void pat_init(void)
-{
-	u64 pat;
-	struct cpuinfo_x86 *c = &boot_cpu_data;
-
-#ifndef CONFIG_X86_PAT
-	pr_info_once("x86/PAT: PAT support disabled because CONFIG_X86_PAT is disabled in the kernel.\n");
-#endif
+	/* Xen PV doesn't allow writing the PAT MSR, but all cache modes are fine. */
+	if (pat_disabled || cpu_feature_enabled(X86_FEATURE_XENPV)) {
+		init_cache_modes(pat);
 
-	if (pat_disabled)
 		return;
+	}
 
 	if ((c->x86_vendor == X86_VENDOR_INTEL) &&
 	    (((c->x86 == 0x6) && (c->x86_model <= 0xd)) ||
@@ -404,12 +355,10 @@ void pat_init(void)
 		      PAT(4, WB) | PAT(5, WP) | PAT(6, UC_MINUS) | PAT(7, WT);
 	}
 
-	if (!pat_bp_initialized) {
-		pat_bp_init(pat);
-		pat_bp_initialized = true;
-	} else {
-		pat_ap_init(pat);
-	}
+	pat_msr_val = pat;
+	cache_generic |= CACHE_GENERIC_PAT;
+
+	init_cache_modes(pat);
 }
 
 #undef PAT
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 03/10] x86/mtrr: replace use_intel() with a local flag
  2022-09-08  8:49 ` [PATCH v3 03/10] x86/mtrr: replace use_intel() with a local flag Juergen Gross
@ 2022-09-11 10:16   ` Borislav Petkov
  2022-09-12  9:10     ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-11 10:16 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Thu, Sep 08, 2022 at 10:49:07AM +0200, Juergen Gross wrote:
> diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
> index 86b2e0dcc4bf..1aeafa9888f7 100644
> --- a/arch/x86/include/asm/cacheinfo.h
> +++ b/arch/x86/include/asm/cacheinfo.h
> @@ -2,6 +2,11 @@
>  #ifndef _ASM_X86_CACHEINFO_H
>  #define _ASM_X86_CACHEINFO_H
>  
> +/* Kernel controls MTRR and/or PAT MSRs. */
> +extern unsigned int cache_generic;

So this should be called something more descriptive like

	memory_caching_types

or so to denote that this is a bitfield of supported memory caching
technologies. The code then would read as

	if (memory_caching_types & CACHE_MTRR)

The name's still not optimal tho - needs more brooding over.

> +#define CACHE_GENERIC_MTRR 0x01
> +#define CACHE_GENERIC_PAT  0x02

And those should be CACHE_{MTRR,PAT}.

>  void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu);
>  void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
>  
> diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
> index 66556833d7af..3b05d3ade7a6 100644
> --- a/arch/x86/kernel/cpu/cacheinfo.c
> +++ b/arch/x86/kernel/cpu/cacheinfo.c
> @@ -35,6 +35,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
>  /* Shared L2 cache maps */
>  DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
>  
> +/* Kernel controls MTRR and/or PAT MSRs. */
> +unsigned int cache_generic;

This should either be __ro_after_init and initialized to 0 or you need
accessors...
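
IOW, something like this minimal sketch (first variant):

	/* Kernel controls MTRR and/or PAT MSRs. */
	unsigned int cache_generic __ro_after_init;

so that a write after the init phase faults instead of silently
changing the setup.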

>  u32 num_var_ranges;
> -static bool __mtrr_enabled;
> -
> -static bool mtrr_enabled(void)
> -{
> -	return __mtrr_enabled;
> -}
> +static bool mtrr_enabled;

Hmm, I don't like this. There's way too many boolean flags in the mtrr
code. There's mtrr_state.enabled too. ;-\

Can we set (or clear) X86_FEATURE_MTRR to denote MTRR enablement status
and get rid of one more boolean flag?
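
Roughly like this sketch (only illustrative - the 32-bit special cases
would need auditing first):

	if (mtrr_if)
		setup_force_cpu_cap(X86_FEATURE_MTRR);
	else
		setup_clear_cpu_cap(X86_FEATURE_MTRR);

with the mtrr_enabled checks then becoming boot_cpu_has(X86_FEATURE_MTRR).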

...

>  void __init mtrr_bp_init(void)
>  {
> +	bool use_generic = false;
>  	u32 phys_addr;
>  
>  	init_ifs();
> @@ -694,6 +691,7 @@ void __init mtrr_bp_init(void)
>  
>  	if (boot_cpu_has(X86_FEATURE_MTRR)) {
>  		mtrr_if = &generic_mtrr_ops;
> +		use_generic = true;
>  		size_or_mask = SIZE_OR_MASK_BITS(36);
>  		size_and_mask = 0x00f00000;
>  		phys_addr = 36;
> @@ -755,15 +753,18 @@ void __init mtrr_bp_init(void)
>  	}
>  
>  	if (mtrr_if) {
> -		__mtrr_enabled = true;
> -		set_num_var_ranges();
> +		mtrr_enabled = true;
> +		set_num_var_ranges(use_generic);

You don't need use_generic either:

		set_num_var_ranges(mtrr_if == &generic_mtrr_ops);

(The reason being I wanna get rid of that nasty minefield of boolean
vars all round that code).

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 04/10] x86: move some code out of arch/x86/kernel/cpu/mtrr
  2022-09-08  8:49 ` [PATCH v3 04/10] x86: move some code out of arch/x86/kernel/cpu/mtrr Juergen Gross
@ 2022-09-11 11:02   ` Borislav Petkov
  2022-09-12  9:11     ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-11 11:02 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Thu, Sep 08, 2022 at 10:49:08AM +0200, Juergen Gross wrote:
> Prepare making PAT and MTRR support independent from each other by
> moving some code needed by both out of the MTRR specific sources.

This needs to be two patches at least: first one is only *mechanical*
move without any changes. The next one(s) do the renaming and other
changes etc. Otherwise reviewing it is unnecessarily complicated.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 03/10] x86/mtrr: replace use_intel() with a local flag
  2022-09-11 10:16   ` Borislav Petkov
@ 2022-09-12  9:10     ` Juergen Gross
  2022-09-19 19:10       ` Borislav Petkov
  2022-09-28 10:17       ` Juergen Gross
  0 siblings, 2 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-12  9:10 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 11.09.22 12:16, Borislav Petkov wrote:
> On Thu, Sep 08, 2022 at 10:49:07AM +0200, Juergen Gross wrote:
>> diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
>> index 86b2e0dcc4bf..1aeafa9888f7 100644
>> --- a/arch/x86/include/asm/cacheinfo.h
>> +++ b/arch/x86/include/asm/cacheinfo.h
>> @@ -2,6 +2,11 @@
>>   #ifndef _ASM_X86_CACHEINFO_H
>>   #define _ASM_X86_CACHEINFO_H
>>   
>> +/* Kernel controls MTRR and/or PAT MSRs. */
>> +extern unsigned int cache_generic;
> 
> So this should be called something more descriptive like
> 
> 	memory_caching_types

In the end this variable doesn't specify which caching types are available,
but the ways to select/control the caching types.

So what about "memory_caching_select" or "memory_caching_control" instead?

> or so to denote that this is a bitfield of supported memory caching
> technologies. The code then would read as
> 
> 	if (memory_caching_types & CACHE_MTRR)
> 
> The name's still not optimal tho - needs more brooding over.
> 
>> +#define CACHE_GENERIC_MTRR 0x01
>> +#define CACHE_GENERIC_PAT  0x02
> 
> And those should be CACHE_{MTRR,PAT}.

Fine with me.

>>   void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu);
>>   void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
>>   
>> diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
>> index 66556833d7af..3b05d3ade7a6 100644
>> --- a/arch/x86/kernel/cpu/cacheinfo.c
>> +++ b/arch/x86/kernel/cpu/cacheinfo.c
>> @@ -35,6 +35,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
>>   /* Shared L2 cache maps */
>>   DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
>>   
>> +/* Kernel controls MTRR and/or PAT MSRs. */
>> +unsigned int cache_generic;
> 
> This should either be __ro_after_init and initialized to 0 or you need
> accessors...

Okay.

> 
>>   u32 num_var_ranges;
>> -static bool __mtrr_enabled;
>> -
>> -static bool mtrr_enabled(void)
>> -{
>> -	return __mtrr_enabled;
>> -}
>> +static bool mtrr_enabled;
> 
> Hmm, I don't like this. There's way too many boolean flags in the mtrr
> code. There's mtrr_state.enabled too. ;-\
> 
> Can we set (or clear) X86_FEATURE_MTRR to denote MTRR enablement status
> and get rid of one more boolean flag?

I'll have a look.

> 
> ...
> 
>>   void __init mtrr_bp_init(void)
>>   {
>> +	bool use_generic = false;
>>   	u32 phys_addr;
>>   
>>   	init_ifs();
>> @@ -694,6 +691,7 @@ void __init mtrr_bp_init(void)
>>   
>>   	if (boot_cpu_has(X86_FEATURE_MTRR)) {
>>   		mtrr_if = &generic_mtrr_ops;
>> +		use_generic = true;
>>   		size_or_mask = SIZE_OR_MASK_BITS(36);
>>   		size_and_mask = 0x00f00000;
>>   		phys_addr = 36;
>> @@ -755,15 +753,18 @@ void __init mtrr_bp_init(void)
>>   	}
>>   
>>   	if (mtrr_if) {
>> -		__mtrr_enabled = true;
>> -		set_num_var_ranges();
>> +		mtrr_enabled = true;
>> +		set_num_var_ranges(use_generic);
> 
> You don't need use_generic either:
> 
> 		set_num_var_ranges(mtrr_if == &generic_mtrr_ops);
> 
> (The reason being I wanna get rid of that nasty minefield of boolean
> vars all round that code).

Fine with me.


Juergen


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 04/10] x86: move some code out of arch/x86/kernel/cpu/mtrr
  2022-09-11 11:02   ` Borislav Petkov
@ 2022-09-12  9:11     ` Juergen Gross
  0 siblings, 0 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-12  9:11 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 11.09.22 13:02, Borislav Petkov wrote:
> On Thu, Sep 08, 2022 at 10:49:08AM +0200, Juergen Gross wrote:
>> Prepare making PAT and MTRR support independent from each other by
>> moving some code needed by both out of the MTRR specific sources.
> 
> This needs to be two patches at least: first one is only *mechanical*
> move without any changes. The next one(s) do the renaming and other
> changes etc. Otherwise reviewing it is unnecessarily complicated.

Okay.


Juergen



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 03/10] x86/mtrr: replace use_intel() with a local flag
  2022-09-12  9:10     ` Juergen Gross
@ 2022-09-19 19:10       ` Borislav Petkov
  2022-09-28 10:17       ` Juergen Gross
  1 sibling, 0 replies; 41+ messages in thread
From: Borislav Petkov @ 2022-09-19 19:10 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Mon, Sep 12, 2022 at 11:10:29AM +0200, Juergen Gross wrote:
> In the end this variable doesn't specify which caching types are available,
> but the ways to select/control the caching types.
> 
> So what about "memory_caching_select" or "memory_caching_control" instead?

_control sounds like the right thing. As in, which of the memory caching
things the kernel controls. Along with a comment above it what this
exactly is. Yap.
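
I.e., something along these lines (sketch):

	/*
	 * Bitfield of the memory caching mechanisms (MTRR, PAT) whose
	 * MSRs the kernel controls.
	 */
	unsigned int memory_caching_control __ro_after_init;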

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 05/10] x86/mtrr: split generic_set_all()
  2022-09-08  8:49 ` [PATCH v3 05/10] x86/mtrr: split generic_set_all() Juergen Gross
@ 2022-09-19 19:25   ` Borislav Petkov
  0 siblings, 0 replies; 41+ messages in thread
From: Borislav Petkov @ 2022-09-19 19:25 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Thu, Sep 08, 2022 at 10:49:09AM +0200, Juergen Gross wrote:
> Split generic_set_all() into multiple parts, while moving the main
> function body into cacheinfo.c.
> 
> This prepares the support of PAT without needing MTRR support by

"Prepare PAT support without ... "

> moving the main function body of generic_set_all() into cacheinfo.c
> while renaming it to cache_cpu_init(). The MTRR specific parts are
> moved into a dedicated small function called by cache_cpu_init().
> The PAT and MTRR specific functions are called conditionally based
> on the cache_generic bit settings.
> 
> The setting of smp_changes_mask is merged into the (new) function

... and so on. So the commit message should not say what you're doing -
that should be visible from the diff itself. It should talk more about
the *why* you're doing it.

...

> diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
> index 47e2c72fa8a4..36378604ec61 100644
> --- a/arch/x86/kernel/cpu/cacheinfo.c
> +++ b/arch/x86/kernel/cpu/cacheinfo.c
> @@ -1120,3 +1120,22 @@ void cache_enable(void) __releases(cache_disable_lock)
>  
>  	raw_spin_unlock(&cache_disable_lock);
>  }
> +
> +void cache_cpu_init(void)
> +{
> +	unsigned long flags;
> +
> +	local_irq_save(flags);
> +	cache_disable();
> +
> +	/* Set MTRR state. */

Yeah, and when you name the functions properly as you've done, you don't
really need those comments anymore either.

> +	if (cache_generic & CACHE_GENERIC_MTRR)
> +		mtrr_generic_set_state();
> +
> +	/* Set PAT. */

And this one.

> +	if (cache_generic & CACHE_GENERIC_PAT)
> +		pat_init();
> +
> +	cache_enable();
> +	local_irq_restore(flags);
> +}
> diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
> index 5ed397f03a87..fc7b2d952737 100644
> --- a/arch/x86/kernel/cpu/mtrr/generic.c
> +++ b/arch/x86/kernel/cpu/mtrr/generic.c
> @@ -731,30 +731,19 @@ void mtrr_enable(void)
>  	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
>  }
>  
> -static void generic_set_all(void)
> +void mtrr_generic_set_state(void)
>  {
>  	unsigned long mask, count;
> -	unsigned long flags;
> -
> -	local_irq_save(flags);
> -	cache_disable();
>  
>  	/* Actually set the state */
>  	mask = set_mtrr_state();
>  
> -	/* also set PAT */
> -	pat_init();
> -
> -	cache_enable();
> -	local_irq_restore(flags);
> -
>  	/* Use the atomic bitops to update the global mask */
>  	for (count = 0; count < sizeof(mask) * 8; ++count) {
>  		if (mask & 0x01)
>  			set_bit(count, &smp_changes_mask);
>  		mask >>= 1;
>  	}
> -
>  }
>  
>  /**
> @@ -854,7 +843,7 @@ int positive_have_wrcomb(void)
>   * Generic structure...
>   */
>  const struct mtrr_ops generic_mtrr_ops = {
> -	.set_all		= generic_set_all,
> +	.set_all		= cache_cpu_init,

I was gonna say that this looks weird - a set_all function pointer
assigned to an init function - but that changes in the next patch.

But yeah, I like where this is going.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 07/10] x86/mtrr: simplify mtrr_bp_init()
  2022-09-08  8:49 ` [PATCH v3 07/10] x86/mtrr: simplify mtrr_bp_init() Juergen Gross
@ 2022-09-26 18:40   ` Borislav Petkov
  0 siblings, 0 replies; 41+ messages in thread
From: Borislav Petkov @ 2022-09-26 18:40 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Thu, Sep 08, 2022 at 10:49:11AM +0200, Juergen Gross wrote:
> diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
> index 9609a0d235f8..956838bb4481 100644
> --- a/arch/x86/kernel/cpu/mtrr/mtrr.c
> +++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
> @@ -761,13 +761,10 @@ void __init mtrr_bp_init(void)
>  			mtrr_enabled = get_mtrr_state();
>  
>  			if (mtrr_enabled) {
> -				mtrr_bp_pat_init();
>  				cache_generic |= CACHE_GENERIC_MTRR |
>  						 CACHE_GENERIC_PAT;
> -			}
> -
> -			if (mtrr_cleanup(phys_addr)) {
> -				changed_by_mtrr_cleanup = 1;
> +				changed_by_mtrr_cleanup =
> +					mtrr_cleanup(phys_addr);

Just let those lines stick out.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-08  8:49 ` [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init Juergen Gross
@ 2022-09-26 21:11   ` Borislav Petkov
  2022-09-27  8:57     ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-26 21:11 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Thu, Sep 08, 2022 at 10:49:12AM +0200, Juergen Gross wrote:
> -void set_mtrr_aps_delayed_init(void)
> -{
> -	if (!cache_generic)
> -		return;
> -
> -	mtrr_aps_delayed_init = true;
> -}
> -

Except that you've removed the accessors and made that bool global.
Which is less pretty than it was before...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-26 21:11   ` Borislav Petkov
@ 2022-09-27  8:57     ` Juergen Gross
  2022-09-27 10:10       ` Borislav Petkov
  0 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-27  8:57 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 26.09.22 23:11, Borislav Petkov wrote:
> On Thu, Sep 08, 2022 at 10:49:12AM +0200, Juergen Gross wrote:
>> -void set_mtrr_aps_delayed_init(void)
>> -{
>> -	if (!cache_generic)
>> -		return;
>> -
>> -	mtrr_aps_delayed_init = true;
>> -}
>> -
> 
> Except that you've removed the accessors and made that bool global.
> Which is less pretty than it was before...
> 

The accessor would now only need to set the bool, while it had at least
some logic before.

TBH I don't see the point of having an accessor which is just setting a
variable to "true". But if you like it better, I can keep it.
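
I.e. the whole accessor would boil down to this (a sketch, with the
name following the series' cache_* scheme):

	void set_cache_aps_delayed_init(void)
	{
		cache_aps_delayed_init = true;
	}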

Another possibility would be to move the arch_thaw_secondary_cpus_begin()
and arch_thaw_secondary_cpus_end() functions to cacheinfo.c, resulting
in only a single place outside of cacheinfo.c setting the variable (in
theory the arch_thaw_secondary_cpus_*() functions could just be redefined
to the accessor and cache_aps_init(), but this would be rather hacky IMO).
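
The move would look something like this (hypothetical sketch):

	/* in cacheinfo.c */
	void arch_thaw_secondary_cpus_begin(void)
	{
		cache_aps_delayed_init = true;
	}

	void arch_thaw_secondary_cpus_end(void)
	{
		cache_aps_init();
	}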


Juergen


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-27  8:57     ` Juergen Gross
@ 2022-09-27 10:10       ` Borislav Petkov
  2022-09-27 10:14         ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-27 10:10 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Tue, Sep 27, 2022 at 10:57:37AM +0200, Juergen Gross wrote:
> TBH I don't see the point of having an accessor which is just setting a
> variable to "true". But if you like it better, I can keep it.

Accessors are always better, no matter how silly. :)

But, in trying to grok your next patch - you really should split those
more complex ones because they're a pain to review - I'm starting to
wonder whether we could even remove mtrr_aps_delayed_init and make the
delayed init the default.

Because, AFAICT, set_mtrr_aps_delayed_init() is called by default
by native_smp_prepare_cpus(). Which is called by hyperv and
arch/x86/xen/smp_hvm.c.

The only one that's not calling it is arch/x86/xen/smp_pv.c but that
thing doesn't support MTRRs in the first place, right?

Which means, it doesn't need delayed MTRR init anyway.

Which would then mean that this would simplify this ugly logic even more.

Or am I missing an angle?

It is possible in this nutty code.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-27 10:10       ` Borislav Petkov
@ 2022-09-27 10:14         ` Juergen Gross
  2022-09-27 11:19           ` Borislav Petkov
  0 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-27 10:14 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 27.09.22 12:10, Borislav Petkov wrote:
> On Tue, Sep 27, 2022 at 10:57:37AM +0200, Juergen Gross wrote:
>> TBH I don't see the point of having an accessor which is just setting a
>> variable to "true". But if you like it better, I can keep it.
> 
> Accessors are always better, no matter how silly. :)

Okay, then I'll keep it.

> But, in trying to grok your next patch - you really should split those
> more complex ones because they're a pain to review - I'm starting to
> wonder whether we could even remove mtrr_aps_delayed_init and make the
> delayed init the default.
> 
> Because, AFAICT, set_mtrr_aps_delayed_init() is called by default
> by native_smp_prepare_cpus(). Which is called by hyperv and
> arch/x86/xen/smp_hvm.c.
> 
> The only one that's not calling it is arch/x86/xen/smp_pv.c but that
> thing doesn't support MTRRs in the first place, right?

Correct.

> Which means, it doesn't need delayed MTRR init anyway.
> 
> Which would then mean that this would simplify this ugly logic even more.
> 
> Or am I missing an angle?

Yes: cpu hotplug.


Juergen


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-27 10:14         ` Juergen Gross
@ 2022-09-27 11:19           ` Borislav Petkov
  2022-09-27 11:25             ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-27 11:19 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Tue, Sep 27, 2022 at 12:14:42PM +0200, Juergen Gross wrote:
> Yes: cpu hotplug.

You might need to elaborate here.

Because I see mtrr_ap_init() on the AP hotplug path:

native_cpu_up->
do_boot_cpu->
start_secondary->
smp_callin->
smp_store_cpu_info->
identify_secondary_cpu->
mtrr_ap_init

Which then means that we could check in mtrr_ap_init() if we're on the
hotplug path and still get rid of that stupid bool...

Close?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-27 11:19           ` Borislav Petkov
@ 2022-09-27 11:25             ` Juergen Gross
  2022-09-27 12:13               ` Borislav Petkov
  0 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-27 11:25 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 27.09.22 13:19, Borislav Petkov wrote:
> On Tue, Sep 27, 2022 at 12:14:42PM +0200, Juergen Gross wrote:
>> Yes: cpu hotplug.
> 
> You might need to elaborate here.
> 
> Because I see mtrr_ap_init() on the AP hotplug path:
> 
> native_cpu_up->
> do_boot_cpu->
> start_secondary->
> smp_callin->
> smp_store_cpu_info->
> identify_secondary_cpu->
> mtrr_ap_init
> 
> Which then means that we could check in mtrr_ap_init() if we're on the
> hotplug path and still get rid of that stupid bool...
> 
> Close?
> 

You mean by replacing it with "(system_state != SYSTEM_RUNNING)"?
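
I.e. something like this (untested sketch):

	void mtrr_ap_init(void)
	{
		if (!mtrr_enabled() || !use_intel())
			return;

		/* Still booting, the delayed init will handle this AP. */
		if (system_state != SYSTEM_RUNNING)
			return;

		set_mtrr_from_inactive_cpu(~0U, 0, 0, 0);
	}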


Juergen


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-27 11:25             ` Juergen Gross
@ 2022-09-27 12:13               ` Borislav Petkov
  2022-09-27 12:21                 ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-27 12:13 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Tue, Sep 27, 2022 at 01:25:47PM +0200, Juergen Gross wrote:
> You mean by replacing it with "(system_state != SYSTEM_RUNNING)"?

Right, or maybe even something more elegant. I've been meaning to ask
tglx about it as I needed it for the microcode loader too.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-27 12:13               ` Borislav Petkov
@ 2022-09-27 12:21                 ` Juergen Gross
  2022-09-27 23:16                   ` Borislav Petkov
  0 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-27 12:21 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 27.09.22 14:13, Borislav Petkov wrote:
> On Tue, Sep 27, 2022 at 01:25:47PM +0200, Juergen Gross wrote:
>> You mean by replacing it with "(system_state != SYSTEM_RUNNING)"?
> 
> Right, or maybe even something more elegant. I've been meaning to ask
> tglx about it as I needed it for the microcode loader too.

So replacing the bool with "(system_state != SYSTEM_RUNNING)" is fine
with you right now? We can later switch that to the "more elegant"
solution when it shows up.


Juergen



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-27 12:21                 ` Juergen Gross
@ 2022-09-27 23:16                   ` Borislav Petkov
  2022-09-28  5:30                     ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-27 23:16 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Tue, Sep 27, 2022 at 02:21:17PM +0200, Juergen Gross wrote:
> So replacing the bool with "(system_state != SYSTEM_RUNNING)" is fine
> with you right now? We can later switch that to the "more elegant"
> solution when it shows up.

Ok, I think I have something. And it was staring me straight in the
face but I didn't see it: the MTRR code needs a hotplug notifier. In
that notifier it can do the immediate, i.e., non-delayed init while the
delayed init becomes the default, see below.

And ignore the pr_info debugging gunk pls.

mtrr_ap_init() becomes the notifier callback. It doesn't need to be
called in identify_secondary_cpu() anymore as in the init case that
function doesn't do anything - delayed=true - and in the hotplug case
the notifier runs.

mtrr_aps_init() - "aps" in plural - does the delayed init after all CPUs
have been brought online after the box has booted. That might need some
renaming.

And yes, there's a lot more to cleanup after this. This code has grown
wart after wart over the years...

Fun.

---
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 76d726074c16..1a3dad244bba 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -42,8 +42,6 @@ extern int mtrr_add_page(unsigned long base, unsigned long size,
 extern int mtrr_del(int reg, unsigned long base, unsigned long size);
 extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
 extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
-extern void mtrr_ap_init(void);
-extern void set_mtrr_aps_delayed_init(void);
 extern void mtrr_aps_init(void);
 extern void mtrr_bp_restore(void);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
@@ -83,8 +81,6 @@ static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
 static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
 {
 }
-#define mtrr_ap_init() do {} while (0)
-#define set_mtrr_aps_delayed_init() do {} while (0)
 #define mtrr_aps_init() do {} while (0)
 #define mtrr_bp_restore() do {} while (0)
 #  endif
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 3e508f239098..deef1b5b27cc 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1948,7 +1948,6 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
 #ifdef CONFIG_X86_32
 	enable_sep_cpu();
 #endif
-	mtrr_ap_init();
 	validate_apic_and_package_id(c);
 	x86_spec_ctrl_setup_ap();
 	update_srbds_msr();
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 2746cac9d8a9..abbf7cb8a430 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -69,7 +69,6 @@ unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
 static DEFINE_MUTEX(mtrr_mutex);
 
 u64 size_or_mask, size_and_mask;
-static bool mtrr_aps_delayed_init;
 
 static const struct mtrr_ops *mtrr_ops[X86_VENDOR_NUM] __ro_after_init;
 
@@ -176,7 +175,7 @@ static int mtrr_rendezvous_handler(void *info)
 	if (data->smp_reg != ~0U) {
 		mtrr_if->set(data->smp_reg, data->smp_base,
 			     data->smp_size, data->smp_type);
-	} else if (mtrr_aps_delayed_init || !cpu_online(smp_processor_id())) {
+	} else if (!cpu_online(smp_processor_id())) {
 		mtrr_if->set_all();
 	}
 	return 0;
@@ -784,13 +783,16 @@ void __init mtrr_bp_init(void)
 	}
 }
 
-void mtrr_ap_init(void)
+static int mtrr_ap_init(unsigned int cpu)
 {
+	pr_info("%s: single AP entry, use_intel: %d, mtrr_enabled: %d, mtrr_aps_delayed_init\n",
+		__func__, use_intel(), mtrr_enabled());
+
 	if (!mtrr_enabled())
-		return;
+		return 1;
 
-	if (!use_intel() || mtrr_aps_delayed_init)
-		return;
+	if (!use_intel())
+		return 1;
 
 	/*
 	 * Ideally we should hold mtrr_mutex here to avoid mtrr entries
@@ -806,6 +808,8 @@ void mtrr_ap_init(void)
 	 *      lock to prevent mtrr entry changes
 	 */
 	set_mtrr_from_inactive_cpu(~0U, 0, 0, 0);
+
+	return 0;
 }
 
 /**
@@ -820,37 +824,24 @@ void mtrr_save_state(void)
 		return;
 
 	first_cpu = cpumask_first(cpu_online_mask);
-	smp_call_function_single(first_cpu, mtrr_save_fixed_ranges, NULL, 1);
-}
 
-void set_mtrr_aps_delayed_init(void)
-{
-	if (!mtrr_enabled())
-		return;
-	if (!use_intel())
-		return;
+	pr_info("%s: first_cpu: %d\n", __func__, first_cpu);
 
-	mtrr_aps_delayed_init = true;
+	smp_call_function_single(first_cpu, mtrr_save_fixed_ranges, NULL, 1);
 }
 
 /*
- * Delayed MTRR initialization for all AP's
+ * Delayed MTRR initialization for all APs
  */
 void mtrr_aps_init(void)
 {
-	if (!use_intel() || !mtrr_enabled())
-		return;
+	pr_info("%s: entry, use_intel: %d, mtrr_enabled: %d, mtrr_aps_delayed_init\n",
+		__func__, use_intel(), mtrr_enabled());
 
-	/*
-	 * Check if someone has requested the delay of AP MTRR initialization,
-	 * by doing set_mtrr_aps_delayed_init(), prior to this point. If not,
-	 * then we are done.
-	 */
-	if (!mtrr_aps_delayed_init)
+	if (!use_intel() || !mtrr_enabled())
 		return;
 
 	set_mtrr(~0U, 0, 0, 0);
-	mtrr_aps_delayed_init = false;
 }
 
 void mtrr_bp_restore(void)
@@ -869,6 +860,10 @@ static int __init mtrr_init_finialize(void)
 	if (use_intel()) {
 		if (!changed_by_mtrr_cleanup)
 			mtrr_state_warn();
+
+		cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/mtrr:online",
+				  mtrr_ap_init, NULL);
+
 		return 0;
 	}
 
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index f24227bc3220..171acef35201 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1428,7 +1428,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 
 	uv_system_init();
 
-	set_mtrr_aps_delayed_init();
+	pr_info("%s: set_mtrr_aps_delayed_init\n", __func__);
 
 	smp_quirk_init_udelay();
 
@@ -1439,7 +1439,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 
 void arch_thaw_secondary_cpus_begin(void)
 {
-	set_mtrr_aps_delayed_init();
+	pr_info("%s: set_mtrr_aps_delayed_init\n", __func__);
 }
 
 void arch_thaw_secondary_cpus_end(void)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index bbad5e375d3b..fc14601b908c 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -591,6 +591,8 @@ static int bringup_cpu(unsigned int cpu)
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
+	pr_info("%s: CPU%d\n", __func__, cpu);
+
 	/*
 	 * Reset stale stack state from the last time this CPU was online.
 	 */


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-27 23:16                   ` Borislav Petkov
@ 2022-09-28  5:30                     ` Juergen Gross
  2022-09-28  6:16                       ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-28  5:30 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 28.09.22 01:16, Borislav Petkov wrote:
> On Tue, Sep 27, 2022 at 02:21:17PM +0200, Juergen Gross wrote:
>> So replacing the bool with "(system_state != SYSTEM_RUNNING)" is fine
>> with you right now? We can later switch that to the "more elegant"
>> solution when it shows up.
> 
> Ok, I think I have something. And it was staring me straight in the
> face but I didn't see it: the MTRR code needs a hotplug notifier. In
> that notifier it can do the immediate, i.e., non-delayed init while the
> delayed init becomes the default, see below.
> 
> And ignore the pr_info debugging gunk pls.
> 
> mtrr_ap_init() becomes the notifier callback. It doesn't need to be
> called in identify_secondary_cpu() anymore as in the init case that
> function doesn't do anything - delayed=true - and in the hotplug case
> the notifier runs.

Are you sure the hotplug notifier doesn't get called in the boot and in the
resume cases? I don't see how those calls would be skipped or would end up
doing nothing.


Juergen


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-28  5:30                     ` Juergen Gross
@ 2022-09-28  6:16                       ` Juergen Gross
  2022-09-28 10:48                         ` Borislav Petkov
  0 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-28  6:16 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 28.09.22 07:30, Juergen Gross wrote:
> On 28.09.22 01:16, Borislav Petkov wrote:
>> On Tue, Sep 27, 2022 at 02:21:17PM +0200, Juergen Gross wrote:
>>> So replacing the bool with "(system_state != SYSTEM_RUNNING)" is fine
>>> with you right now? We can later switch that to the "more elegant"
>>> solution when it shows up.
>>
>> Ok, I think I have something. And it was staring me straight in the
>> face but I didn't see it: the MTRR code needs a hotplug notifier. In
>> that notifier it can do the immediate, i.e., non-delayed init while the
>> delayed init becomes the default, see below.
>>
>> And ignore the pr_info debugging gunk pls.
>>
>> mtrr_ap_init() becomes the notifier callback. It doesn't need to be
>> called in identify_secondary_cpu() anymore as in the init case that
>> function doesn't do anything - delayed=true - and in the hotplug case
>> the notifier runs.
> 
> Are you sure the hotplug notifier doesn't get called in the boot and in the
> resume cases? I don't see how those calls would be skipped or would end up
> doing nothing.

In case my suspicion is correct: this can still be solved by adding the
hotplug notifier only in mtrr_aps_init(), and removing it again in
arch_thaw_secondary_cpus_begin().


Juergen



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 03/10] x86/mtrr: replace use_intel() with a local flag
  2022-09-12  9:10     ` Juergen Gross
  2022-09-19 19:10       ` Borislav Petkov
@ 2022-09-28 10:17       ` Juergen Gross
  1 sibling, 0 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-28 10:17 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 12.09.22 11:10, Juergen Gross wrote:
> On 11.09.22 12:16, Borislav Petkov wrote:
>> On Thu, Sep 08, 2022 at 10:49:07AM +0200, Juergen Gross wrote:
>>> diff --git a/arch/x86/include/asm/cacheinfo.h b/arch/x86/include/asm/cacheinfo.h
>>> index 86b2e0dcc4bf..1aeafa9888f7 100644
>>> --- a/arch/x86/include/asm/cacheinfo.h
>>> +++ b/arch/x86/include/asm/cacheinfo.h
>>> @@ -2,6 +2,11 @@
>>>   #ifndef _ASM_X86_CACHEINFO_H
>>>   #define _ASM_X86_CACHEINFO_H
>>> +/* Kernel controls MTRR and/or PAT MSRs. */
>>> +extern unsigned int cache_generic;
>>
>> So this should be called something more descriptive like
>>
>>     memory_caching_types
> 
> In the end this variable doesn't specify which caching types are available,
> but the ways to select/control the caching types.
> 
> So what about "memory_caching_select" or "memory_caching_control" instead?
> 
>> or so to denote that this is a bitfield of supported memory caching
>> technologies. The code then would read as
>>
>>     if (memory_caching_types & CACHE_MTRR)
>>
>> The name's still not optimal tho - needs more brooding over.
>>
>>> +#define CACHE_GENERIC_MTRR 0x01
>>> +#define CACHE_GENERIC_PAT  0x02
>>
>> And those should be CACHE_{MTRR,PAT}.
> 
> Fine with me.
> 
>>>   void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, int cpu);
>>>   void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c, int cpu);
>>> diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
>>> index 66556833d7af..3b05d3ade7a6 100644
>>> --- a/arch/x86/kernel/cpu/cacheinfo.c
>>> +++ b/arch/x86/kernel/cpu/cacheinfo.c
>>> @@ -35,6 +35,9 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
>>>   /* Shared L2 cache maps */
>>>   DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
>>> +/* Kernel controls MTRR and/or PAT MSRs. */
>>> +unsigned int cache_generic;
>>
>> This should either be __ro_after_init and initialized to 0 or you need
>> accessors...
> 
> Okay.
> 
>>
>>>   u32 num_var_ranges;
>>> -static bool __mtrr_enabled;
>>> -
>>> -static bool mtrr_enabled(void)
>>> -{
>>> -    return __mtrr_enabled;
>>> -}
>>> +static bool mtrr_enabled;
>>
>> Hmm, I don't like this. There's way too many boolean flags in the mtrr
>> code. There's mtrr_state.enabled too. ;-\
>>
>> Can we set (or clear) X86_FEATURE_MTRR to denote MTRR enablement status
>> and get rid of one more boolean flag?
> 
> I'll have a look.

Hmm, this might be a little bit risky.

It can be done, but then X86_FEATURE_MTRR could be set even for cpus
NOT supporting it (the 32-bit special cases AMD, CENTAUR, CYRIX).

So we have the following alternatives:

- do the switch to X86_FEATURE_MTRR, risking code breakage for later
   code changes which query X86_FEATURE_MTRR and assume the MTRR MSRs
   are available

- keep the current bool

- replace the bool with mtrr_if != NULL

- add a new synthetic feature, e.g. X86_FEATURE_MTRR_ENABLED (which in
   fact would be just a replacement of the current bool)

My preference would be the replacement with mtrr_if != NULL.
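
That replacement would just be (sketch):

	static bool mtrr_enabled(void)
	{
		return mtrr_if != NULL;
	}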


Juergen


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-28  6:16                       ` Juergen Gross
@ 2022-09-28 10:48                         ` Borislav Petkov
  2022-09-28 11:14                           ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-28 10:48 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Wed, Sep 28, 2022 at 08:16:53AM +0200, Juergen Gross wrote:
> > Are you sure the hotplug notifier doesn't get called in the boot and in the

It doesn't because it gets registered after smp_init()...

> > resume cases?

... but it gets called during resume because by that time the notifier
has been registered already. See my notes at the end of this mail on
what the code does currently.

> In case my suspicion is correct: this can still be solved by adding the
> hotplug notifier only in mtrr_aps_init(), and removing it again in
> arch_thaw_secondary_cpus_begin().

Pretty much. Yeah, we still need a bool. ;-(

But that bool has a much smaller scope and it is perfectly clear what it
does. And I've added a comment. Could've used comments for the delayed
init thing.

Anyway, it gets set in a thaw callback (I mean, might as well, since
we call into the MTRR code anyway). I probably can make this even
cleaner and not do any bool if I could query in the notifier whether I'm
resuming...

Thx.

---
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index 76d726074c16..86b8009d2429 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -42,9 +42,8 @@ extern int mtrr_add_page(unsigned long base, unsigned long size,
 extern int mtrr_del(int reg, unsigned long base, unsigned long size);
 extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
 extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
-extern void mtrr_ap_init(void);
-extern void set_mtrr_aps_delayed_init(void);
 extern void mtrr_aps_init(void);
+extern void mtrr_aps_thaw(void);
 extern void mtrr_bp_restore(void);
 extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
 extern int amd_special_default_mtrr(void);
@@ -83,9 +82,8 @@ static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
 static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
 {
 }
-#define mtrr_ap_init() do {} while (0)
-#define set_mtrr_aps_delayed_init() do {} while (0)
 #define mtrr_aps_init() do {} while (0)
+#define mtrr_aps_thaw() do {} while (0)
 #define mtrr_bp_restore() do {} while (0)
 #  endif
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 3e508f239098..deef1b5b27cc 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1948,7 +1948,6 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
 #ifdef CONFIG_X86_32
 	enable_sep_cpu();
 #endif
-	mtrr_ap_init();
 	validate_apic_and_package_id(c);
 	x86_spec_ctrl_setup_ap();
 	update_srbds_msr();
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
index 2746cac9d8a9..c4089fd2b477 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -69,7 +69,7 @@ unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
 static DEFINE_MUTEX(mtrr_mutex);
 
 u64 size_or_mask, size_and_mask;
-static bool mtrr_aps_delayed_init;
+static bool ap_notifier_disabled;
 
 static const struct mtrr_ops *mtrr_ops[X86_VENDOR_NUM] __ro_after_init;
 
@@ -176,7 +176,7 @@ static int mtrr_rendezvous_handler(void *info)
 	if (data->smp_reg != ~0U) {
 		mtrr_if->set(data->smp_reg, data->smp_base,
 			     data->smp_size, data->smp_type);
-	} else if (mtrr_aps_delayed_init || !cpu_online(smp_processor_id())) {
+	} else if (!cpu_online(smp_processor_id())) {
 		mtrr_if->set_all();
 	}
 	return 0;
@@ -784,13 +784,16 @@ void __init mtrr_bp_init(void)
 	}
 }
 
-void mtrr_ap_init(void)
+static int mtrr_ap_init(unsigned int cpu)
 {
 	if (!mtrr_enabled())
-		return;
+		return 1;
 
-	if (!use_intel() || mtrr_aps_delayed_init)
-		return;
+	if (!use_intel())
+		return 1;
+
+	if (ap_notifier_disabled)
+		return 0;
 
 	/*
 	 * Ideally we should hold mtrr_mutex here to avoid mtrr entries
@@ -806,6 +809,8 @@ void mtrr_ap_init(void)
 	 *      lock to prevent mtrr entry changes
 	 */
 	set_mtrr_from_inactive_cpu(~0U, 0, 0, 0);
+
+	return 0;
 }
 
 /**
@@ -823,34 +828,26 @@ void mtrr_save_state(void)
 	smp_call_function_single(first_cpu, mtrr_save_fixed_ranges, NULL, 1);
 }
 
-void set_mtrr_aps_delayed_init(void)
-{
-	if (!mtrr_enabled())
-		return;
-	if (!use_intel())
-		return;
-
-	mtrr_aps_delayed_init = true;
-}
-
 /*
- * Delayed MTRR initialization for all AP's
+ * Delayed MTRR initialization for all APs
  */
 void mtrr_aps_init(void)
 {
 	if (!use_intel() || !mtrr_enabled())
 		return;
 
-	/*
-	 * Check if someone has requested the delay of AP MTRR initialization,
-	 * by doing set_mtrr_aps_delayed_init(), prior to this point. If not,
-	 * then we are done.
-	 */
-	if (!mtrr_aps_delayed_init)
-		return;
-
 	set_mtrr(~0U, 0, 0, 0);
-	mtrr_aps_delayed_init = false;
+	ap_notifier_disabled = false;
+}
+
+/*
+ * Disable the AP notifier temporarily during resume. It is supposed to be active only
+ * during CPU hotplug as during resume mtrr_aps_init() takes care of the MTRR
+ * programming on all CPUs.
+ */
+void mtrr_aps_thaw(void)
+{
+	ap_notifier_disabled = true;
 }
 
 void mtrr_bp_restore(void)
@@ -869,6 +866,10 @@ static int __init mtrr_init_finialize(void)
 	if (use_intel()) {
 		if (!changed_by_mtrr_cleanup)
 			mtrr_state_warn();
+
+		cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/mtrr:online",
+				  mtrr_ap_init, NULL);
+
 		return 0;
 	}
 
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index f24227bc3220..b90780dab88a 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1428,8 +1428,6 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 
 	uv_system_init();
 
-	set_mtrr_aps_delayed_init();
-
 	smp_quirk_init_udelay();
 
 	speculative_store_bypass_ht_init();
@@ -1439,7 +1437,7 @@ void __init native_smp_prepare_cpus(unsigned int max_cpus)
 
 void arch_thaw_secondary_cpus_begin(void)
 {
-	set_mtrr_aps_delayed_init();
+	mtrr_aps_thaw();
 }
 
 void arch_thaw_secondary_cpus_end(void)


Notes:
------

Boot sequence:

BSP:

[    0.272801] smpboot: native_smp_prepare_cpus: set_mtrr_aps_delayed_init

APs:

[    0.287190] mtrr_save_state: first_cpu: 0
[    0.287724] x86: Booting SMP configuration:
[    0.290292] .... node  #0, CPUs:        #1
[    0.061135] mtrr_ap_init: single AP entry, use_intel: 1, mtrr_enabled: 1, mtrr_aps_delayed_init: 1

 -> set_mtrr_from_inactive_cpu() gets skipped.

After all APs booted:

[    1.544506] mtrr_aps_init: entry, use_intel: 1, mtrr_enabled: 1, mtrr_aps_delayed_init: 1

 -> set_mtrr()

hotplug:

[  206.112651] smpboot: CPU 11 is now offline
[  208.286030] bringup_cpu: CPU11
[  208.286611] mtrr_save_state: first_cpu: 0
[  208.287416] smpboot: Booting Node 0 Processor 11 APIC 0xb
[  206.116567] mtrr_ap_init: single AP entry, use_intel: 1, mtrr_enabled: 1, mtrr_aps_delayed_init: 0

 -> set_mtrr_from_inactive_cpu()

suspend/resume:

BSP:

[  270.586643] smpboot: arch_thaw_secondary_cpus_begin: set_mtrr_aps_delayed_init

APs:

[  270.587947] bringup_cpu: CPU1
[  270.588470] mtrr_save_state: first_cpu: 0
[  270.589207] x86: Booting SMP configuration:
[  270.597971] smpboot: Booting Node 0 Processor 1 APIC 0x1
[  270.530418] mtrr_ap_init: single AP entry, use_intel: 1, mtrr_enabled: 1, mtrr_aps_delayed_init: 1

After all APs booted:

[  270.694168] mtrr_aps_init: entry, use_intel: 1, mtrr_enabled: 1, mtrr_aps_delayed_init: 1
[  270.696923] ACPI: PM: Waking up from system sleep state S3


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-28 10:48                         ` Borislav Petkov
@ 2022-09-28 11:14                           ` Juergen Gross
  2022-09-28 11:22                             ` Borislav Petkov
  0 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-28 11:14 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 28.09.22 12:48, Borislav Petkov wrote:
> On Wed, Sep 28, 2022 at 08:16:53AM +0200, Juergen Gross wrote:
>>> Are you sure the hotplug notifier doesn't get called in the boot and in the
> 
> It doesn't because it gets registered after smp_init()...
> 
>>> resume cases?
> 
> ... but it gets called during resume because by that time the notifier
> has been registered already. See my notes at the end of this mail of
> what the code does currently.
> 
>> In case my suspicion is correct: this can still be solved by adding the
>> hotplug notifier only in mtrr_aps_init(), and removing it again in
>> arch_thaw_secondary_cpus_begin().
> 
> Pretty much. Yeah, we still need a bool. ;-(

No, we don't.

Using basically your patch, but with

+	mtrr_online = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
+						"x86/mtrr:online",
+						mtrr_ap_init, NULL);

moved to the end of mtrr_aps_init(), and:

+void mtrr_aps_thaw(void)
+{
+	cpuhp_remove_state_nocalls(mtrr_online);
+}
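
For clarity, the combined result would look roughly like this (a sketch
only: mtrr_online is a new static holding the dynamic state returned by
cpuhp_setup_state_nocalls(), error handling omitted):

static int mtrr_online;

void mtrr_aps_init(void)
{
	if (!use_intel() || !mtrr_enabled())
		return;

	set_mtrr(~0U, 0, 0, 0);

	/* From here on each newly onlined AP programs its MTRRs itself. */
	mtrr_online = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
						"x86/mtrr:online",
						mtrr_ap_init, NULL);
}

void mtrr_aps_thaw(void)
{
	/* Resume: drop the callback, mtrr_aps_init() will redo all APs. */
	cpuhp_remove_state_nocalls(mtrr_online);
}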


Juergen

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-28 11:14                           ` Juergen Gross
@ 2022-09-28 11:22                             ` Borislav Petkov
  2022-09-28 13:43                               ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-28 11:22 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Wed, Sep 28, 2022 at 01:14:11PM +0200, Juergen Gross wrote:
> No, we don't.
> 
> Using basically your patch, but with
> 
> +	mtrr_online = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
> +						"x86/mtrr:online",
> +						mtrr_ap_init, NULL);
> 
> moved to the end of mtrr_aps_init(), and:
> 
> +void mtrr_aps_thaw(void)
> +{
> +	cpuhp_remove_state_nocalls(mtrr_online);
> +}

Yes, so you said. I'm not sure I like toggling the notifier
registration on and off like that.

Optimally, I'd like to be able to query the suspend code whether it is
in the process of resuming.

This here:


static int resume_target_kernel(bool platform_mode)
{

...

 Enable_irqs:
        system_state = SYSTEM_RUNNING;
        local_irq_enable();
 
 Enable_cpus:
        pm_sleep_enable_secondary_cpus();


but being able to do:

        pm_sleep_enable_secondary_cpus();
	system_state = SYSTEM_RUNNING | SYSTEM_RUNNING_APS_UP;

which can't work, obviously. But something like that.
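
Spelled out: system_state is a plain enum with sequential values, not a
bitmask, so there is no bit to OR in. A hypothetical sketch (the flag
and its name are made up, nothing like it exists today) of what would
be needed instead:

static bool aps_fully_up;	/* hypothetical flag, for illustration */

 Enable_cpus:
	pm_sleep_enable_secondary_cpus();
	/* All APs are back up; interested code could query the flag now. */
	aps_fully_up = true;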

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-28 11:22                             ` Borislav Petkov
@ 2022-09-28 13:43                               ` Juergen Gross
  2022-09-28 16:12                                 ` Borislav Petkov
  0 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-28 13:43 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 28.09.22 13:22, Borislav Petkov wrote:
> On Wed, Sep 28, 2022 at 01:14:11PM +0200, Juergen Gross wrote:
>> No, we don't.
>>
>> Using basically your patch, but with
>>
>> +	mtrr_online = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
>> +						"x86/mtrr:online",
>> +						mtrr_ap_init, NULL);
>>
>> moved to the end of mtrr_aps_init(), and:
>>
>> +void mtrr_aps_thaw(void)
>> +{
>> +	cpuhp_remove_state_nocalls(mtrr_online);
>> +}
> 
> Yes, so you said. I'm not sure I like toggling the notifier
> registration on and off like that.

Yeah, especially with having to remember the slot.

Would you feel better with adding a new enum member CPUHP_AP_CACHECTRL_ONLINE?

This would avoid a possible source of failure during resume in case no slot
for CPUHP_AP_ONLINE_DYN is found (quite improbable, but in theory possible).
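
For reference, the two variants would differ roughly like this (a
sketch, error handling abridged; the fixed-slot name is the one
proposed above and doesn't exist yet):

	/*
	 * Dynamic slot: the allocated state is the (positive) return
	 * value and must be remembered for the later removal; finding
	 * a free slot can in theory fail.
	 */
	ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
					"x86/mtrr:online",
					mtrr_ap_init, NULL);
	if (ret < 0)
		return ret;
	mtrr_online = ret;

	/* Pre-reserved slot: returns 0 on success, nothing to remember. */
	ret = cpuhp_setup_state_nocalls(CPUHP_AP_CACHECTRL_ONLINE,
					"x86/cachectrl:online",
					mtrr_ap_init, NULL);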

> Optimally, I'd like to be able to query the suspend code whether it is
> in the process of resuming.
> 
> This here:
> 
> 
> static int resume_target_kernel(bool platform_mode)
> {
> 
> ...
> 
>   Enable_irqs:
>          system_state = SYSTEM_RUNNING;
>          local_irq_enable();
>   
>   Enable_cpus:
>          pm_sleep_enable_secondary_cpus();
> 
> 
> but being able to do:
> 
>          pm_sleep_enable_secondary_cpus();
> 	system_state = SYSTEM_RUNNING | SYSTEM_RUNNING_APS_UP;
> 
> which can't work, obviously. But something like that.
> 

You wouldn't want to do that there, as there are multiple places where
pm_sleep_enable_secondary_cpus() is being called. Additionally, not all
cases come in via pm_sleep_enable_secondary_cpus(); e.g. kernel_kexec()
calls suspend_enable_secondary_cpus() and wants the same handling.

arch_thaw_secondary_cpus_begin() and arch_thaw_secondary_cpus_end() are
the functions to mark start and end of the special region where the
delayed MTRR setup should happen.
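
For context, a trimmed sketch of the generic thaw path in kernel/cpu.c
(locking and error handling dropped) showing how the two hooks bracket
the AP bringup:

void thaw_secondary_cpus(void)
{
	int cpu;

	arch_thaw_secondary_cpus_begin();	/* x86: enter delayed-init mode */

	for_each_cpu(cpu, frozen_cpus)
		_cpu_up(cpu, 1, CPUHP_ONLINE);	/* APs come back online */

	arch_thaw_secondary_cpus_end();		/* x86: mtrr_aps_init() */
	cpumask_clear(frozen_cpus);
}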


Juergen

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-28 13:43                               ` Juergen Gross
@ 2022-09-28 16:12                                 ` Borislav Petkov
  2022-09-28 16:32                                   ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-28 16:12 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Wed, Sep 28, 2022 at 03:43:56PM +0200, Juergen Gross wrote:
> Would you feel better with adding a new enum member CPUHP_AP_CACHECTRL_ONLINE?
> 
> This would avoid a possible source of failure during resume in case no slot
> for CPUHP_AP_ONLINE_DYN is found (quite improbable, but in theory possible).

Let's keep that in the bag for the time when we get to cross that bridge.

> You wouldn't want to do that there, as there are multiple places where
> pm_sleep_enable_secondary_cpus() is being called.

We want all of them, I'd say. They're all some sort of suspend AFAICT.
But yes, if we get to do it, that would need a proper audit.

> Additionally, not all cases come in via pm_sleep_enable_secondary_cpus();
> e.g. kernel_kexec() calls suspend_enable_secondary_cpus() and wants the
> same handling.

Which means, more hairy.

> arch_thaw_secondary_cpus_begin() and arch_thaw_secondary_cpus_end() are
> the functions to mark start and end of the special region where the
> delayed MTRR setup should happen.

Yap, it seems like the best solution at the moment. Want me to do a
proper patch and test it on real hw?

:-)

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-28 16:12                                 ` Borislav Petkov
@ 2022-09-28 16:32                                   ` Juergen Gross
  2022-09-28 16:39                                     ` Borislav Petkov
  2022-09-29  8:26                                     ` Juergen Gross
  0 siblings, 2 replies; 41+ messages in thread
From: Juergen Gross @ 2022-09-28 16:32 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 28.09.22 18:12, Borislav Petkov wrote:
> On Wed, Sep 28, 2022 at 03:43:56PM +0200, Juergen Gross wrote:
>> Would you feel better with adding a new enum member CPUHP_AP_CACHECTRL_ONLINE?
>>
>> This would avoid a possible source of failure during resume in case no slot
>> for CPUHP_AP_ONLINE_DYN is found (quite improbable, but in theory possible).
> 
> Let's keep that in the bag for the time when we get to cross that bridge.
> 
>> You wouldn't want to do that there, as there are multiple places where
>> pm_sleep_enable_secondary_cpus() is being called.
> 
> We want all of them, I'd say. They're all some sort of suspend AFAICT.
> But yes, if we get to do it, that would need a proper audit.
> 
>> Additionally, not all cases come in via pm_sleep_enable_secondary_cpus();
>> e.g. kernel_kexec() calls suspend_enable_secondary_cpus() and wants the
>> same handling.
> 
> Which means, more hairy.
> 
>> arch_thaw_secondary_cpus_begin() and arch_thaw_secondary_cpus_end() are
>> the functions to mark start and end of the special region where the
>> delayed MTRR setup should happen.
> 
> Yap, it seems like the best solution at the moment. Want me to do a
> proper patch and test it on real hw?

I can do that.


Juergen


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-28 16:32                                   ` Juergen Gross
@ 2022-09-28 16:39                                     ` Borislav Petkov
  2022-09-29  8:26                                     ` Juergen Gross
  1 sibling, 0 replies; 41+ messages in thread
From: Borislav Petkov @ 2022-09-28 16:39 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Wed, Sep 28, 2022 at 06:32:20PM +0200, Juergen Gross wrote:
> I can do that.

Thx!

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-28 16:32                                   ` Juergen Gross
  2022-09-28 16:39                                     ` Borislav Petkov
@ 2022-09-29  8:26                                     ` Juergen Gross
  2022-09-30 11:55                                       ` Borislav Petkov
  1 sibling, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-29  8:26 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 28.09.22 18:32, Juergen Gross wrote:
> On 28.09.22 18:12, Borislav Petkov wrote:
>> On Wed, Sep 28, 2022 at 03:43:56PM +0200, Juergen Gross wrote:
>>> Would you feel better with adding a new enum member CPUHP_AP_CACHECTRL_ONLINE?
>>>
>>> This would avoid a possible source of failure during resume in case no slot
>>> for CPUHP_AP_ONLINE_DYN is found (quite improbable, but in theory possible).
>>
>> Let's keep that in the bag for the time when we get to cross that bridge.
>>
>>> You wouldn't want to do that there, as there are multiple places where
>>> pm_sleep_enable_secondary_cpus() is being called.
>>
>> We want all of them, I'd say. They're all some sort of suspend AFAICT.
>> But yes, if we get to do it, that would need a proper audit.
>>
>>> Additionally, not all cases come in via pm_sleep_enable_secondary_cpus();
>>> e.g. kernel_kexec() calls suspend_enable_secondary_cpus() and wants the
>>> same handling.
>>
>> Which means, more hairy.
>>
>>> arch_thaw_secondary_cpus_begin() and arch_thaw_secondary_cpus_end() are
>>> the functions to mark start and end of the special region where the
>>> delayed MTRR setup should happen.
>>
>> Yap, it seems like the best solution at the moment. Want me to do a
>> proper patch and test it on real hw?
> 
> I can do that.

Okay, let's define what is meant by "that", just to be on the same page.

The idea to use a hotplug callback seems rather risky IMHO. At least
CPUHP_AP_ONLINE_DYN seems to be way too late, as there are several device
drivers hooking in with the same or lower priority already. And device
drivers might rely on PAT settings in PTEs or on MTRRs being set up correctly.

Another problematic case is CPUHP_AP_MICROCODE_LOADER, which explicitly
does cache writeback and invalidation; that seems risky without a sane
PAT/MTRR state on the processor. It should be noted that the microcode
loader is registered via late_initcall(), so the initial boot isn't
affected by the delayed MTRR/PAT init.

So the only safe way to use a hotplug callback would be to have a rather
early pre-registered slot in enum cpuhp_state.

Regarding resume and kexec, I'm no longer sure doing the delayed MTRR/PAT
init is such a great idea. It might save some milliseconds, but the risks
mentioned above, e.g. with microcode loading, apply there as well.

So right now I'm inclined to err on the safe side by not adding any
cpu hotplug hook, but to use just the same "delayed AP init" flag as today,
just renaming it. This would leave the delayed MTRR/PAT init in place for
the resume and kexec cases; deferring the MTRR/PAT cleanup due to this
potential issue doesn't seem appropriate, as the cleanup doesn't change the
behavior here.

We should, however, have a discussion in parallel or later about whether
the whole thaw_secondary_cpus() handling is really okay or whether it
should be changed in some way.


Juergen

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-29  8:26                                     ` Juergen Gross
@ 2022-09-30 11:55                                       ` Borislav Petkov
  2022-09-30 13:11                                         ` Juergen Gross
  0 siblings, 1 reply; 41+ messages in thread
From: Borislav Petkov @ 2022-09-30 11:55 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Thu, Sep 29, 2022 at 10:26:59AM +0200, Juergen Gross wrote:
> So right now I'm inclined to err on the safe side by not adding any
> cpu hotplug hook, but to use just the same "delayed AP init" flag as today,
> just renaming it. This would leave the delayed MTRR/PAT init in place for
> the resume and kexec cases; deferring the MTRR/PAT cleanup due to this
> potential issue doesn't seem appropriate, as the cleanup doesn't change the
> behavior here.

Ok, what's wrong with adding a special hotplug level just for that thing
and running it very early? Practically at pretty much the point in time
where it was called before, in identify_secondary_cpu()?

Having a special one is warranted, as you explain, I'd say.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-30 11:55                                       ` Borislav Petkov
@ 2022-09-30 13:11                                         ` Juergen Gross
  2022-09-30 13:25                                           ` Borislav Petkov
  0 siblings, 1 reply; 41+ messages in thread
From: Juergen Gross @ 2022-09-30 13:11 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin


On 30.09.22 13:55, Borislav Petkov wrote:
> On Thu, Sep 29, 2022 at 10:26:59AM +0200, Juergen Gross wrote:
>> So right now I'm inclined to be better on the safe side by not adding any
>> cpu hotplug hook, but to use just the same "delayed AP init" flag as today,
>> just renaming it. This would leave the delayed MTRR/PAT init in place for
>> resume and kexec cases, but deferring the MTRR/PAT cleanup due to this
>> potential issue seems not appropriate, as the cleanup isn't changing the
>> behavior here.
> 
> Ok, what's wrong with adding a special hotplug level just for that thing
> and running it very early? Practically at pretty much the point in time
> where it was called before, in identify_secondary_cpu()?

Yes, this can be done. It would practically have to be the first one just
after CPUHP_BRINGUP_CPU.
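
That is, something like this in enum cpuhp_state (a placement sketch
only, using the name floated earlier in this thread; the surrounding
entries are abridged):

enum cpuhp_state {
	CPUHP_OFFLINE = 0,
	/* ... all pre-bringup states ... */
	CPUHP_BRINGUP_CPU,
	CPUHP_AP_CACHECTRL_ONLINE,	/* new: MTRR/PAT init, first AP state */
	/* ... the remaining AP states, incl. CPUHP_AP_ONLINE_DYN ... */
	CPUHP_ONLINE,
};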

The question is whether we really want to call the MTRR/PAT initialization
on hotplugged cpus only after enabling interrupts. Note that the callbacks
are activated only at the end of start_secondary(), while today MTRR/PAT
initialization is called some time earlier by:

   start_secondary()
     smp_callin()
       smp_store_cpu_info()
         identify_secondary_cpu()
           mtrr_ap_init()

I don't think this is a real problem, but I wanted to mention it.

The next question would be why MTRR/PAT init should be special (meaning:
why aren't all the other functions called that early realized via
callbacks, too)? Is it just because of the special handling during
boot/resume?

It might be worth a discussion whether there shouldn't be a special group
of callbacks activated BEFORE interrupts are being enabled.

> Having a special one is warranted, as you explain, I'd say.

Thanks. I'll write a patch for that.


Juergen


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init
  2022-09-30 13:11                                         ` Juergen Gross
@ 2022-09-30 13:25                                           ` Borislav Petkov
  0 siblings, 0 replies; 41+ messages in thread
From: Borislav Petkov @ 2022-09-30 13:25 UTC (permalink / raw)
  To: Juergen Gross
  Cc: xen-devel, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, H. Peter Anvin

On Fri, Sep 30, 2022 at 03:11:07PM +0200, Juergen Gross wrote:
> Yes, this can be done. It would practically have to be the first one just
> after CPUHP_BRINGUP_CPU.

Right.

> The question is whether we really want to call the MTRR/PAT initialization
> on hotplugged cpus only after enabling interrupts. Note that the callbacks
> are activated only at the end of start_secondary(), while today MTRR/PAT
> initialization is called some time earlier by:
> 
>   start_secondary()
>     smp_callin()
>       smp_store_cpu_info()
>         identify_secondary_cpu()
>           mtrr_ap_init()
> 
> I don't think this is a real problem, but I wanted to mention it.

Yep, I saw that too, but I don't think there will be a problem either.
I mean, as you point out, it should be early enough not to need proper
MTRR/PAT settings yet.

But we'll make sure we test this real good too.

> The next question would be why MTRR/PAT init should be special
> (meaning: why aren't all the other functions called that early
> realized via callbacks, too)?

Well, our init code is crazy. Frankly, I don't see why more of the
"init stuff on the freshly hotplugged CPU" work isn't done there...

> Is it just because of the special handling during boot/resume?

... unless this is the case, ofc. Right.

> It might be worth a discussion whether there shouldn't be a special group
> of callbacks activated BEFORE interrupts are being enabled.

That's a good question. /me writes it down to ask tglx when he gets back.

I mean, that early I don't think it matters whether IRQs are enabled
or not. But this'll need to be audited on a case-by-case basis. As I
said, our boot code is nuts with stuff bolted on everywhere for whatever
reasons.

> Thanks. I'll write a patch for that.

Thanks too.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2022-09-30 13:26 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-08  8:49 [PATCH v3 00/10] x86: make pat and mtrr independent from each other Juergen Gross
2022-09-08  8:49 ` [PATCH v3 01/10] x86/mtrr: add comment for set_mtrr_state() serialization Juergen Gross
2022-09-08  8:49 ` [PATCH v3 02/10] x86/mtrr: remove unused cyrix_set_all() function Juergen Gross
2022-09-08  8:49 ` [PATCH v3 03/10] x86/mtrr: replace use_intel() with a local flag Juergen Gross
2022-09-11 10:16   ` Borislav Petkov
2022-09-12  9:10     ` Juergen Gross
2022-09-19 19:10       ` Borislav Petkov
2022-09-28 10:17       ` Juergen Gross
2022-09-08  8:49 ` [PATCH v3 04/10] x86: move some code out of arch/x86/kernel/cpu/mtrr Juergen Gross
2022-09-11 11:02   ` Borislav Petkov
2022-09-12  9:11     ` Juergen Gross
2022-09-08  8:49 ` [PATCH v3 05/10] x86/mtrr: split generic_set_all() Juergen Gross
2022-09-19 19:25   ` Borislav Petkov
2022-09-08  8:49 ` [PATCH v3 06/10] x86/mtrr: remove set_all callback from struct mtrr_ops Juergen Gross
2022-09-08  8:49 ` [PATCH v3 07/10] x86/mtrr: simplify mtrr_bp_init() Juergen Gross
2022-09-26 18:40   ` Borislav Petkov
2022-09-08  8:49 ` [PATCH v3 08/10] x86/mtrr: let cache_aps_delayed_init replace mtrr_aps_delayed_init Juergen Gross
2022-09-26 21:11   ` Borislav Petkov
2022-09-27  8:57     ` Juergen Gross
2022-09-27 10:10       ` Borislav Petkov
2022-09-27 10:14         ` Juergen Gross
2022-09-27 11:19           ` Borislav Petkov
2022-09-27 11:25             ` Juergen Gross
2022-09-27 12:13               ` Borislav Petkov
2022-09-27 12:21                 ` Juergen Gross
2022-09-27 23:16                   ` Borislav Petkov
2022-09-28  5:30                     ` Juergen Gross
2022-09-28  6:16                       ` Juergen Gross
2022-09-28 10:48                         ` Borislav Petkov
2022-09-28 11:14                           ` Juergen Gross
2022-09-28 11:22                             ` Borislav Petkov
2022-09-28 13:43                               ` Juergen Gross
2022-09-28 16:12                                 ` Borislav Petkov
2022-09-28 16:32                                   ` Juergen Gross
2022-09-28 16:39                                     ` Borislav Petkov
2022-09-29  8:26                                     ` Juergen Gross
2022-09-30 11:55                                       ` Borislav Petkov
2022-09-30 13:11                                         ` Juergen Gross
2022-09-30 13:25                                           ` Borislav Petkov
2022-09-08  8:49 ` [PATCH v3 09/10] x86/mtrr: add a stop_machine() handler calling only cache_cpu_init() Juergen Gross
2022-09-08  8:49 ` [PATCH v3 10/10] x86: decouple pat and mtrr handling Juergen Gross
