linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] x86/MSR: Add an MSR write callback
@ 2020-06-10 11:00 Borislav Petkov
  2020-06-10 11:00 ` [PATCH 1/3] x86/msr: Pass a single MSR value to __rwmsr_on_cpus() too Borislav Petkov
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Borislav Petkov @ 2020-06-10 11:00 UTC (permalink / raw)
  To: X86 ML; +Cc: Tom Lendacky, LKML

From: Borislav Petkov <bp@suse.de>

Hi all,

here's a small set which adds a callback to execute when an MSR is
written. Patch 3 explains why it is needed.

Thx.

Borislav Petkov (3):
  x86/msr: Pass a single MSR value to __rwmsr_on_cpus() too
  x86/msr: Add wrmsrl_val_on_cpus()
  x86/msr: Add an MSR write callback

 arch/x86/include/asm/msr.h   | 10 ++++++++++
 arch/x86/kernel/cpu/bugs.c   |  2 +-
 arch/x86/kernel/cpu/common.c | 31 +++++++++++++++++++++++++++++++
 arch/x86/kernel/msr.c        |  3 +++
 arch/x86/lib/msr-smp.c       | 26 +++++++++++++++++++++++---
 5 files changed, 68 insertions(+), 4 deletions(-)

-- 
2.21.0



* [PATCH 1/3] x86/msr: Pass a single MSR value to __rwmsr_on_cpus() too
  2020-06-10 11:00 [PATCH 0/3] x86/MSR: Add an MSR write callback Borislav Petkov
@ 2020-06-10 11:00 ` Borislav Petkov
  2020-06-10 11:00 ` [PATCH 2/3] x86/msr: Add wrmsrl_val_on_cpus() Borislav Petkov
  2020-06-10 11:00 ` [PATCH 3/3] x86/msr: Add an MSR write callback Borislav Petkov
  2 siblings, 0 replies; 7+ messages in thread
From: Borislav Petkov @ 2020-06-10 11:00 UTC (permalink / raw)
  To: X86 ML; +Cc: Tom Lendacky, LKML

From: Borislav Petkov <bp@suse.de>

__rwmsr_on_cpus() is the low-level worker function called by the
exported {rd,wr}msr_on_cpus() when an MSR is to be written with
per-CPU values, or read from a set of CPUs into the array of per-CPU
values in @msrs.

The low-level machinery also supports passing a single MSR value, for
the single-CPU {rd,wr}msr_on_cpu() handlers.

Pass that value to __rwmsr_on_cpus() too, and enforce that callers
supply either a single MSR value or an array of per-CPU MSR values,
but never both.

This is the prerequisite for supporting writing the same MSR value on
multiple CPUs.

Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/lib/msr-smp.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index fee8b9c0520c..15e1157d6b29 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -97,7 +97,7 @@ int wrmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
 EXPORT_SYMBOL(wrmsrl_on_cpu);
 
 static void __rwmsr_on_cpus(const struct cpumask *mask, u32 msr_no,
-			    struct msr *msrs,
+			    struct msr *msrs, u64 reg_val,
 			    void (*msr_func) (void *info))
 {
 	struct msr_info rv;
@@ -105,8 +105,13 @@ static void __rwmsr_on_cpus(const struct cpumask *mask, u32 msr_no,
 
 	memset(&rv, 0, sizeof(rv));
 
+	/* Can't have both. */
+	if (WARN_ON(msrs && reg_val))
+		return;
+
 	rv.msrs	  = msrs;
 	rv.msr_no = msr_no;
+	rv.reg.q  = reg_val;
 
 	this_cpu = get_cpu();
 
@@ -126,7 +131,7 @@ static void __rwmsr_on_cpus(const struct cpumask *mask, u32 msr_no,
  */
 void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr *msrs)
 {
-	__rwmsr_on_cpus(mask, msr_no, msrs, __rdmsr_on_cpu);
+	__rwmsr_on_cpus(mask, msr_no, msrs, 0, __rdmsr_on_cpu);
 }
 EXPORT_SYMBOL(rdmsr_on_cpus);
 
@@ -140,7 +145,7 @@ EXPORT_SYMBOL(rdmsr_on_cpus);
  */
 void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr *msrs)
 {
-	__rwmsr_on_cpus(mask, msr_no, msrs, __wrmsr_on_cpu);
+	__rwmsr_on_cpus(mask, msr_no, msrs, 0, __wrmsr_on_cpu);
 }
 EXPORT_SYMBOL(wrmsr_on_cpus);
 
-- 
2.21.0



* [PATCH 2/3] x86/msr: Add wrmsrl_val_on_cpus()
  2020-06-10 11:00 [PATCH 0/3] x86/MSR: Add an MSR write callback Borislav Petkov
  2020-06-10 11:00 ` [PATCH 1/3] x86/msr: Pass a single MSR value to __rwmsr_on_cpus() too Borislav Petkov
@ 2020-06-10 11:00 ` Borislav Petkov
  2020-06-10 11:00 ` [PATCH 3/3] x86/msr: Add an MSR write callback Borislav Petkov
  2 siblings, 0 replies; 7+ messages in thread
From: Borislav Petkov @ 2020-06-10 11:00 UTC (permalink / raw)
  To: X86 ML; +Cc: Tom Lendacky, LKML

From: Borislav Petkov <bp@suse.de>

Add a helper which writes the same MSR value on a set of CPUs.

Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/include/asm/msr.h |  8 ++++++++
 arch/x86/lib/msr-smp.c     | 15 +++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 86f20d520a07..71393a4c2104 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -342,6 +342,7 @@ int rdmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
 int wrmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
 void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr *msrs);
 void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr *msrs);
+void wrmsrl_val_on_cpus(const struct cpumask *mask, u32 msr_no, u64 reg_val);
 int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
 int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
 int rdmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
@@ -379,6 +380,13 @@ static inline void wrmsr_on_cpus(const struct cpumask *m, u32 msr_no,
 {
 	wrmsr_on_cpu(0, msr_no, msrs[0].l, msrs[0].h);
 }
+
+static inline void wrmsrl_val_on_cpus(const struct cpumask *mask, u32 msr_no,
+				      u64 reg_val)
+{
+	wrmsrl(msr_no, reg_val);
+}
+
 static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no,
 				    u32 *l, u32 *h)
 {
diff --git a/arch/x86/lib/msr-smp.c b/arch/x86/lib/msr-smp.c
index 15e1157d6b29..f67ee2fdec69 100644
--- a/arch/x86/lib/msr-smp.c
+++ b/arch/x86/lib/msr-smp.c
@@ -149,6 +149,21 @@ void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr *msrs)
 }
 EXPORT_SYMBOL(wrmsr_on_cpus);
 
+/*
+ * wrmsr a single MSR value on a bunch of CPUs. To be used for MSRs which are
+ * the same on each core.
+ *
+ * @mask:       which CPUs
+ * @msr_no:     which MSR
+ * @reg_val:	MSR value
+ *
+ */
+void wrmsrl_val_on_cpus(const struct cpumask *mask, u32 msr_no, u64 reg_val)
+{
+	__rwmsr_on_cpus(mask, msr_no, NULL, reg_val, __wrmsr_on_cpu);
+}
+EXPORT_SYMBOL(wrmsrl_val_on_cpus);
+
 struct msr_info_completion {
 	struct msr_info		msr;
 	struct completion	done;
-- 
2.21.0



* [PATCH 3/3] x86/msr: Add an MSR write callback
  2020-06-10 11:00 [PATCH 0/3] x86/MSR: Add an MSR write callback Borislav Petkov
  2020-06-10 11:00 ` [PATCH 1/3] x86/msr: Pass a single MSR value to __rwmsr_on_cpus() too Borislav Petkov
  2020-06-10 11:00 ` [PATCH 2/3] x86/msr: Add wrmsrl_val_on_cpus() Borislav Petkov
@ 2020-06-10 11:00 ` Borislav Petkov
  2020-06-10 12:32   ` Peter Zijlstra
  2 siblings, 1 reply; 7+ messages in thread
From: Borislav Petkov @ 2020-06-10 11:00 UTC (permalink / raw)
  To: X86 ML; +Cc: Tom Lendacky, LKML, Eric Morton

From: Borislav Petkov <bp@suse.de>

Add a callback which gets executed after an MSR is written from
userspace. This is needed because AMD's MSR_AMD64_LS_CFG MSR is cached
in the kernel, and the cached value needs to be updated after the
write, otherwise the write doesn't stick.

Reported-by: Eric Morton <Eric.Morton@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/include/asm/msr.h   |  2 ++
 arch/x86/kernel/cpu/bugs.c   |  2 +-
 arch/x86/kernel/cpu/common.c | 31 +++++++++++++++++++++++++++++++
 arch/x86/kernel/msr.c        |  3 +++
 4 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 71393a4c2104..d151eb8cbd51 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -335,6 +335,8 @@ void msrs_free(struct msr *msrs);
 int msr_set_bit(u32 msr, u8 bit);
 int msr_clear_bit(u32 msr, u8 bit);
 
+void msr_write_callback(int cpu, u32 reg, u32 low, u32 hi);
+
 #ifdef CONFIG_SMP
 int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
 int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ed54b3b21c39..bfaf303d6965 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -57,7 +57,7 @@ static u64 __ro_after_init x86_spec_ctrl_mask = SPEC_CTRL_IBRS;
  * AMD specific MSR info for Speculative Store Bypass control.
  * x86_amd_ls_cfg_ssbd_mask is initialized in identify_boot_cpu().
  */
-u64 __ro_after_init x86_amd_ls_cfg_base;
+u64 x86_amd_ls_cfg_base;
 u64 __ro_after_init x86_amd_ls_cfg_ssbd_mask;
 
 /* Control conditional STIBP in switch_to() */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bed0cb83fe24..8d6fa73940a3 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -55,6 +55,7 @@
 #include <asm/microcode_intel.h>
 #include <asm/intel-family.h>
 #include <asm/cpu_device_id.h>
+#include <asm/spec-ctrl.h>
 #include <asm/uv/uv.h>
 
 #include "cpu.h"
@@ -1960,3 +1961,33 @@ void arch_smt_update(void)
 	/* Check whether IPI broadcasting can be enabled */
 	apic_smt_update();
 }
+
+void msr_write_callback(int cpu, u32 reg, u32 lo, u32 hi)
+{
+	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
+		return;
+
+	if (reg == MSR_AMD64_LS_CFG) {
+		u64 new = ((u64)hi << 32) | lo;
+		cpumask_var_t tmp_mask;
+
+		if (new == x86_amd_ls_cfg_base)
+			return;
+
+		if (WARN_ON_ONCE(!alloc_cpumask_var(&tmp_mask, GFP_KERNEL)))
+			return;
+
+		/*
+		 * Remove the @cpu it was just written to from the mask
+		 * so that it doesn't get written to again pointlessly.
+		 */
+		cpumask_xor(tmp_mask, cpu_online_mask, cpumask_of(cpu));
+
+		x86_amd_ls_cfg_base = new;
+
+		wrmsrl_val_on_cpus(tmp_mask, MSR_AMD64_LS_CFG, new);
+
+		free_cpumask_var(tmp_mask);
+	}
+}
+EXPORT_SYMBOL_GPL(msr_write_callback);
diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
index 1547be359d7f..167125088eda 100644
--- a/arch/x86/kernel/msr.c
+++ b/arch/x86/kernel/msr.c
@@ -95,6 +95,9 @@ static ssize_t msr_write(struct file *file, const char __user *buf,
 		err = wrmsr_safe_on_cpu(cpu, reg, data[0], data[1]);
 		if (err)
 			break;
+
+		msr_write_callback(cpu, reg, data[0], data[1]);
+
 		tmp += 2;
 		bytes += 8;
 	}
-- 
2.21.0



* Re: [PATCH 3/3] x86/msr: Add an MSR write callback
  2020-06-10 11:00 ` [PATCH 3/3] x86/msr: Add an MSR write callback Borislav Petkov
@ 2020-06-10 12:32   ` Peter Zijlstra
  2020-06-10 13:21     ` Borislav Petkov
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2020-06-10 12:32 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: X86 ML, Tom Lendacky, LKML, Eric Morton

On Wed, Jun 10, 2020 at 01:00:37PM +0200, Borislav Petkov wrote:
> From: Borislav Petkov <bp@suse.de>
> 
> Add a callback which gets executed after an MSR is written from
> userspace. This is needed because AMD's MSR_AMD64_LS_CFG MSR is cached
> in the kernel, and the cached value needs to be updated after the
> write, otherwise the write doesn't stick.

We cache a whole bunch of MSRs in kernel. Why is this one special?

If you write using the stupid msr device, you get to keep all pieces.


* Re: [PATCH 3/3] x86/msr: Add an MSR write callback
  2020-06-10 12:32   ` Peter Zijlstra
@ 2020-06-10 13:21     ` Borislav Petkov
  2020-06-10 14:01       ` Peter Zijlstra
  0 siblings, 1 reply; 7+ messages in thread
From: Borislav Petkov @ 2020-06-10 13:21 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: X86 ML, Tom Lendacky, LKML, Eric Morton

On Wed, Jun 10, 2020 at 02:32:26PM +0200, Peter Zijlstra wrote:
> We cache a whole bunch of MSRs in kernel. Why is this one special?

If the others need the post-write handling, they should be added there
too. I did it with this one only as a start.

> If you write using the stupid msr device, you get to keep all pieces.

Yes, the tainting-on-write is the next thing that goes on top of this.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH 3/3] x86/msr: Add an MSR write callback
  2020-06-10 13:21     ` Borislav Petkov
@ 2020-06-10 14:01       ` Peter Zijlstra
  0 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2020-06-10 14:01 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: X86 ML, Tom Lendacky, LKML, Eric Morton

On Wed, Jun 10, 2020 at 03:21:31PM +0200, Borislav Petkov wrote:
> On Wed, Jun 10, 2020 at 02:32:26PM +0200, Peter Zijlstra wrote:
> > We cache a whole bunch of MSRs in kernel. Why is this one special?
> 
> If the others need the post-write handling, they should be added there
> too. I did it with this one only as a start.

Still, this is really weird. The msr device is per CPU (because MSRs
are per CPU), but this shadow value is global (because we keep the same
value on all CPUs), so you then have to broadcast IPIs around to fix up
the other CPUs, which, with a bit of bad luck, will also get written to
by userspace, causing O(n^2) IPIs.

Also, this gives some MSRs radically different behaviour from other
MSRs.

Why not create a sane sysfs interface for this LS_CFG muck in cpu/bugs.c
or so? A simple sysfs file should not be many more lines than all this.

