* [RFC PATCH 0/3] MSR cleanups
@ 2014-02-09 13:48 Borislav Petkov
  2014-02-09 13:48 ` [RFC PATCH 1/3] x86: Add another set of MSR accessor functions Borislav Petkov
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Borislav Petkov @ 2014-02-09 13:48 UTC (permalink / raw)
  To: X86 ML; +Cc: LKML, Borislav Petkov

From: Borislav Petkov <bp@suse.de>

Hi guys,

this is just a preview of what I think we talked about. I'm sending it
out now to gather some feedback before I go and test it more thoroughly
(it boots in a guest though :)).

Thanks.

Borislav Petkov (3):
  x86: Add another set of MSR accessor functions
  x86, AMD: Convert to the new MSR accessors
  x86, intel: Convert to the new MSR accessors

 arch/x86/include/asm/msr.h            |   2 +
 arch/x86/include/uapi/asm/msr-index.h |   6 +-
 arch/x86/kernel/cpu/amd.c             |  46 ++++-----------
 arch/x86/kernel/cpu/intel.c           |  29 +++------
 arch/x86/lib/msr.c                    | 107 +++++++++++++++++++++++++++++++++-
 5 files changed, 131 insertions(+), 59 deletions(-)

-- 
1.8.5.2.192.g7794a68


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [RFC PATCH 1/3] x86: Add another set of MSR accessor functions
  2014-02-09 13:48 [RFC PATCH 0/3] MSR cleanups Borislav Petkov
@ 2014-02-09 13:48 ` Borislav Petkov
  2014-02-17 16:21   ` H. Peter Anvin
  2014-02-09 13:48 ` [RFC PATCH 2/3] x86, AMD: Convert to the new MSR accessors Borislav Petkov
  2014-02-09 13:48 ` [RFC PATCH 3/3] x86, intel: " Borislav Petkov
  2 siblings, 1 reply; 6+ messages in thread
From: Borislav Petkov @ 2014-02-09 13:48 UTC (permalink / raw)
  To: X86 ML; +Cc: LKML, Borislav Petkov

From: Borislav Petkov <bp@suse.de>

We very often need to set or clear a bit in an MSR as part of some
hardware configuration task. Add generic versions of this repeated
functionality in order to save us a bunch of duplicated code in the
early CPU vendor detection/configuration code.
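
Concretely, the pattern being replaced is the open-coded read-modify-write
that patches 2 and 3 remove, along these lines (illustrative kernel-style
sketch only; MSR_FOO is a made-up name):

```c
	/* before: open-coded read-modify-write of one MSR bit */
	rdmsrl(MSR_FOO, val);
	val |= BIT(15);
	wrmsrl(MSR_FOO, val);

	/* after: one call, with the RMW hidden in the helper */
	msr_set_bit(MSR_FOO, 15);
```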

Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/include/asm/msr.h |   2 +
 arch/x86/lib/msr.c         | 107 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 108 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index e139b13f2a33..de36f22eb0b9 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -214,6 +214,8 @@ do {                                                            \
 
 struct msr *msrs_alloc(void);
 void msrs_free(struct msr *msrs);
+int msr_set_bit(u32 msr, u8 bit);
+int msr_clear_bit(u32 msr, u8 bit);
 
 #ifdef CONFIG_SMP
 int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
diff --git a/arch/x86/lib/msr.c b/arch/x86/lib/msr.c
index 8f8eebdca7d4..ebab60c549ae 100644
--- a/arch/x86/lib/msr.c
+++ b/arch/x86/lib/msr.c
@@ -8,7 +8,7 @@ struct msr *msrs_alloc(void)
 
 	msrs = alloc_percpu(struct msr);
 	if (!msrs) {
-		pr_warning("%s: error allocating msrs\n", __func__);
+		pr_warn("%s: error allocating msrs\n", __func__);
 		return NULL;
 	}
 
@@ -21,3 +21,108 @@ void msrs_free(struct msr *msrs)
 	free_percpu(msrs);
 }
 EXPORT_SYMBOL(msrs_free);
+
+/**
+ * msr_read - Read an MSR with error handling
+ *
+ * @msr: MSR to read
+ * @m: value to read into
+ *
+ * Returns the read data in @m only on success; otherwise the output
+ * argument @m is left unchanged.
+ *
+ */
+int msr_read(u32 msr, struct msr *m)
+{
+	int err;
+	u64 val;
+
+	val = native_read_msr_safe(msr, &err);
+	if (err)
+		pr_warn("%s: Error reading MSR 0x%08x\n", __func__, msr);
+	else
+		m->q = val;
+
+	return err;
+}
+
+/**
+ * msr_write - Write an MSR with error handling
+ *
+ * @msr: MSR to write
+ * @m: value to write
+ */
+int msr_write(u32 msr, struct msr *m)
+{
+	int err;
+
+	err = native_write_msr_safe(msr, m->l, m->h);
+	if (err)
+		pr_warn("%s: Error writing MSR 0x%08x\n", __func__, msr);
+
+	return err;
+}
+
+static int __flip_bit(u32 msr, u8 bit, bool set)
+{
+	struct msr m;
+
+	if (bit > 63)
+		return -1;
+
+	if (msr_read(msr, &m))
+		return -1;
+
+	if (set) {
+		if ((BIT_64(bit) | m.q) == m.q)
+			return 0;
+
+		m.q |= BIT_64(bit);
+	} else {
+		if ((m.q & ~BIT_64(bit)) == m.q)
+			return 0;
+
+		m.q &= ~BIT_64(bit);
+	}
+
+	if (msr_write(msr, &m))
+		return -1;
+
+	return 1;
+}
+
+/**
+ * Set @bit in a MSR @msr.
+ *
+ * Retval:
+ * < 0: An error was encountered.
+ * = 0: Bit was already set.
+ * > 0: Hardware accepted the MSR write.
+ */
+int msr_set_bit(u32 msr, u8 bit)
+{
+	int err = __flip_bit(msr, bit, true);
+	if (err < 0)
+		pr_err("%s: Error setting bit %d in MSR 0x%08x.\n",
+			__func__, bit, msr);
+
+	return err;
+}
+
+/**
+ * Clear @bit in a MSR @msr.
+ *
+ * Retval:
+ * < 0: An error was encountered.
+ * = 0: Bit was already cleared.
+ * > 0: Hardware accepted the MSR write.
+ */
+int msr_clear_bit(u32 msr, u8 bit)
+{
+	int err = __flip_bit(msr, bit, false);
+	if (err < 0)
+		pr_err("%s: Error clearing bit %d in MSR 0x%08x.\n",
+			__func__, bit, msr);
+
+	return err;
+}
-- 
1.8.5.2.192.g7794a68



* [RFC PATCH 2/3] x86, AMD: Convert to the new MSR accessors
  2014-02-09 13:48 [RFC PATCH 0/3] MSR cleanups Borislav Petkov
  2014-02-09 13:48 ` [RFC PATCH 1/3] x86: Add another set of MSR accessor functions Borislav Petkov
@ 2014-02-09 13:48 ` Borislav Petkov
  2014-02-09 13:48 ` [RFC PATCH 3/3] x86, intel: " Borislav Petkov
  2 siblings, 0 replies; 6+ messages in thread
From: Borislav Petkov @ 2014-02-09 13:48 UTC (permalink / raw)
  To: X86 ML; +Cc: LKML, Borislav Petkov

From: Borislav Petkov <bp@suse.de>

... and save us a bunch of code.

Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/kernel/cpu/amd.c | 46 +++++++++++-----------------------------------
 1 file changed, 11 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index d3153e281d72..3218de9a171e 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -233,9 +233,7 @@ static void init_amd_k7(struct cpuinfo_x86 *c)
 	if (c->x86_model >= 6 && c->x86_model <= 10) {
 		if (!cpu_has(c, X86_FEATURE_XMM)) {
 			printk(KERN_INFO "Enabling disabled K7/SSE Support.\n");
-			rdmsr(MSR_K7_HWCR, l, h);
-			l &= ~0x00008000;
-			wrmsr(MSR_K7_HWCR, l, h);
+			msr_clear_bit(MSR_K7_HWCR, 15);
 			set_cpu_cap(c, X86_FEATURE_XMM);
 		}
 	}
@@ -509,14 +507,8 @@ static void early_init_amd(struct cpuinfo_x86 *c)
 #endif
 
 	/* F16h erratum 793, CVE-2013-6885 */
-	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
-		u64 val;
-
-		rdmsrl(MSR_AMD64_LS_CFG, val);
-		if (!(val & BIT(15)))
-			wrmsrl(MSR_AMD64_LS_CFG, val | BIT(15));
-	}
-
+	if (c->x86 == 0x16 && c->x86_model <= 0xf)
+		msr_set_bit(MSR_AMD64_LS_CFG, 15);
 }
 
 static const int amd_erratum_383[];
@@ -536,11 +528,8 @@ static void init_amd(struct cpuinfo_x86 *c)
 	 * Errata 63 for SH-B3 steppings
 	 * Errata 122 for all steppings (F+ have it disabled by default)
 	 */
-	if (c->x86 == 0xf) {
-		rdmsrl(MSR_K7_HWCR, value);
-		value |= 1 << 6;
-		wrmsrl(MSR_K7_HWCR, value);
-	}
+	if (c->x86 == 0xf)
+		msr_set_bit(MSR_K7_HWCR, 6);
 #endif
 
 	early_init_amd(c);
@@ -623,14 +612,11 @@ static void init_amd(struct cpuinfo_x86 *c)
 	    (c->x86_model >= 0x10) && (c->x86_model <= 0x1f) &&
 	    !cpu_has(c, X86_FEATURE_TOPOEXT)) {
 
-		if (!rdmsrl_safe(0xc0011005, &value)) {
-			value |= 1ULL << 54;
-			wrmsrl_safe(0xc0011005, value);
+		if (msr_set_bit(0xc0011005, 54) > 0) {
 			rdmsrl(0xc0011005, value);
 			if (value & (1ULL << 54)) {
 				set_cpu_cap(c, X86_FEATURE_TOPOEXT);
-				printk(KERN_INFO FW_INFO "CPU: Re-enabling "
-				  "disabled Topology Extensions Support\n");
+				pr_info(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support\n");
 			}
 		}
 	}
@@ -709,19 +695,12 @@ static void init_amd(struct cpuinfo_x86 *c)
 		 * Disable GART TLB Walk Errors on Fam10h. We do this here
 		 * because this is always needed when GART is enabled, even in a
 		 * kernel which has no MCE support built in.
-		 * BIOS should disable GartTlbWlk Errors themself. If
-		 * it doesn't do it here as suggested by the BKDG.
+		 * BIOS should disable GartTlbWlk Errors already. If
+		 * it doesn't, do it here as suggested by the BKDG.
 		 *
 		 * Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=33012
 		 */
-		u64 mask;
-		int err;
-
-		err = rdmsrl_safe(MSR_AMD64_MCx_MASK(4), &mask);
-		if (err == 0) {
-			mask |= (1 << 10);
-			wrmsrl_safe(MSR_AMD64_MCx_MASK(4), mask);
-		}
+		msr_set_bit(MSR_AMD64_MCx_MASK(4), 10);
 
 		/*
 		 * On family 10h BIOS may not have properly enabled WC+ support,
@@ -733,10 +712,7 @@ static void init_amd(struct cpuinfo_x86 *c)
 		 * NOTE: we want to use the _safe accessors so as not to #GP kvm
 		 * guests on older kvm hosts.
 		 */
-
-		rdmsrl_safe(MSR_AMD64_BU_CFG2, &value);
-		value &= ~(1ULL << 24);
-		wrmsrl_safe(MSR_AMD64_BU_CFG2, value);
+		msr_clear_bit(MSR_AMD64_BU_CFG2, 24);
 
 		if (cpu_has_amd_erratum(c, amd_erratum_383))
 			set_cpu_bug(c, X86_BUG_AMD_TLB_MMATCH);
-- 
1.8.5.2.192.g7794a68



* [RFC PATCH 3/3] x86, intel: Convert to the new MSR accessors
  2014-02-09 13:48 [RFC PATCH 0/3] MSR cleanups Borislav Petkov
  2014-02-09 13:48 ` [RFC PATCH 1/3] x86: Add another set of MSR accessor functions Borislav Petkov
  2014-02-09 13:48 ` [RFC PATCH 2/3] x86, AMD: Convert to the new MSR accessors Borislav Petkov
@ 2014-02-09 13:48 ` Borislav Petkov
  2 siblings, 0 replies; 6+ messages in thread
From: Borislav Petkov @ 2014-02-09 13:48 UTC (permalink / raw)
  To: X86 ML; +Cc: LKML, Borislav Petkov

From: Borislav Petkov <bp@suse.de>

... and save some lines of code.

Signed-off-by: Borislav Petkov <bp@suse.de>
---
 arch/x86/include/uapi/asm/msr-index.h |  6 ++++--
 arch/x86/kernel/cpu/intel.c           | 29 ++++++++---------------------
 2 files changed, 12 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/uapi/asm/msr-index.h b/arch/x86/include/uapi/asm/msr-index.h
index c19fc60ff062..0a93ea3e70da 100644
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -368,14 +368,16 @@
 #define THERM_LOG_THRESHOLD1           (1 << 9)
 
 /* MISC_ENABLE bits: architectural */
-#define MSR_IA32_MISC_ENABLE_FAST_STRING	(1ULL << 0)
+#define BIT_FAST_STRING				0
+#define MSR_IA32_MISC_ENABLE_FAST_STRING	(1ULL << BIT_FAST_STRING)
 #define MSR_IA32_MISC_ENABLE_TCC		(1ULL << 1)
 #define MSR_IA32_MISC_ENABLE_EMON		(1ULL << 7)
 #define MSR_IA32_MISC_ENABLE_BTS_UNAVAIL	(1ULL << 11)
 #define MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL	(1ULL << 12)
 #define MSR_IA32_MISC_ENABLE_ENHANCED_SPEEDSTEP	(1ULL << 16)
 #define MSR_IA32_MISC_ENABLE_MWAIT		(1ULL << 18)
-#define MSR_IA32_MISC_ENABLE_LIMIT_CPUID	(1ULL << 22)
+#define BIT_LIMIT_CPUID				22
+#define MSR_IA32_MISC_ENABLE_LIMIT_CPUID	(1ULL << BIT_LIMIT_CPUID)
 #define MSR_IA32_MISC_ENABLE_XTPR_DISABLE	(1ULL << 23)
 #define MSR_IA32_MISC_ENABLE_XD_DISABLE		(1ULL << 34)
 
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 3db61c644e44..ce12e535c9a6 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -31,11 +31,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
 
 	/* Unmask CPUID levels if masked: */
 	if (c->x86 > 6 || (c->x86 == 6 && c->x86_model >= 0xd)) {
-		rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
-
-		if (misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID) {
-			misc_enable &= ~MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
-			wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
+		if (msr_clear_bit(MSR_IA32_MISC_ENABLE, BIT_LIMIT_CPUID) > 0) {
 			c->cpuid_level = cpuid_eax(0);
 			get_cpu_cap(c);
 		}
@@ -129,16 +125,9 @@ static void early_init_intel(struct cpuinfo_x86 *c)
 	 * Ingo Molnar reported a Pentium D (model 6) and a Xeon
 	 * (model 2) with the same problem.
 	 */
-	if (c->x86 == 15) {
-		rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
-
-		if (misc_enable & MSR_IA32_MISC_ENABLE_FAST_STRING) {
-			printk(KERN_INFO "kmemcheck: Disabling fast string operations\n");
-
-			misc_enable &= ~MSR_IA32_MISC_ENABLE_FAST_STRING;
-			wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
-		}
-	}
+	if (c->x86 == 15)
+		if (msr_clear_bit(MSR_IA32_MISC_ENABLE, BIT_FAST_STRING) > 0)
+			pr_info("kmemcheck: Disabling fast string operations\n");
 #endif
 
 	/*
@@ -229,12 +218,10 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
 	 * Hardware prefetcher may cause stale data to be loaded into the cache.
 	 */
 	if ((c->x86 == 15) && (c->x86_model == 1) && (c->x86_mask == 1)) {
-		rdmsr(MSR_IA32_MISC_ENABLE, lo, hi);
-		if ((lo & MSR_IA32_MISC_ENABLE_PREFETCH_DISABLE) == 0) {
-			printk (KERN_INFO "CPU: C0 stepping P4 Xeon detected.\n");
-			printk (KERN_INFO "CPU: Disabling hardware prefetching (Errata 037)\n");
-			lo |= MSR_IA32_MISC_ENABLE_PREFETCH_DISABLE;
-			wrmsr(MSR_IA32_MISC_ENABLE, lo, hi);
+		if (msr_set_bit(MSR_IA32_MISC_ENABLE,
+				9 /* PREFETCH_DISABLE */) > 0) {
+			pr_info("CPU: C0 stepping P4 Xeon detected.\n");
+			pr_info("CPU: Disabling hardware prefetching (Errata 037)\n");
 		}
 	}
 
-- 
1.8.5.2.192.g7794a68



* Re: [RFC PATCH 1/3] x86: Add another set of MSR accessor functions
  2014-02-09 13:48 ` [RFC PATCH 1/3] x86: Add another set of MSR accessor functions Borislav Petkov
@ 2014-02-17 16:21   ` H. Peter Anvin
  2014-02-17 20:08     ` Borislav Petkov
  0 siblings, 1 reply; 6+ messages in thread
From: H. Peter Anvin @ 2014-02-17 16:21 UTC (permalink / raw)
  To: Borislav Petkov, X86 ML; +Cc: LKML, Borislav Petkov

Good patch series overall, but I do have some issues with this one:

On 02/09/2014 05:48 AM, Borislav Petkov wrote:
> + */
> +int msr_read(u32 msr, struct msr *m)
> +{
> +	int err;
> +	u64 val;
> +
> +	val = native_read_msr_safe(msr, &err);

I don't think we should use the native_ function here.

> +	if (err)
> +		pr_warn("%s: Error reading MSR 0x%08x\n", __func__, msr);
> +	else
> +		m->q = val;

I also don't think we should print a message if the MSR doesn't exist.
This will be a normal occurrence in a number of flows.

> +static int __flip_bit(u32 msr, u8 bit, bool set)
> +{
> +	struct msr m;
> +
> +	if (bit > 63)
> +		return -1;

Feels a bit excessive, but I'd suggest returning -EINVAL instead.

I would suggest explicitly making this an inline function.

> +	if (msr_read(msr, &m))
> +		return -1;

Return -EIO?

How about:

	m1 = m;
	if (set)
		m1.q |= BIT_64(bit);
	else
		m1.q &= ~BIT_64(bit);

	if (m1.q != m.q) {
		if (msr_write(...))
			...
	}

> +
> +/**
> + * Set @bit in a MSR @msr.
> + *
> + * Retval:
> + * < 0: An error was encountered.
> + * = 0: Bit was already set.
> + * > 0: Hardware accepted the MSR write.
> + */
> +int msr_set_bit(u32 msr, u8 bit)
> +{
> +	int err = __flip_bit(msr, bit, true);
> +	if (err < 0)
> +		pr_err("%s: Error setting bit %d in MSR 0x%08x.\n",
> +			__func__, bit, msr);
> +
> +	return err;
> +}

Again, I'm not sure if printing a message here makes sense.  In fact,
this is the second message you print for the same thing.

	-hpa



* Re: [RFC PATCH 1/3] x86: Add another set of MSR accessor functions
  2014-02-17 16:21   ` H. Peter Anvin
@ 2014-02-17 20:08     ` Borislav Petkov
  0 siblings, 0 replies; 6+ messages in thread
From: Borislav Petkov @ 2014-02-17 20:08 UTC (permalink / raw)
  To: H. Peter Anvin; +Cc: X86 ML, LKML, Borislav Petkov

On Mon, Feb 17, 2014 at 08:21:47AM -0800, H. Peter Anvin wrote:
> Good patch series overall, but I do have some issues with this one:
> 
> On 02/09/2014 05:48 AM, Borislav Petkov wrote:
> > + */
> > +int msr_read(u32 msr, struct msr *m)
> > +{
> > +	int err;
> > +	u64 val;
> > +
> > +	val = native_read_msr_safe(msr, &err);
> 
> I don't think we should use the native_ function here.

Ah, right, pv gunk, I conveniently victimized those cache lines away
from my head. :-P

rdmsrl_safe it is.

> 
> > +	if (err)
> > +		pr_warn("%s: Error reading MSR 0x%08x\n", __func__, msr);
> > +	else
> > +		m->q = val;
> 
> I also don't think we should print a message if the MSR doesn't exist.
> This will be a normal occurrence in a number of flows.

Right, I was suspecting the screaming in dmesg could upset people but
wasn't sure. Good point.

> > +static int __flip_bit(u32 msr, u8 bit, bool set)
> > +{
> > +	struct msr m;
> > +
> > +	if (bit > 63)
> > +		return -1;
> 
> Feels a bit excessive, but I'd suggest returning -EINVAL instead.

Ok.

> I would suggest explicitly making this an inline function.

Sure.

> > +	if (msr_read(msr, &m))
> > +		return -1;
> 
> Return -EIO?

Actually, msr_read already gives a retval so I can simply pass that
along. And it *is* -EIO already :-)

> How about:
> 
> 	m1 = m;
> 	if (set)
> 		m1.q |= BIT_64(bit);
> 	else
> 		m1.q &= ~BIT_64(bit);
> 
> 	if (m1.q != m.q) {
> 		if (msr_write(...))
> 			...

It's all the same to me, sure.

> Again, I'm not sure if printing a message here makes sense. In fact,
> this is the second message you print for the same thing.

Ok, I'll make them completely silent - if their users wanna say
something, they can do that based on the retval. Good.

Thanks for checking them out - I'll start playing with the revised
versions on real hw.

-- 
Regards/Gruss,
    Boris.

Sent from a fat crate under my desk. Formatting is fine.
--
