* [MODERATED] [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
@ 2018-04-26 23:48 konrad.wilk
  2018-04-28 19:38 ` Thomas Gleixner
  0 siblings, 1 reply; 11+ messages in thread
From: konrad.wilk @ 2018-04-26 23:48 UTC (permalink / raw)
  To: speck

Intel CPUs expose methods to:

 - Detect whether the Reduced Data Speculation capability is available
   via CPUID.7.0.EDX[31],

 - Enable Reduced Data Speculation by setting bit 2 of the SPEC_CTRL
   MSR (0x48),

 - Report via MSR_IA32_ARCH_CAPABILITIES Bit(4) that there is no need to
   enable Reduced Data Speculation.

With that in mind, if spec_store_bypass_disable=[auto,on] is selected, we set
the SPEC_CTRL MSR at boot time to enable Reduced Data Speculation if the
platform requires it.
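
As a condensed illustration of that flow (the real logic lives in
ssb_select_mitigation() in the diff below; the function name here is made up):

	static void __init ssb_enable_rds_sketch(void)
	{
		/* Nothing to do if the CPU is not affected or cannot mitigate. */
		if (!boot_cpu_has(X86_BUG_SPEC_STORE_BYPASS) ||
		    !boot_cpu_has(X86_FEATURE_RDS))
			return;

		/* Engage the mitigation; APs pick it up via the forced cap bit. */
		setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);

		/* Intel uses SPEC_CTRL MSR Bit(2) (SPEC_CTRL_RDS) for this. */
		if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
			x86_spec_ctrl_base |= SPEC_CTRL_RDS;
	}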

Note that this does not fix the KVM case where the SPEC_CTRL MSR
is exposed to guests who can muck with it; see the patch titled:
 KVM/SVM/VMX/x86/spectre_v2: Support the combination of guest IBRS and ours.

And for the firmware case (IBRS to be set), see the patch titled:
 x86/spectre_v2: Read SPEC_CTRL MSR during boot and re-use reserved bits

Note that we are moving identify_boot_cpu so that it is called _after_
ssb_select_mitigation and _before_ spectre_v2_select_mitigation.
This is required because identify_boot_cpu enables the mitigation.

And we need to call identify_boot_cpu _before_ spectre_v2_select_mitigation
because the bit that makes LFENCE serializing is set in init_amd()
- which spectre_v2_select_mitigation then uses to figure out
which mitigation to apply.

Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---

v1.2: Expand on the commit description
  s/md_v4/mdd/
  s/spec_ctrl_msr_on/spec_ctrl_priv/
  s/spec_ctrl_msr_off/spec_ctrp_unpriv/

v1.3:
 - Add comment about privilege level changes.

v1.4: Simplify and incorporate various suggestions from Jon Masters
 - Export a single x86_spec_ctrl_base value with initial bits

v2: Rip out the c_fix_cpu.
 Depend on synthetic CPU flag
v3: Move the generic_identify to be done _after_ we figure out whether
  we can do the mitigation.
v4: s/MDD/RDS/
   s/Memory Disambiguation Disable/Reduced Data Speculation/
   Tweak the various 'disable'/'enabled' wording now that it is called RDS.
   Set the x86_spec_ctrl with SPEC_CTRL_RDS if RDS is detected
   Fixup x86_set_spec_ctrl to deal with two Bitfields.
v5: s/X86_FEATURE_DISABLE_SSB/X86_FEATURE_SPEC_STORE_BYPASS_DISABLE/
   Also check MSR_IA32_ARCH_CAPABILITIES for Bit(4)
   Add documentation on what those three flags mean
   Add docs on why we set x86_spec_ctrl only on Intel
   Add extra check in ssb_parse_cmdline that RDS is available
   In init_intel drop the check for RDS as the X86_FEATURE_SPEC_STORE_BYPASS_DISABLE
    is implicitly set only if RDS has been set in ssb_parse_cmdline.
v6: s/spec_store_bypass_select_mitigation/ssb_select_mitigation/
   Added Reviewed-by
   Change the order of calling identify_boot_cpu so that on AMD the LFENCE
   serialization check can be done.
   Expand the commit message to include the identify_boot_cpu order.
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/include/asm/msr-index.h   |  2 ++
 arch/x86/kernel/cpu/bugs.c         | 49 ++++++++++++++++++++++++++++----------
 arch/x86/kernel/cpu/common.c       |  9 +++----
 arch/x86/kernel/cpu/intel.c        |  6 +++++
 5 files changed, 51 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index fc5a8378652e..914391814ba7 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -334,6 +334,7 @@
 #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
 #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+#define X86_FEATURE_RDS			(18*32+31) /* Reduced Data Speculation */
 
 /*
  * BUG word(s)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 53d5b1b9255e..4c6b8f354119 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -42,6 +42,7 @@
 #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
 #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
 #define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
+#define SPEC_CTRL_RDS			(1 << 2)   /* Reduced Data Speculation */
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */
@@ -68,6 +69,7 @@
 #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
 #define ARCH_CAP_RDCL_NO		(1 << 0)   /* Not susceptible to Meltdown */
 #define ARCH_CAP_IBRS_ALL		(1 << 1)   /* Enhanced IBRS support */
+#define ARCH_CAP_RDS_NO			(1 << 4)   /* Not susceptible to speculative store bypass */
 
 #define MSR_IA32_BBL_CR_CTL		0x00000119
 #define MSR_IA32_BBL_CR_CTL3		0x0000011e
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 398c482c6fc3..0cda27398a73 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -37,8 +37,6 @@ static u64 __read_mostly x86_spec_ctrl_base;
 
 void __init check_bugs(void)
 {
-	identify_boot_cpu();
-
 	/*
 	 * Read the SPEC_CTRL MSR to account for reserved bits which may have
 	 * unknown values.
@@ -46,20 +44,27 @@ void __init check_bugs(void)
 	if (boot_cpu_has(X86_FEATURE_IBRS))
 		rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 
-	if (!IS_ENABLED(CONFIG_SMP)) {
-		pr_info("CPU: ");
-		print_cpu_info(&boot_cpu_data);
-	}
-
-	/* Select the proper spectre mitigation before patching alternatives */
-	spectre_v2_select_mitigation();
-
 	/*
 	 * Select proper mitigation for any exposure to the Speculative Store
 	 * Bypass vulnerability.
 	 */
 	ssb_select_mitigation();
 
+	/*
+	 * Need to do this _after_ ssb_select_mitigation to figure if any
+	 * mitigation should be latched.
+	 */
+	identify_boot_cpu();
+
+	/* Select the proper spectre mitigation before patching alternatives */
+	spectre_v2_select_mitigation();
+
+	if (!IS_ENABLED(CONFIG_SMP)) {
+		pr_info("CPU: ");
+		print_cpu_info(&boot_cpu_data);
+	}
+
+
 #ifdef CONFIG_X86_32
 	/*
 	 * Check whether we are able to run this kernel safely on SMP.
@@ -117,7 +122,7 @@ static enum spectre_v2_mitigation spectre_v2_enabled = SPECTRE_V2_NONE;
 
 void x86_set_spec_ctrl(u64 val)
 {
-	if (val & ~(SPEC_CTRL_IBRS))
+	if (val & ~(SPEC_CTRL_IBRS | SPEC_CTRL_RDS))
 		WARN_ONCE(1, "SPEC_CTRL MSR value 0x%16llx is unknown.\n", val);
 	else
 		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | val);
@@ -399,6 +404,13 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
 	if (!boot_cpu_has(X86_BUG_SPEC_STORE_BYPASS))
 		return SPEC_STORE_BYPASS_CMD_NONE;
 
+	/*
+	 * On Intel it is exposed in CPUID as Reduced Data Speculation
+	 * bit.
+	 */
+	if (!boot_cpu_has(X86_FEATURE_RDS))
+		return SPEC_STORE_BYPASS_CMD_NONE;
+
 	if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable"))
 		return SPEC_STORE_BYPASS_CMD_NONE;
 	else {
@@ -446,8 +458,21 @@ static void __init ssb_select_mitigation(void)
 	ssb_mode = mode;
 	pr_info("%s\n", ssb_strings[mode]);
 
-	if (mode != SPEC_STORE_BYPASS_NONE)
+	/*
+	 * We have three CPU feature flags that are in play here:
+	 *  - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
+	 *  - X86_FEATURE_RDS - CPU is able to turn off speculative store bypass
+	 *  - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE - engage the mitigation
+	 */
+	if (mode != SPEC_STORE_BYPASS_NONE) {
 		setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
+		/*
+		 * Intel uses the SPEC CTRL MSR Bit(2) for this.
+		 */
+		if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+			x86_spec_ctrl_base |= SPEC_CTRL_RDS;
+
+	}
 }
 
 #undef pr_fmt
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 581eb0440aee..c418d205d3c9 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -944,7 +944,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 {
 	u64 ia32_cap = 0;
 
-	if (!x86_match_cpu(cpu_no_spec_store_bypass))
+	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
+		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+
+	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
+	   !(ia32_cap & ARCH_CAP_RDS_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (x86_match_cpu(cpu_no_speculation))
@@ -956,9 +960,6 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	if (x86_match_cpu(cpu_no_meltdown))
 		return;
 
-	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
-
 	/* Rogue Data Cache Load? No! */
 	if (ia32_cap & ARCH_CAP_RDCL_NO)
 		return;
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index b9693b80fc21..8d6d6f4071f7 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -17,6 +17,7 @@
 #include <asm/cpu.h>
 #include <asm/intel-family.h>
 #include <asm/microcode_intel.h>
+#include <asm/nospec-branch.h>
 #include <asm/hwcap2.h>
 #include <asm/elf.h>
 
@@ -189,6 +190,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
 		setup_clear_cpu_cap(X86_FEATURE_STIBP);
 		setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL);
 		setup_clear_cpu_cap(X86_FEATURE_INTEL_STIBP);
+		setup_clear_cpu_cap(X86_FEATURE_RDS);
 	}
 
 	/*
@@ -769,8 +771,12 @@ static void init_intel(struct cpuinfo_x86 *c)
 	init_intel_energy_perf(c);
 
 	init_intel_misc_features(c);
+
+	if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
+		x86_set_spec_ctrl(SPEC_CTRL_RDS);
 }
 
+
 #ifdef CONFIG_X86_32
 static unsigned int intel_size_cache(struct cpuinfo_x86 *c, unsigned int size)
 {
-- 
2.14.3


* Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-26 23:48 [MODERATED] [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7 konrad.wilk
@ 2018-04-28 19:38 ` Thomas Gleixner
  2018-04-29  2:13   ` [MODERATED] " Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 11+ messages in thread
From: Thomas Gleixner @ 2018-04-28 19:38 UTC (permalink / raw)
  To: speck

On Thu, 26 Apr 2018, speck for konrad.wilk_at_oracle.com wrote:
>  void __init check_bugs(void)
>  {
> -	identify_boot_cpu();
> -

I know that you did this for AMD, but it does not make it less wrong.  The
right thing to do is to store the MSR_LS_CTR bit somewhere during the
identification and then use it here. That keeps the fiddling with these
MSRs in this code and not in some weird latch mode buried in the CPU init
code.
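
Purely as an illustration of that shape (the names here are illustrative):

	/* During AMD identification, just record which bit to use: */
	x86_amd_ls_cfg_rds_mask = 1ULL << bit;	/* 54/33/10 per family */

	/* Then here in bugs.c, once the SSB mitigation is selected: */
	if (ssb_mode != SPEC_STORE_BYPASS_NONE &&
	    boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
		u64 val;

		rdmsrl(MSR_AMD64_LS_CFG, val);
		wrmsrl(MSR_AMD64_LS_CFG, val | x86_amd_ls_cfg_rds_mask);
	}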

Thanks,

	tglx


* [MODERATED] Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-28 19:38 ` Thomas Gleixner
@ 2018-04-29  2:13   ` Konrad Rzeszutek Wilk
  2018-04-29  7:22     ` Thomas Gleixner
  0 siblings, 1 reply; 11+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-04-29  2:13 UTC (permalink / raw)
  To: speck

On Sat, Apr 28, 2018 at 09:38:29PM +0200, speck for Thomas Gleixner wrote:
> On Thu, 26 Apr 2018, speck for konrad.wilk_at_oracle.com wrote:
> >  void __init check_bugs(void)
> >  {
> > -	identify_boot_cpu();
> > -
> 
> I know that you did this for AMD, but it does not make it less wrong.  The
> right thing to do is to store the MSR_LS_CTR bit somewhere during the
> identification and then use it here. That keeps the fiddling with these
> MSRs in this code and not in some weird latch mode burried in the CPU init
> code.

Please keep in mind that the AP processors still need to engage the latch
mode for the CPU - and they don't venture into the 'check_bugs'
code.  Which means "some weird latch mode buried in the CPU init code"
still has to happen.

It could be something as simple as this for BSP:

 43 void __init check_bugs(void)
 44 {
 45         identify_boot_cpu();
 46 
 47         if (!IS_ENABLED(CONFIG_SMP)) {
 48                 pr_info("CPU: ");
 49                 print_cpu_info(&boot_cpu_data);
 50         }
 51 
 52         /*
 53          * Read the SPEC_CTRL MSR to account for reserved bits which may have
 54          * unknown values.
 55          */
 56         if (boot_cpu_has(X86_FEATURE_IBRS))
 57                 rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 58 
 59         /*
 60          * Select proper mitigation for any exposure to the Speculative Store
 61          * Bypass vulnerability.
 62          */
 63         ssb_select_mitigation();
 64 
 65         if (boot_cpu_has(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE)) {
 66                 if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
 67                         x86_set_spec_ctrl(SPEC_CTRL_RDS);
 68                 } else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
 69                         u64 val;
 70 
 71                         rdmsrl(MSR_AMD64_LS_CFG, val);
 72                         val |= (1ULL << x86_msr_ls_ctr_bit);
 73                         wrmsrl(MSR_AMD64_LS_CFG, val);
 74                 }
 75         }

[I actually would likely move this to some function called
 ssb_latch_mitigation_boot_cpu() or ssb_latch_mitigation()]

 76 
 77         /* Select the proper spectre mitigation before patching alternatives */
 78         spectre_v2_select_mitigation();


For the AP processors:

in init_amd:

     if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
               msr_set_bit(MSR_AMD64_LS_CFG, x86_msr_ls_ctr_bit);

for the AP CPUs this is needed in init_intel:

 775         if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
 776                 x86_set_spec_ctrl(SPEC_CTRL_RDS);


And to figure out which bits to fiddle with, for AMD we would do this
in early_init_amd:

 678         if (c->x86 >= 0x15 && c->x86 <= 0x17) {
 679                 set_cpu_cap(c, X86_FEATURE_RDS);
 680                 switch (c->x86) {
 681                 case 0x15: x86_msr_ls_ctr_bit = 54; break;
 682                 case 0x16: x86_msr_ls_ctr_bit = 33; break;
 683                 case 0x17: x86_msr_ls_ctr_bit = 10; break;
 684                 }
 685         }


For Intel we already identify X86_FEATURE_RDS thanks to reading
CPUID.7.EDX during early init in get_cpu_cap.

That ties it all together.

Taking this one step further - the init_amd or init_intel could
call the ssb_latch_mitigation_boot_cpu() function (which now should be
called ssb_latch_mitigation) - and that would put all the bit/MSR
fiddling code in one location (bugs.c).
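
Something along these lines (hand-waving, untested; called from
init_intel/init_amd with the names above):

     void ssb_latch_mitigation(struct cpuinfo_x86 *c)
     {
             if (!cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
                     return;

             if (c->x86_vendor == X86_VENDOR_INTEL)
                     /* Intel: SPEC_CTRL MSR Bit(2). */
                     x86_set_spec_ctrl(SPEC_CTRL_RDS);
             else if (c->x86_vendor == X86_VENDOR_AMD)
                     /* AMD: per-family bit in MSR_AMD64_LS_CFG. */
                     msr_set_bit(MSR_AMD64_LS_CFG, x86_msr_ls_ctr_bit);
     }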

I think this is what you had in mind. I will do those changes tomorrow
night and try them out, unless you had a different way in mind?


* Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-29  2:13   ` [MODERATED] " Konrad Rzeszutek Wilk
@ 2018-04-29  7:22     ` Thomas Gleixner
  2018-04-29 12:42       ` [MODERATED] " Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 11+ messages in thread
From: Thomas Gleixner @ 2018-04-29  7:22 UTC (permalink / raw)
  To: speck

On Sat, 28 Apr 2018, speck for Konrad Rzeszutek Wilk wrote:

> On Sat, Apr 28, 2018 at 09:38:29PM +0200, speck for Thomas Gleixner wrote:
> > On Thu, 26 Apr 2018, speck for konrad.wilk_at_oracle.com wrote:
> > >  void __init check_bugs(void)
> > >  {
> > > -	identify_boot_cpu();
> > > -
> > 
> > I know that you did this for AMD, but it does not make it less wrong.  The
> > right thing to do is to store the MSR_LS_CTR bit somewhere during the
> > identification and then use it here. That keeps the fiddling with these
> > MSRs in this code and not in some weird latch mode burried in the CPU init
> > code.
> 
> Please keep in mind that the AP processors still need to engage the latch
> mode for the CPU - and they don't don't venture in the 'check_bugs'
> code.  Which means "some weird latch mode burried in the CPU init code"
> still has to happen.

Right.

> For the AP processors:
> 
> in init_amd:
> 
>      if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
>                msr_set_bit(MSR_AMD64_LS_CFG, x86_msr_ls_ctr_bit);
> 
> for the AP CPUs this is needed in init_intel:
> 
>  775         if (cpu_has(c, X86_FEATURE_SPEC_STORE_BYPASS_DISABLE))
>  776                 x86_set_spec_ctrl(SPEC_CTRL_RDS);

That's what I want to avoid for various reasons.

> 
> And to figure out which bits to fiddle, for AMD we would do this in
> in early_init_amd:
> 
>  678         if (c->x86 >= 0x15 && c->x86 <= 0x17) {
>  679                 set_cpu_cap(c, X86_FEATURE_RDS);
>  680                 switch (c->x86) {
>  681                 case 0x15: x86_msr_ls_ctr_bit = 54; break;
>  682                 case 0x16: x86_msr_ls_ctr_bit = 33; break;
>  683                 case 0x17: x86_msr_ls_ctr_bit = 10; break;
>  684                 }
>  685         }

Fun. I have exactly the same code in my variant, just with the difference
that it has x86_ams_ls_ctr_mask = 1ULL << N


> Taking this one step further - the init_amd or init_intel could
> call the ssb_latch_mitigation_boot_cpu() function (which now should be
> called ssb_latch_mitigation) - and that would put all the bit/MSR
> fiddling code in one location (bugs.c).
> 
> I think this is what you had in mind. I will do those changes tomorrow
> night and try them out, unless you had in mind a different way?.

Very close. I just integrate it with a sane version of the prctl stuff and
I should have something later today. Untested obviously because I have no
access to the magic ucode ...

Thanks,

	tglx


* [MODERATED] Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-29  7:22     ` Thomas Gleixner
@ 2018-04-29 12:42       ` Konrad Rzeszutek Wilk
  2018-04-29 16:19         ` Jon Masters
  0 siblings, 1 reply; 11+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-04-29 12:42 UTC (permalink / raw)
  To: speck

..giant snip..
> > Taking this one step further - the init_amd or init_intel could
> > call the ssb_latch_mitigation_boot_cpu() function (which now should be
> > called ssb_latch_mitigation) - and that would put all the bit/MSR
> > fiddling code in one location (bugs.c).
> > 
> > I think this is what you had in mind. I will do those changes tomorrow
> > night and try them out, unless you had in mind a different way?.
> 
> Very close. I just integrate it with a sane version of the prctl stuff and
> I should have something later today. Untested obviously because I have no
> access to the magic ucode ...

I would be more than happy to test it and do the fixes up (just in case
some gremlin came by) tonight on the Intel box. Please send a tarball.

Or alternatively if you would like to send me your public SSH key
I have a Skylake monster beast in the cloud (the SSH key would be used
to SSH into the machine and to access the serial console).

> 
> Thanks,
> 
> 	tglx


* [MODERATED] Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-29 12:42       ` [MODERATED] " Konrad Rzeszutek Wilk
@ 2018-04-29 16:19         ` Jon Masters
  2018-04-29 16:57           ` Thomas Gleixner
  0 siblings, 1 reply; 11+ messages in thread
From: Jon Masters @ 2018-04-29 16:19 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 1191 bytes --]

On 04/29/2018 08:42 AM, speck for Konrad Rzeszutek Wilk wrote:
> ..giant snip..
>>> Taking this one step further - the init_amd or init_intel could
>>> call the ssb_latch_mitigation_boot_cpu() function (which now should be
>>> called ssb_latch_mitigation) - and that would put all the bit/MSR
>>> fiddling code in one location (bugs.c).
>>>
>>> I think this is what you had in mind. I will do those changes tomorrow
>>> night and try them out, unless you had in mind a different way?.
>>
>> Very close. I just integrate it with a sane version of the prctl stuff and
>> I should have something later today. Untested obviously because I have no
>> access to the magic ucode ...
> 
> I would be more than happy to test it and do the fixes up (just in case
> some gremlin came by) tonight on the Intel box. Please send a tarball.
> 
> Or alternatively if you would like to send me your public SSH key
> I have an Skylake monster beast in the cloud (the SSH key would be used
> for SSH in the machine and in the serial console).

Also happy to test here. Let us know when you're ready for that tglx.

Jon.

-- 
Computer Architect | Sent from my Fedora powered laptop



* Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-29 16:19         ` Jon Masters
@ 2018-04-29 16:57           ` Thomas Gleixner
  2018-04-29 17:05             ` [MODERATED] " Linus Torvalds
                               ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Thomas Gleixner @ 2018-04-29 16:57 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 937 bytes --]

On Sun, 29 Apr 2018, speck for Jon Masters wrote:
> On 04/29/2018 08:42 AM, speck for Konrad Rzeszutek Wilk wrote:
> >> Very close. I just integrate it with a sane version of the prctl stuff and
> >> I should have something later today. Untested obviously because I have no
> >> access to the magic ucode ...
> > 
> > I would be more than happy to test it and do the fixes up (just in case
> > some gremlin came by) tonight on the Intel box. Please send a tarball.
> > 
> > Or alternatively if you would like to send me your public SSH key
> > I have an Skylake monster beast in the cloud (the SSH key would be used
> > for SSH in the machine and in the serial console).
> 
> Also happy to test here. Let us know when you're ready for that tglx.

Here you go. It boots in a VM and on an AMD Zen, but I did not check for
much more yet.

I'm off for an hour to walk the dogs and then post the patches properly on
the list.

Thanks,

	tglx


[-- Attachment #2: Type: application/octet-stream, Size: 16842 bytes --]


* [MODERATED] Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-29 16:57           ` Thomas Gleixner
@ 2018-04-29 17:05             ` Linus Torvalds
  2018-04-29 18:08               ` Thomas Gleixner
  2018-04-29 18:09             ` Thomas Gleixner
  2018-04-29 18:29             ` [MODERATED] " Jon Masters
  2 siblings, 1 reply; 11+ messages in thread
From: Linus Torvalds @ 2018-04-29 17:05 UTC (permalink / raw)
  To: speck



On Sun, 29 Apr 2018, speck for Thomas Gleixner wrote:
> 
> I'm off for an hour to walk the dogs and then post the patches properly on
> the list.

Side note: "git bundle" is actually very good for distributing a series of 
patches if you use git. Much easier to generate, but also much easier to 
apply and test (you can just "git fetch/pull" a bundle file directly).

It's not used very often, because obviously it's easier to just make a git 
repository available directly, but for things like encrypted mailing lists 
it's actually quite convenient.

           Linus


* Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-29 17:05             ` [MODERATED] " Linus Torvalds
@ 2018-04-29 18:08               ` Thomas Gleixner
  0 siblings, 0 replies; 11+ messages in thread
From: Thomas Gleixner @ 2018-04-29 18:08 UTC (permalink / raw)
  To: speck

On Sun, 29 Apr 2018, speck for Linus Torvalds wrote:
> On Sun, 29 Apr 2018, speck for Thomas Gleixner wrote:
> > 
> > I'm off for an hour to walk the dogs and then post the patches properly on
> > the list.
> 
> Side note: "git bundle" is actually very good for distributing a series of 
> patches if you use git. Much easier to generate, but also much easier to 
> apply and test (you can just "git fetch/pull" a bundle file directly).
> 
> It's not used very often, because obviously it's easier to just make a git 
> repository available directly, but for things like encrypted mailing lists 
> it's actually quite convenient.

Did not think about that, but then I'm one of those old school quilt dudes...

Thanks,

	tglx


* Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-29 16:57           ` Thomas Gleixner
  2018-04-29 17:05             ` [MODERATED] " Linus Torvalds
@ 2018-04-29 18:09             ` Thomas Gleixner
  2018-04-29 18:29             ` [MODERATED] " Jon Masters
  2 siblings, 0 replies; 11+ messages in thread
From: Thomas Gleixner @ 2018-04-29 18:09 UTC (permalink / raw)
  To: speck

On Sun, 29 Apr 2018, speck for Thomas Gleixner wrote:
> On Sun, 29 Apr 2018, speck for Jon Masters wrote:
> > On 04/29/2018 08:42 AM, speck for Konrad Rzeszutek Wilk wrote:
> > >> Very close. I just integrate it with a sane version of the prctl stuff and
> > >> I should have something later today. Untested obviously because I have no
> > >> access to the magic ucode ...
> > > 
> > > I would be more than happy to test it and do the fixes up (just in case
> > > some gremlin came by) tonight on the Intel box. Please send a tarball.
> > > 
> > > Or alternatively if you would like to send me your public SSH key
> > > I have an Skylake monster beast in the cloud (the SSH key would be used
> > > for SSH in the machine and in the serial console).
> > 
> > Also happy to test here. Let us know when you're ready for that tglx.
> 
> Here you go. It boots in a VM and on a AMD Zen, but I did not check for
> much more yet.
> 
> I'm off for an hour to walk the dogs and then post the patches properly on
> the list.

Here's a delta fix. It occurred to me while walking and trying to get my
brain free of that stuff.

Thanks,

	tglx
8<-----------------
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -180,7 +180,7 @@ static void x86_amd_rds_enable(void)
 {
 	u64 msrval = x86_amd_ls_cfg_base | x86_amd_ls_cfg_rds_mask;
 
-	if (boot_cpu_has(X86_FEATURE_RDS))
+	if (boot_cpu_has(X86_FEATURE_AMD_RDS))
 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
 }
 


* [MODERATED] Re: [PATCH v6 07/11] [PATCH v6 07/10] Linux Patch #7
  2018-04-29 16:57           ` Thomas Gleixner
  2018-04-29 17:05             ` [MODERATED] " Linus Torvalds
  2018-04-29 18:09             ` Thomas Gleixner
@ 2018-04-29 18:29             ` Jon Masters
  2 siblings, 0 replies; 11+ messages in thread
From: Jon Masters @ 2018-04-29 18:29 UTC (permalink / raw)
  To: speck

[-- Attachment #1: Type: text/plain, Size: 1174 bytes --]

On 04/29/2018 12:57 PM, speck for Thomas Gleixner wrote:
> On Sun, 29 Apr 2018, speck for Jon Masters wrote:
>> On 04/29/2018 08:42 AM, speck for Konrad Rzeszutek Wilk wrote:
>>>> Very close. I just integrate it with a sane version of the prctl stuff and
>>>> I should have something later today. Untested obviously because I have no
>>>> access to the magic ucode ...
>>>
>>> I would be more than happy to test it and do the fixes up (just in case
>>> some gremlin came by) tonight on the Intel box. Please send a tarball.
>>>
>>> Or alternatively if you would like to send me your public SSH key
>>> I have an Skylake monster beast in the cloud (the SSH key would be used
>>> for SSH in the machine and in the serial console).
>>
>> Also happy to test here. Let us know when you're ready for that tglx.
> 
> Here you go. It boots in a VM and on a AMD Zen, but I did not check for
> much more yet.
> 
> I'm off for an hour to walk the dogs and then post the patches properly on
> the list.

Headed to the gym and a walk, but will test on Coffeelake with ucode this PM
and AMD EPYC.

Jon.

-- 
Computer Architect | Sent from my Fedora powered laptop



