kvm.vger.kernel.org archive mirror
* [PATCH 4.14 069/146] x86/bugs: Add AMDs variant of SSB_NO
       [not found] <20181204103726.750894136@linuxfoundation.org>
@ 2018-12-04 10:49 ` Greg Kroah-Hartman
  2018-12-04 10:49 ` [PATCH 4.14 070/146] x86/bugs: Add AMDs SPEC_CTRL MSR usage Greg Kroah-Hartman
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Greg Kroah-Hartman @ 2018-12-04 10:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Konrad Rzeszutek Wilk,
	Thomas Gleixner, Tom Lendacky, Janakarajan Natarajan, kvm,
	andrew.cooper3, Andy Lutomirski, H. Peter Anvin, Borislav Petkov,
	David Woodhouse

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 24809860012e0130fbafe536709e08a22b3e959e upstream

The AMD document outlining the SSBD handling
124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
mentions that CPUID 8000_0008.EBX[26], when set, indicates that
speculative store bypass disable is no longer needed.

A copy of this document is available at:
    https://bugzilla.kernel.org/show_bug.cgi?id=199889
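
A minimal userspace sketch (not part of the patch; it assumes a
GCC/Clang x86 build using the <cpuid.h> intrinsic, with the bit number
taken from the commit above) that reads the same CPUID bit could look
like this:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0x80000008 not supported");
		return 1;
	}

	/* EBX[26]: speculative store bypass is fixed in hardware */
	if (ebx & (1u << 26))
		puts("AMD_SSB_NO set: SSBD mitigation not needed");
	else
		puts("AMD_SSB_NO clear: X86_BUG_SPEC_STORE_BYPASS may apply");

	return 0;
}

The kernel side of the same check is the cpu_has(c,
X86_FEATURE_AMD_SSB_NO) test added in the common.c hunk below.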

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: kvm@vger.kernel.org
Cc: andrew.cooper3@citrix.com
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lkml.kernel.org/r/20180601145921.9500-2-konrad.wilk@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/cpufeatures.h |    1 +
 arch/x86/kernel/cpu/common.c       |    3 ++-
 arch/x86/kvm/cpuid.c               |    2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -285,6 +285,7 @@
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
+#define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM		(14*32+ 0) /* Digital Thermal Sensor */
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -958,7 +958,8 @@ static void __init cpu_set_bug_bits(stru
 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
 
 	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
-	   !(ia32_cap & ARCH_CAP_SSB_NO))
+	   !(ia32_cap & ARCH_CAP_SSB_NO) &&
+	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
 	if (x86_match_cpu(cpu_no_speculation))
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -367,7 +367,7 @@ static inline int __do_cpuid_ent(struct
 
 	/* cpuid 0x80000008.ebx */
 	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
-		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD);
+		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD) | F(AMD_SSB_NO);
 
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =


* [PATCH 4.14 070/146] x86/bugs: Add AMDs SPEC_CTRL MSR usage
       [not found] <20181204103726.750894136@linuxfoundation.org>
  2018-12-04 10:49 ` [PATCH 4.14 069/146] x86/bugs: Add AMDs variant of SSB_NO Greg Kroah-Hartman
@ 2018-12-04 10:49 ` Greg Kroah-Hartman
  2018-12-04 10:49 ` [PATCH 4.14 071/146] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features Greg Kroah-Hartman
  2018-12-04 10:50 ` [PATCH 4.14 121/146] x86/fpu: Disable bottom halves while loading FPU registers Greg Kroah-Hartman
  3 siblings, 0 replies; 10+ messages in thread
From: Greg Kroah-Hartman @ 2018-12-04 10:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Konrad Rzeszutek Wilk,
	Thomas Gleixner, Tom Lendacky, Janakarajan Natarajan, kvm,
	KarimAllah Ahmed, andrew.cooper3, Joerg Roedel,
	Radim Krčmář,
	Andy Lutomirski, H. Peter Anvin, Paolo Bonzini, Borislav Petkov,
	David Woodhouse, Kees Cook

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 6ac2f49edb1ef5446089c7c660017732886d62d6 upstream

The AMD document outlining the SSBD handling
124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf
mentions that if CPUID 8000_0008.EBX[24] is set we should be using
the SPEC_CTRL MSR (0x48) over the VIRT SPEC_CTRL MSR (0xC001_011f)
for speculative store bypass disable.

This in effect means we should clear the X86_FEATURE_VIRT_SSBD
flag so that the SPEC_CTRL MSR is preferred.

See the document titled:
   124441_AMD64_SpeculativeStoreBypassDisable_Whitepaper_final.pdf

A copy of this document is available at
   https://bugzilla.kernel.org/show_bug.cgi?id=199889
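
For illustration only (again a userspace sketch assuming the
GCC/Clang <cpuid.h> intrinsic, not kernel code; the MSR numbers are
the ones quoted above), the preference can be expressed as:

#include <cpuid.h>
#include <stdio.h>

#define MSR_IA32_SPEC_CTRL		0x00000048u
#define MSR_AMD64_VIRT_SPEC_CTRL	0xc001011fu

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
		return 1;

	if (ebx & (1u << 24))		/* AMD_SSBD */
		printf("SSBD via SPEC_CTRL MSR 0x%x\n", MSR_IA32_SPEC_CTRL);
	else if (ebx & (1u << 25))	/* VIRT_SSBD */
		printf("SSBD via VIRT_SPEC_CTRL MSR 0x%x\n",
		       MSR_AMD64_VIRT_SPEC_CTRL);
	else
		printf("no CPUID-enumerated SSBD control\n");

	return 0;
}

The common.c hunk below applies the same preference in the kernel by
clearing X86_FEATURE_VIRT_SSBD once X86_FEATURE_AMD_SSBD is seen.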

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: kvm@vger.kernel.org
Cc: KarimAllah Ahmed <karahmed@amazon.de>
Cc: andrew.cooper3@citrix.com
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20180601145921.9500-3-konrad.wilk@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/cpufeatures.h |    1 +
 arch/x86/kernel/cpu/bugs.c         |   12 +++++++-----
 arch/x86/kernel/cpu/common.c       |    6 ++++++
 arch/x86/kvm/cpuid.c               |   10 ++++++++--
 arch/x86/kvm/svm.c                 |    8 +++++---
 5 files changed, 27 insertions(+), 10 deletions(-)

--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -284,6 +284,7 @@
 #define X86_FEATURE_AMD_IBPB		(13*32+12) /* "" Indirect Branch Prediction Barrier */
 #define X86_FEATURE_AMD_IBRS		(13*32+14) /* "" Indirect Branch Restricted Speculation */
 #define X86_FEATURE_AMD_STIBP		(13*32+15) /* "" Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_AMD_SSBD		(13*32+24) /* "" Speculative Store Bypass Disable */
 #define X86_FEATURE_VIRT_SSBD		(13*32+25) /* Virtualized Speculative Store Bypass Disable */
 #define X86_FEATURE_AMD_SSB_NO		(13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
 
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -532,18 +532,20 @@ static enum ssb_mitigation __init __ssb_
 	if (mode == SPEC_STORE_BYPASS_DISABLE) {
 		setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
 		/*
-		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD uses
-		 * a completely different MSR and bit dependent on family.
+		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
+		 * use a completely different MSR and bit dependent on family.
 		 */
 		switch (boot_cpu_data.x86_vendor) {
 		case X86_VENDOR_INTEL:
+		case X86_VENDOR_AMD:
+			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
+				x86_amd_ssb_disable();
+				break;
+			}
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
 			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 			break;
-		case X86_VENDOR_AMD:
-			x86_amd_ssb_disable();
-			break;
 		}
 	}
 
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -760,6 +760,12 @@ static void init_speculation_control(str
 		set_cpu_cap(c, X86_FEATURE_STIBP);
 		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
 	}
+
+	if (cpu_has(c, X86_FEATURE_AMD_SSBD)) {
+		set_cpu_cap(c, X86_FEATURE_SSBD);
+		set_cpu_cap(c, X86_FEATURE_MSR_SPEC_CTRL);
+		clear_cpu_cap(c, X86_FEATURE_VIRT_SSBD);
+	}
 }
 
 void get_cpu_cap(struct cpuinfo_x86 *c)
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -367,7 +367,8 @@ static inline int __do_cpuid_ent(struct
 
 	/* cpuid 0x80000008.ebx */
 	const u32 kvm_cpuid_8000_0008_ebx_x86_features =
-		F(AMD_IBPB) | F(AMD_IBRS) | F(VIRT_SSBD) | F(AMD_SSB_NO);
+		F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_SSBD) | F(VIRT_SSBD) |
+		F(AMD_SSB_NO);
 
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =
@@ -649,7 +650,12 @@ static inline int __do_cpuid_ent(struct
 			entry->ebx |= F(VIRT_SSBD);
 		entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
 		cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
-		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+		/*
+		 * The preference is to use SPEC CTRL MSR instead of the
+		 * VIRT_SPEC MSR.
+		 */
+		if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) &&
+		    !boot_cpu_has(X86_FEATURE_AMD_SSBD))
 			entry->ebx |= F(VIRT_SSBD);
 		break;
 	}
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3644,7 +3644,8 @@ static int svm_get_msr(struct kvm_vcpu *
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr_info->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
 			return 1;
 
 		msr_info->data = svm->spec_ctrl;
@@ -3749,11 +3750,12 @@ static int svm_set_msr(struct kvm_vcpu *
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr->host_initiated &&
-		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS))
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBRS) &&
+		    !guest_cpuid_has(vcpu, X86_FEATURE_AMD_SSBD))
 			return 1;
 
 		/* The STIBP bit doesn't fault even if it's not advertised */
-		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
+		if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD))
 			return 1;
 
 		svm->spec_ctrl = data;


* [PATCH 4.14 071/146] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features
       [not found] <20181204103726.750894136@linuxfoundation.org>
  2018-12-04 10:49 ` [PATCH 4.14 069/146] x86/bugs: Add AMDs variant of SSB_NO Greg Kroah-Hartman
  2018-12-04 10:49 ` [PATCH 4.14 070/146] x86/bugs: Add AMDs SPEC_CTRL MSR usage Greg Kroah-Hartman
@ 2018-12-04 10:49 ` Greg Kroah-Hartman
  2018-12-04 10:50 ` [PATCH 4.14 121/146] x86/fpu: Disable bottom halves while loading FPU registers Greg Kroah-Hartman
  3 siblings, 0 replies; 10+ messages in thread
From: Greg Kroah-Hartman @ 2018-12-04 10:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Konrad Rzeszutek Wilk,
	Thomas Gleixner, Kees Cook, kvm, KarimAllah Ahmed,
	andrew.cooper3, H. Peter Anvin, Borislav Petkov, David Woodhouse

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 108fab4b5c8f12064ef86e02cb0459992affb30f upstream

Both AMD and Intel can use the SPEC_CTRL MSR for SSBD.

However, AMD also has two other ways of doing it that do not use
the SPEC_CTRL MSR.
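
A condensed, standalone sketch of the resulting selection (an
illustration only: the helper name and flag parameters are made up
for this example, and the relative order of the two non-SPEC_CTRL
fallbacks is an assumption):

#include <stdio.h>

enum ssbd_mech { SSBD_SPEC_CTRL, SSBD_VIRT_SPEC_CTRL, SSBD_LS_CFG,
		 SSBD_NONE };

/* Pick the SSBD mechanism from the enumerated feature flags. */
static enum ssbd_mech pick_ssbd(int has_spec_ctrl, int has_virt_ssbd,
				int has_ls_cfg)
{
	if (has_spec_ctrl)	/* Intel, or AMD with 8000_0008.EBX[24] */
		return SSBD_SPEC_CTRL;
	if (has_virt_ssbd)	/* AMD under a hypervisor: VIRT_SPEC_CTRL */
		return SSBD_VIRT_SPEC_CTRL;
	if (has_ls_cfg)		/* AMD bare metal: family-specific LS_CFG MSR */
		return SSBD_LS_CFG;
	return SSBD_NONE;
}

int main(void)
{
	/* A CPU advertising only the architectural MSR takes the first path. */
	printf("%d\n", pick_ssbd(1, 0, 0) == SSBD_SPEC_CTRL);
	return 0;
}

The hunk below expresses the first branch of this choice by testing
X86_FEATURE_MSR_SPEC_CTRL instead of the CPU vendor; the non-SPEC_CTRL
paths remain behind x86_amd_ssb_disable().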

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: kvm@vger.kernel.org
Cc: KarimAllah Ahmed <karahmed@amazon.de>
Cc: andrew.cooper3@citrix.com
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lkml.kernel.org/r/20180601145921.9500-4-konrad.wilk@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kernel/cpu/bugs.c |   11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -535,17 +535,12 @@ static enum ssb_mitigation __init __ssb_
 		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
 		 * use a completely different MSR and bit dependent on family.
 		 */
-		switch (boot_cpu_data.x86_vendor) {
-		case X86_VENDOR_INTEL:
-		case X86_VENDOR_AMD:
-			if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) {
-				x86_amd_ssb_disable();
-				break;
-			}
+		if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
+			x86_amd_ssb_disable();
+		else {
 			x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
 			x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
-			break;
 		}
 	}
 


* [PATCH 4.14 121/146] x86/fpu: Disable bottom halves while loading FPU registers
       [not found] <20181204103726.750894136@linuxfoundation.org>
                   ` (2 preceding siblings ...)
  2018-12-04 10:49 ` [PATCH 4.14 071/146] x86/bugs: Switch the selection of mitigation from CPU vendor to CPU features Greg Kroah-Hartman
@ 2018-12-04 10:50 ` Greg Kroah-Hartman
  2018-12-05 16:26   ` Jari Ruusu
  3 siblings, 1 reply; 10+ messages in thread
From: Greg Kroah-Hartman @ 2018-12-04 10:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Greg Kroah-Hartman, stable, Sebastian Andrzej Siewior,
	Borislav Petkov, Ingo Molnar, Thomas Gleixner, Andy Lutomirski,
	Dave Hansen, H. Peter Anvin, Jason A. Donenfeld, kvm ML,
	Paolo Bonzini, Radim Krčmář,
	Rik van Riel, x86-ml

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

commit 68239654acafe6aad5a3c1dc7237e60accfebc03 upstream.

The sequence

  fpu->initialized = 1;		/* step A */
  preempt_disable();		/* step B */
  fpu__restore(fpu);
  preempt_enable();

in __fpu__restore_sig() is racy in regard to a context switch.

For 32bit frames, __fpu__restore_sig() prepares the FPU state within
fpu->state. To ensure that a context switch (switch_fpu_prepare() in
particular) does not modify fpu->state it uses fpu__drop() which sets
fpu->initialized to 0.

After fpu->initialized is cleared, the CPU's FPU state is not saved
to fpu->state during a context switch. The new state is loaded via
fpu__restore(): it is loaded into fpu->state from userland and verified
to be sane. fpu->initialized is then set to 1 so that fpu__initialize(),
which is part of fpu__restore(), does not overwrite the new state.

A context switch between step A and B above would save CPU's current FPU
registers to fpu->state and overwrite the newly prepared state. This
looks like a tiny race window but the Kernel Test Robot reported this
back in 2016 while we had lazy FPU support. Borislav Petkov made the
link between that report and another patch that has been posted. Since
the removal of the lazy FPU support, this race goes unnoticed because
the warning has been removed.

Disable bottom halves around the restore sequence to avoid the race. BH
needs to be disabled because BH is allowed to run (even with preemption
disabled) and might invoke kernel_fpu_begin(), e.g. when doing IPsec.
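
The window can be made visible with a small userspace analogue (an
illustration of the ordering problem only, not kernel code and not the
fix itself; build with 'cc -pthread'): a second thread stands in for
the context switch and saves stale "registers" into the state whenever
it sees the initialized flag set, so raising the flag before the
protected region starts lets the freshly prepared state be overwritten.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int initialized;	/* stands in for fpu->initialized */
static atomic_int state;	/* stands in for fpu->state */
static atomic_int stop;

/* "Context switch": save stale registers (0) whenever the flag is set. */
static void *context_switch(void *arg)
{
	(void)arg;
	while (!atomic_load(&stop))
		if (atomic_load(&initialized))
			atomic_store(&state, 0);
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, context_switch, NULL);

	for (int i = 0; i < 1000000; i++) {
		atomic_store(&state, 42);	/* newly prepared user state */
		atomic_store(&initialized, 1);	/* step A: too early */
		/* step B (the protected region) would only start here */
		if (atomic_load(&state) != 42) {
			printf("race hit at iteration %d: state overwritten\n", i);
			break;
		}
		atomic_store(&initialized, 0);
	}

	atomic_store(&stop, 1);
	pthread_join(tid, NULL);
	return 0;
}

In the kernel, local_bh_disable() additionally keeps softirqs (and thus
kernel_fpu_begin() users such as IPsec) out of that window, which
preempt_disable() alone did not.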

 [ bp: massage commit message a bit. ]

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: kvm ML <kvm@vger.kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: stable@vger.kernel.org
Cc: x86-ml <x86@kernel.org>
Link: http://lkml.kernel.org/r/20181120102635.ddv3fvavxajjlfqk@linutronix.de
Link: https://lkml.kernel.org/r/20160226074940.GA28911@pd.tnic
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/kernel/fpu/signal.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -344,10 +344,10 @@ static int __fpu__restore_sig(void __use
 			sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
 		}
 
+		local_bh_disable();
 		fpu->initialized = 1;
-		preempt_disable();
 		fpu__restore(fpu);
-		preempt_enable();
+		local_bh_enable();
 
 		return err;
 	} else {


* Re: [PATCH 4.14 121/146] x86/fpu: Disable bottom halves while loading FPU registers
  2018-12-04 10:50 ` [PATCH 4.14 121/146] x86/fpu: Disable bottom halves while loading FPU registers Greg Kroah-Hartman
@ 2018-12-05 16:26   ` Jari Ruusu
  2018-12-05 19:00     ` Borislav Petkov
  0 siblings, 1 reply; 10+ messages in thread
From: Jari Ruusu @ 2018-12-05 16:26 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: linux-kernel, stable, Sebastian Andrzej Siewior, Borislav Petkov,
	Ingo Molnar, Thomas Gleixner, Andy Lutomirski, Dave Hansen,
	H. Peter Anvin, Jason A. Donenfeld, kvm ML, Paolo Bonzini,
	Radim Krčmář,
	Rik van Riel, x86-ml

Greg Kroah-Hartman wrote:
> commit 68239654acafe6aad5a3c1dc7237e60accfebc03 upstream.
> 
> The sequence
> 
>   fpu->initialized = 1;         /* step A */
>   preempt_disable();            /* step B */
>   fpu__restore(fpu);
>   preempt_enable();
> 
> in __fpu__restore_sig() is racy in regard to a context switch.

That same race appears to be present in older kernel branches also.
The context is slightly different, so the patch for 4.14 does not
apply cleanly to older kernels. For the 4.9 branch, this edit works:

    s/fpu->initialized/fpu->fpstate_active/

--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -342,10 +342,10 @@ static int __fpu__restore_sig(void __use
 			sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
 		}
 
+		local_bh_disable();
 		fpu->fpstate_active = 1;
-		preempt_disable();
 		fpu__restore(fpu);
-		preempt_enable();
+		local_bh_enable();
 
 		return err;
 	} else {


* Re: [PATCH 4.14 121/146] x86/fpu: Disable bottom halves while loading FPU registers
  2018-12-05 16:26   ` Jari Ruusu
@ 2018-12-05 19:00     ` Borislav Petkov
  2018-12-06 10:54       ` Greg Kroah-Hartman
  0 siblings, 1 reply; 10+ messages in thread
From: Borislav Petkov @ 2018-12-05 19:00 UTC (permalink / raw)
  To: Jari Ruusu
  Cc: Greg Kroah-Hartman, linux-kernel, stable,
	Sebastian Andrzej Siewior, Ingo Molnar, Thomas Gleixner,
	Andy Lutomirski, Dave Hansen, H. Peter Anvin, Jason A. Donenfeld,
	kvm ML, Paolo Bonzini, Radim Krčmář,
	Rik van Riel, x86-ml

On Wed, Dec 05, 2018 at 06:26:24PM +0200, Jari Ruusu wrote:
> That same race appears to be present in older kernel branches also.
> The context is slightly different, so the patch for 4.14 does not
> apply cleanly to older kernels. For the 4.9 branch, this edit works:

You could take the upstream one, amend it with your change, test it and
send it to Greg - I believe he'll take the backport gladly.

:-)

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.


* Re: [PATCH 4.14 121/146] x86/fpu: Disable bottom halves while loading FPU registers
  2018-12-05 19:00     ` Borislav Petkov
@ 2018-12-06 10:54       ` Greg Kroah-Hartman
  2018-12-21 16:23         ` [PATCH v4.9 STABLE] " Sebastian Andrzej Siewior
  0 siblings, 1 reply; 10+ messages in thread
From: Greg Kroah-Hartman @ 2018-12-06 10:54 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Jari Ruusu, linux-kernel, stable, Sebastian Andrzej Siewior,
	Ingo Molnar, Thomas Gleixner, Andy Lutomirski, Dave Hansen,
	H. Peter Anvin, Jason A. Donenfeld, kvm ML, Paolo Bonzini,
	Radim Krčmář,
	Rik van Riel, x86-ml

On Wed, Dec 05, 2018 at 08:00:20PM +0100, Borislav Petkov wrote:
> On Wed, Dec 05, 2018 at 06:26:24PM +0200, Jari Ruusu wrote:
> > That same race appears to be present in older kernel branches also.
> > The context is slightly different, so the patch for 4.14 does not
> > apply cleanly to older kernels. For the 4.9 branch, this edit works:
> 
> You could take the upstream one, amend it with your change, test it and
> send it to Greg - I believe he'll take the backport gladly.
> 
> :-)

Yes, that's the easiest way for me to accept such a patch, otherwise it
gets put on the end of the very-long-queue...

thanks,

greg k-h


* [PATCH v4.9 STABLE] x86/fpu: Disable bottom halves while loading FPU registers
  2018-12-06 10:54       ` Greg Kroah-Hartman
@ 2018-12-21 16:23         ` Sebastian Andrzej Siewior
  2018-12-21 16:29           ` Greg Kroah-Hartman
  0 siblings, 1 reply; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-21 16:23 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Borislav Petkov, Jari Ruusu, linux-kernel, stable, Ingo Molnar,
	Thomas Gleixner, Andy Lutomirski, Dave Hansen, H. Peter Anvin,
	Jason A. Donenfeld, kvm ML, Paolo Bonzini, Radim Krčmář,
	Rik van Riel, x86-ml

The sequence

  fpu->initialized = 1;		/* step A */
  preempt_disable();		/* step B */
  fpu__restore(fpu);
  preempt_enable();

in __fpu__restore_sig() is racy in regard to a context switch.

For 32bit frames, __fpu__restore_sig() prepares the FPU state within
fpu->state. To ensure that a context switch (switch_fpu_prepare() in
particular) does not modify fpu->state it uses fpu__drop() which sets
fpu->initialized to 0.

After fpu->initialized is cleared, the CPU's FPU state is not saved
to fpu->state during a context switch. The new state is loaded via
fpu__restore(): it is loaded into fpu->state from userland and verified
to be sane. fpu->initialized is then set to 1 so that fpu__initialize(),
which is part of fpu__restore(), does not overwrite the new state.

A context switch between step A and B above would save CPU's current FPU
registers to fpu->state and overwrite the newly prepared state. This
looks like a tiny race window but the Kernel Test Robot reported this
back in 2016 while we had lazy FPU support. Borislav Petkov made the
link between that report and another patch that has been posted. Since
the removal of the lazy FPU support, this race goes unnoticed because
the warning has been removed.

Disable bottom halves around the restore sequence to avoid the race. BH
needs to be disabled because BH is allowed to run (even with preemption
disabled) and might invoke kernel_fpu_begin(), e.g. when doing IPsec.

 [ bp: massage commit message a bit. ]

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: kvm ML <kvm@vger.kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: stable@vger.kernel.org
Cc: x86-ml <x86@kernel.org>
Link: http://lkml.kernel.org/r/20181120102635.ddv3fvavxajjlfqk@linutronix.de
Link: https://lkml.kernel.org/r/20160226074940.GA28911@pd.tnic
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/x86/kernel/fpu/signal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index ae52ef05d0981..769831d9fd114 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -342,10 +342,10 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 			sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
 		}
 
+		local_bh_disable();
 		fpu->fpstate_active = 1;
-		preempt_disable();
 		fpu__restore(fpu);
-		preempt_enable();
+		local_bh_enable();
 
 		return err;
 	} else {
-- 
2.20.1


* Re: [PATCH v4.9 STABLE] x86/fpu: Disable bottom halves while loading FPU registers
  2018-12-21 16:23         ` [PATCH v4.9 STABLE] " Sebastian Andrzej Siewior
@ 2018-12-21 16:29           ` Greg Kroah-Hartman
  2018-12-21 16:38             ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 10+ messages in thread
From: Greg Kroah-Hartman @ 2018-12-21 16:29 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Borislav Petkov, Jari Ruusu, linux-kernel, stable, Ingo Molnar,
	Thomas Gleixner, Andy Lutomirski, Dave Hansen, H. Peter Anvin,
	Jason A. Donenfeld, kvm ML, Paolo Bonzini, Radim Krčmář,
	Rik van Riel, x86-ml

On Fri, Dec 21, 2018 at 05:23:38PM +0100, Sebastian Andrzej Siewior wrote:
> The sequence
> 
>   fpu->initialized = 1;		/* step A */
>   preempt_disable();		/* step B */
>   fpu__restore(fpu);
>   preempt_enable();
> 
> in __fpu__restore_sig() is racy in regard to a context switch.
> 
> For 32bit frames, __fpu__restore_sig() prepares the FPU state within
> fpu->state. To ensure that a context switch (switch_fpu_prepare() in
> particular) does not modify fpu->state it uses fpu__drop() which sets
> fpu->initialized to 0.
> 
> After fpu->initialized is cleared, the CPU's FPU state is not saved
> to fpu->state during a context switch. The new state is loaded via
> fpu__restore(): it is loaded into fpu->state from userland and verified
> to be sane. fpu->initialized is then set to 1 so that fpu__initialize(),
> which is part of fpu__restore(), does not overwrite the new state.
> 
> A context switch between step A and B above would save CPU's current FPU
> registers to fpu->state and overwrite the newly prepared state. This
> looks like a tiny race window but the Kernel Test Robot reported this
> back in 2016 while we had lazy FPU support. Borislav Petkov made the
> link between that report and another patch that has been posted. Since
> the removal of the lazy FPU support, this race goes unnoticed because
> the warning has been removed.
> 
> Disable bottom halves around the restore sequence to avoid the race. BH
> needs to be disabled because BH is allowed to run (even with preemption
> disabled) and might invoke kernel_fpu_begin(), e.g. when doing IPsec.
> 
>  [ bp: massage commit message a bit. ]
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Acked-by: Ingo Molnar <mingo@kernel.org>
> Acked-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
> Cc: kvm ML <kvm@vger.kernel.org>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: stable@vger.kernel.org
> Cc: x86-ml <x86@kernel.org>
> Link: http://lkml.kernel.org/r/20181120102635.ddv3fvavxajjlfqk@linutronix.de
> Link: https://lkml.kernel.org/r/20160226074940.GA28911@pd.tnic
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>  arch/x86/kernel/fpu/signal.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

What is the git commit id of this patch upstream?

thanks,

greg k-h


* Re: [PATCH v4.9 STABLE] x86/fpu: Disable bottom halves while loading FPU registers
  2018-12-21 16:29           ` Greg Kroah-Hartman
@ 2018-12-21 16:38             ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-21 16:38 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Borislav Petkov, Jari Ruusu, linux-kernel, stable, Ingo Molnar,
	Thomas Gleixner, Andy Lutomirski, Dave Hansen, H. Peter Anvin,
	Jason A. Donenfeld, kvm ML, Paolo Bonzini, Radim Krčmář,
	Rik van Riel, x86-ml

On 2018-12-21 17:29:05 [+0100], Greg Kroah-Hartman wrote:
> What is the git commit id of this patch upstream?

commit 68239654acafe6aad5a3c1dc7237e60accfebc03 upstream.

I'm sorry. I cherry picked the original commit, resolved the conflict
and forgot about the original commit id…

> thanks,
> 
> greg k-h

Sebastian

