* [PATCH v2 0/7] 5.4 backport of recent mds improvement patches
@ 2024-02-26 12:22 Nikolay Borisov
  2024-02-26 12:22 ` [PATCH v2 1/7] x86/asm: Add _ASM_RIP() macro for x86-64 (%rip) suffix Nikolay Borisov
                   ` (7 more replies)
  0 siblings, 8 replies; 12+ messages in thread
From: Nikolay Borisov @ 2024-02-26 12:22 UTC (permalink / raw)
  To: stable; +Cc: Nikolay Borisov

Here are the recently merged MDS improvement patches adapted to the latest stable tree.
I've only compile-tested them, but since I have also done similar backports for
older kernels I'm confident they work.
The main difference is in the definition of the CLEAR_CPU_BUFFERS macro: 5.4
doesn't contain the alternative relocation handling logic, so the verw
instruction is moved out of the alternative definition and the alternative
instead emits a jump that skips the verw instruction when the feature is not
set. That way the relocation is handled by the toolchain rather than the kernel.
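
For reference, the two shapes look roughly like this (the upstream form is
quoted from memory and may differ in minor details; the 5.4 form is the one
added by patch 2 below):

  /* upstream: verw lives inside the alternative, reloc fixed up by the kernel */
  .macro CLEAR_CPU_BUFFERS
	ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
  .endm

  /* 5.4 backport: verw stays outside, the alternative only NOPs out the skip jump */
  .macro CLEAR_CPU_BUFFERS
	ALTERNATIVE "jmp .Lskip_verw_\@", "", X86_FEATURE_CLEAR_CPU_BUF
	verw _ASM_RIP(mds_verw_sel)
  .Lskip_verw_\@:
  .endm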

Since I don't know if I will have time to work on the other branches, this patchset
can be used as a basis for the rest of the stable kernels. The main difference would be
which bit is used for CLEAR_CPU_BUFFERS. For kernel 6.6 the 2nd patch can be used verbatim
from upstream (unlike this modified version) since alternative relocation handling
landed in v6.5. However, even if used as-is from this patchset it's not a problem.

V2:

Added upstream commit id to individual patches.

H. Peter Anvin (Intel) (1):
  x86/asm: Add _ASM_RIP() macro for x86-64 (%rip) suffix

Pawan Gupta (5):
  x86/bugs: Add asm helpers for executing VERW
  x86/entry_64: Add VERW just before userspace transition
  x86/entry_32: Add VERW just before userspace transition
  x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key
  KVM/VMX: Move VERW closer to VMentry for MDS mitigation

Sean Christopherson (1):
  KVM/VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH

 Documentation/x86/mds.rst            | 38 ++++++++++++++++++++--------
 arch/x86/entry/Makefile              |  2 +-
 arch/x86/entry/common.c              |  2 --
 arch/x86/entry/entry.S               | 23 +++++++++++++++++
 arch/x86/entry/entry_32.S            |  3 +++
 arch/x86/entry/entry_64.S            | 10 ++++++++
 arch/x86/entry/entry_64_compat.S     |  1 +
 arch/x86/include/asm/asm.h           |  6 ++++-
 arch/x86/include/asm/cpufeatures.h   |  2 +-
 arch/x86/include/asm/irqflags.h      |  1 +
 arch/x86/include/asm/nospec-branch.h | 26 ++++++++++---------
 arch/x86/kernel/cpu/bugs.c           | 15 +++++------
 arch/x86/kernel/nmi.c                |  3 ---
 arch/x86/kvm/vmx/run_flags.h         |  7 +++--
 arch/x86/kvm/vmx/vmenter.S           |  9 ++++---
 arch/x86/kvm/vmx/vmx.c               | 12 ++++++---
 16 files changed, 111 insertions(+), 49 deletions(-)
 create mode 100644 arch/x86/entry/entry.S

--
2.34.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v2 1/7] x86/asm: Add _ASM_RIP() macro for x86-64 (%rip) suffix
  2024-02-26 12:22 [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Nikolay Borisov
@ 2024-02-26 12:22 ` Nikolay Borisov
  2024-03-12  1:33   ` Pawan Gupta
  2024-02-26 12:22 ` [PATCH v2 2/7] x86/bugs: Add asm helpers for executing VERW Nikolay Borisov
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 12+ messages in thread
From: Nikolay Borisov @ 2024-02-26 12:22 UTC (permalink / raw)
  To: stable; +Cc: H. Peter Anvin (Intel), Borislav Petkov, Nikolay Borisov

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

[ Upstream commit 0576d1ed1e153bf34b54097e0561ede382ba88b0 ]

Add a macro _ASM_RIP() to add a (%rip) suffix on 64 bits only. This is
useful for immediate memory references where one doesn't want gcc
to possibly use a register indirection as it may in the case of an "m"
constraint.
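
For illustration only (not part of this patch): with these definitions, the
later use in patch 2,

	verw _ASM_RIP(mds_verw_sel)

resolves in a .S file to a rip-relative memory operand on 64-bit, roughly
"verw mds_verw_sel(%rip)", while a 32-bit build collapses it to a plain
absolute "verw mds_verw_sel".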

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210910195910.2542662-3-hpa@zytor.com
Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
---
 arch/x86/include/asm/asm.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index cd339b88d5d4..9116ef22bc53 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -6,12 +6,13 @@
 # define __ASM_FORM(x)	x
 # define __ASM_FORM_RAW(x)     x
 # define __ASM_FORM_COMMA(x) x,
+# define __ASM_REGPFX			%
 #else
 #include <linux/stringify.h>
-
 # define __ASM_FORM(x)	" " __stringify(x) " "
 # define __ASM_FORM_RAW(x)     __stringify(x)
 # define __ASM_FORM_COMMA(x) " " __stringify(x) ","
+# define __ASM_REGPFX			%%
 #endif
 
 #ifndef __x86_64__
@@ -48,6 +49,9 @@
 #define _ASM_SI		__ASM_REG(si)
 #define _ASM_DI		__ASM_REG(di)
 
+/* Adds a (%rip) suffix on 64 bits only; for immediate memory references */
+#define _ASM_RIP(x)	__ASM_SEL_RAW(x, x (__ASM_REGPFX rip))
+
 #ifndef __x86_64__
 /* 32 bit */
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v2 2/7] x86/bugs: Add asm helpers for executing VERW
  2024-02-26 12:22 [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Nikolay Borisov
  2024-02-26 12:22 ` [PATCH v2 1/7] x86/asm: Add _ASM_RIP() macro for x86-64 (%rip) suffix Nikolay Borisov
@ 2024-02-26 12:22 ` Nikolay Borisov
  2024-02-26 12:22 ` [PATCH v2 3/7] x86/entry_64: Add VERW just before userspace transition Nikolay Borisov
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Nikolay Borisov @ 2024-02-26 12:22 UTC (permalink / raw)
  To: stable
  Cc: Pawan Gupta, Alyssa Milburn, Andrew Cooper, Peter Zijlstra,
	Dave Hansen, Nikolay Borisov

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>

[ Upstream commit baf8361e54550a48a7087b603313ad013cc13386 ]

MDS mitigation requires clearing the CPU buffers before returning to
userspace. This needs to be done late in the exit-to-user path. The current
location of VERW leaves a possibility of kernel data ending up in CPU
buffers from memory accesses done after VERW, such as:

  1. Kernel data accessed by an NMI between VERW and return-to-user can
     remain in CPU buffers since an NMI returning to the kernel does not
     execute VERW to clear CPU buffers.
  2. Alyssa reported that after VERW is executed,
     CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
     call. Memory accesses during stack scrubbing can move kernel stack
     contents into CPU buffers.
  3. When caller-saved registers are restored after a return from a
     function executing VERW, the kernel stack accesses can remain in
     CPU buffers (since they occur after VERW).

To fix this, VERW needs to be moved very late in the exit-to-user path.

In preparation for moving VERW to entry/exit asm code, create macros
that can be used in asm. Also make VERW patching depend on a new feature
flag X86_FEATURE_CLEAR_CPU_BUF.

Reported-by: Alyssa Milburn <alyssa.milburn@intel.com>
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/all/20240213-delay-verw-v8-1-a6216d83edb7%40linux.intel.com
Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
---
 arch/x86/entry/Makefile              |  2 +-
 arch/x86/entry/entry.S               | 23 +++++++++++++++++++++++
 arch/x86/include/asm/cpufeatures.h   |  2 +-
 arch/x86/include/asm/nospec-branch.h | 14 ++++++++++++++
 4 files changed, 39 insertions(+), 2 deletions(-)
 create mode 100644 arch/x86/entry/entry.S

diff --git a/arch/x86/entry/Makefile b/arch/x86/entry/Makefile
index 06fc70cf5433..b8da38e81e96 100644
--- a/arch/x86/entry/Makefile
+++ b/arch/x86/entry/Makefile
@@ -7,7 +7,7 @@ OBJECT_FILES_NON_STANDARD_entry_64_compat.o := y
 
 CFLAGS_syscall_64.o		+= $(call cc-option,-Wno-override-init,)
 CFLAGS_syscall_32.o		+= $(call cc-option,-Wno-override-init,)
-obj-y				:= entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o
+obj-y				:= entry.o entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o
 obj-y				+= common.o
 
 obj-y				+= vdso/
diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
new file mode 100644
index 000000000000..21bf05354670
--- /dev/null
+++ b/arch/x86/entry/entry.S
@@ -0,0 +1,23 @@
+#include <linux/linkage.h>
+#include <asm/export.h>
+#include <asm/segment.h>
+#include <asm/cache.h>
+
+
+/*
+ * Define the VERW operand that is disguised as entry code so that
+ * it can be referenced with KPTI enabled. This ensure VERW can be
+ * used late in exit-to-user path after page tables are switched.
+ */
+.pushsection .entry.text, "ax"
+
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_START_NOALIGN(mds_verw_sel)
+      .word __KERNEL_DS
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_END(mds_verw_sel)
+/* For KVM */
+EXPORT_SYMBOL_GPL(mds_verw_sel);
+
+.popsection
+
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f42286e9a2b1..6d024db8384c 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -96,7 +96,7 @@
 #define X86_FEATURE_SYSCALL32		( 3*32+14) /* "" syscall in IA32 userspace */
 #define X86_FEATURE_SYSENTER32		( 3*32+15) /* "" sysenter in IA32 userspace */
 #define X86_FEATURE_REP_GOOD		( 3*32+16) /* REP microcode works well */
-/* FREE!                                ( 3*32+17) */
+#define X86_FEATURE_CLEAR_CPU_BUF	( 3*32+17) /* "" Clear CPU buffers using VERW */
 #define X86_FEATURE_LFENCE_RDTSC	( 3*32+18) /* "" LFENCE synchronizes RDTSC */
 #define X86_FEATURE_ACC_POWER		( 3*32+19) /* AMD Accumulated Power Mechanism */
 #define X86_FEATURE_NOPL		( 3*32+20) /* The NOPL (0F 1F) instructions */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c8819358a332..ba069ed16f94 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -175,6 +175,18 @@
 .Lskip_rsb_\@:
 .endm
 
+/*
+ * Macro to execute VERW instruction that mitigate transient data sampling
+ * attacks such as MDS. On affected systems a microcode update overloaded VERW
+ * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+ *
+ * Note: Only the memory operand variant of VERW clears the CPU buffers.
+ */
+.macro CLEAR_CPU_BUFFERS
+	ALTERNATIVE "jmp .Lskip_verw_\@", "", X86_FEATURE_CLEAR_CPU_BUF
+	verw _ASM_RIP(mds_verw_sel)
+.Lskip_verw_\@:
+.endm
 #else /* __ASSEMBLY__ */
 
 #define ANNOTATE_RETPOLINE_SAFE					\
@@ -346,6 +358,8 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 
 DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
 
+extern u16 mds_verw_sel;
+
 #include <asm/segment.h>
 
 /**
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v2 3/7] x86/entry_64: Add VERW just before userspace transition
  2024-02-26 12:22 [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Nikolay Borisov
  2024-02-26 12:22 ` [PATCH v2 1/7] x86/asm: Add _ASM_RIP() macro for x86-64 (%rip) suffix Nikolay Borisov
  2024-02-26 12:22 ` [PATCH v2 2/7] x86/bugs: Add asm helpers for executing VERW Nikolay Borisov
@ 2024-02-26 12:22 ` Nikolay Borisov
  2024-02-26 12:22 ` [PATCH v2 4/7] x86/entry_32: " Nikolay Borisov
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Nikolay Borisov @ 2024-02-26 12:22 UTC (permalink / raw)
  To: stable; +Cc: Pawan Gupta, Dave Hansen, Dave Hansen, Nikolay Borisov

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>

[ Upstream commit 3c7501722e6b31a6e56edd23cea5e77dbb9ffd1a ]

The mitigation for MDS is to use the VERW instruction to clear any secrets in
CPU buffers. Data from memory accesses done after VERW executes can still
end up in the CPU buffers. It is safer to execute VERW late in the
return-to-user path to minimize the window in which kernel data can end up
in CPU buffers. There are not many kernel secrets to be had after
SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after user register state is
restored. This helps minimize the chances of kernel data ending up in CPU
buffers after executing VERW.

Note that the mitigation at the new location is not yet enabled.

  Corner case not handled
  =======================
  Interrupts returning to the kernel don't clear CPU buffers since the
  exit-to-user path is expected to do that anyway. But there could be
  a case where an NMI is generated in the kernel after the exit-to-user path
  has cleared the buffers. This case is not handled, and an NMI returning to
  the kernel doesn't clear CPU buffers, because:

  1. It is rare to get an NMI after VERW but before returning to userspace.
  2. For an unprivileged user, there is no known way to make that NMI
     less rare or to target it.
  3. It would take a large number of these precisely-timed NMIs to mount
     an actual attack.  There's presumably not enough bandwidth.
  4. The NMI in question occurs after a VERW, i.e. when user state is
     restored and most interesting data is already scrubbed. What's left
     is only the data that the NMI touches, and that may or may not be of
     any interest.

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/all/20240213-delay-verw-v8-2-a6216d83edb7%40linux.intel.com
Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
---
 arch/x86/entry/entry_64.S        | 10 ++++++++++
 arch/x86/entry/entry_64_compat.S |  1 +
 arch/x86/include/asm/irqflags.h  |  1 +
 3 files changed, 12 insertions(+)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 640c7d36c26c..1029c6c59d31 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -663,6 +663,7 @@ GLOBAL(swapgs_restore_regs_and_return_to_usermode)
 	/* Restore RDI. */
 	popq	%rdi
 	SWAPGS
+	CLEAR_CPU_BUFFERS
 	INTERRUPT_RETURN
 
 
@@ -786,6 +787,8 @@ ENTRY(native_iret)
 	 */
 	popq	%rax				/* Restore user RAX */
 
+	CLEAR_CPU_BUFFERS
+
 	/*
 	 * RSP now points to an ordinary IRET frame, except that the page
 	 * is read-only and RSP[31:16] are preloaded with the userspace
@@ -1736,6 +1739,12 @@ ENTRY(nmi)
 	std
 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
 
+	/*
+	 * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
+	 * NMI in kernel after user state is restored. For an unprivileged user
+	 * these conditions are hard to meet.
+	 */
+
 	/*
 	 * iretq reads the "iret" frame and exits the NMI stack in a
 	 * single instruction.  We are returning to kernel mode, so this
@@ -1753,6 +1762,7 @@ END(nmi)
 ENTRY(ignore_sysret)
 	UNWIND_HINT_EMPTY
 	mov	$-ENOSYS, %eax
+	CLEAR_CPU_BUFFERS
 	sysret
 END(ignore_sysret)
 #endif
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index c3c4ea4a6711..bc37015ca1a4 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -318,6 +318,7 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
 	xorl	%r9d, %r9d
 	xorl	%r10d, %r10d
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretl
 END(entry_SYSCALL_compat)
 
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 8a0e56e1dcc9..5ea4d34f6591 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -146,6 +146,7 @@ static inline notrace unsigned long arch_local_irq_save(void)
 #define INTERRUPT_RETURN	jmp native_iret
 #define USERGS_SYSRET64				\
 	swapgs;					\
+	CLEAR_CPU_BUFFERS;			\
 	sysretq;
 #define USERGS_SYSRET32				\
 	swapgs;					\
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v2 4/7] x86/entry_32: Add VERW just before userspace transition
  2024-02-26 12:22 [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Nikolay Borisov
                   ` (2 preceding siblings ...)
  2024-02-26 12:22 ` [PATCH v2 3/7] x86/entry_64: Add VERW just before userspace transition Nikolay Borisov
@ 2024-02-26 12:22 ` Nikolay Borisov
  2024-02-26 12:22 ` [PATCH v2 5/7] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key Nikolay Borisov
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Nikolay Borisov @ 2024-02-26 12:22 UTC (permalink / raw)
  To: stable; +Cc: Pawan Gupta, Dave Hansen, Nikolay Borisov

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>

[ Upstream commit a0e2dab44d22b913b4c228c8b52b2a104434b0b3 ]

As done for entry_64, add support for executing VERW late in the exit-to-user
path for 32-bit mode.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/all/20240213-delay-verw-v8-3-a6216d83edb7%40linux.intel.com
Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
---
 arch/x86/entry/entry_32.S | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 740df9cc2196..45419307fa1a 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1013,6 +1013,7 @@ ENTRY(entry_SYSENTER_32)
 	BUG_IF_WRONG_CR3 no_user_check=1
 	popfl
 	popl	%eax
+	CLEAR_CPU_BUFFERS
 
 	/*
 	 * Return back to the vDSO, which will pop ecx and edx.
@@ -1094,6 +1095,7 @@ ENTRY(entry_INT80_32)
 
 	/* Restore user state */
 	RESTORE_REGS pop=4			# skip orig_eax/error_code
+	CLEAR_CPU_BUFFERS
 .Lirq_return:
 	/*
 	 * ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
@@ -1567,6 +1569,7 @@ ENTRY(nmi)
 
 	/* Not on SYSENTER stack. */
 	call	do_nmi
+	CLEAR_CPU_BUFFERS
 	jmp	.Lnmi_return
 
 .Lnmi_from_sysenter_stack:
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v2 5/7] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key
  2024-02-26 12:22 [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Nikolay Borisov
                   ` (3 preceding siblings ...)
  2024-02-26 12:22 ` [PATCH v2 4/7] x86/entry_32: " Nikolay Borisov
@ 2024-02-26 12:22 ` Nikolay Borisov
  2024-02-26 12:22 ` [PATCH v2 6/7] KVM/VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH Nikolay Borisov
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Nikolay Borisov @ 2024-02-26 12:22 UTC (permalink / raw)
  To: stable; +Cc: Pawan Gupta, Dave Hansen, Nikolay Borisov

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>

[ Upstream commit 6613d82e617dd7eb8b0c40b2fe3acea655b1d611 ]

The VERW mitigation at exit-to-user is enabled via a static branch
mds_user_clear. This static branch is never toggled after boot, and can
be safely replaced with an ALTERNATIVE() which is convenient to use in
asm.

Switch to ALTERNATIVE() to use the VERW mitigation late in the exit-to-user
path. Also remove the now redundant VERW in do_nmi() and
prepare_exit_to_usermode().

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/all/20240213-delay-verw-v8-4-a6216d83edb7%40linux.intel.com
Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
---
 Documentation/x86/mds.rst            | 38 ++++++++++++++++++++--------
 arch/x86/entry/common.c              |  2 --
 arch/x86/include/asm/nospec-branch.h | 12 ---------
 arch/x86/kernel/cpu/bugs.c           | 15 +++++------
 arch/x86/kernel/nmi.c                |  3 ---
 arch/x86/kvm/vmx/vmx.c               |  2 +-
 6 files changed, 34 insertions(+), 38 deletions(-)

diff --git a/Documentation/x86/mds.rst b/Documentation/x86/mds.rst
index 5d4330be200f..e801df0bb3a8 100644
--- a/Documentation/x86/mds.rst
+++ b/Documentation/x86/mds.rst
@@ -95,6 +95,9 @@ enters a C-state.
 
     mds_clear_cpu_buffers()
 
+Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path.
+Other than CFLAGS.ZF, this macro doesn't clobber any registers.
+
 The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
 (idle) transitions.
 
@@ -138,17 +141,30 @@ Mitigation points
 
    When transitioning from kernel to user space the CPU buffers are flushed
    on affected CPUs when the mitigation is not disabled on the kernel
-   command line. The migitation is enabled through the static key
-   mds_user_clear.
-
-   The mitigation is invoked in prepare_exit_to_usermode() which covers
-   all but one of the kernel to user space transitions.  The exception
-   is when we return from a Non Maskable Interrupt (NMI), which is
-   handled directly in do_nmi().
-
-   (The reason that NMI is special is that prepare_exit_to_usermode() can
-    enable IRQs.  In NMI context, NMIs are blocked, and we don't want to
-    enable IRQs with NMIs blocked.)
+   command line. The mitigation is enabled through the feature flag
+   X86_FEATURE_CLEAR_CPU_BUF.
+
+   The mitigation is invoked just before transitioning to userspace after
+   user registers are restored. This is done to minimize the window in
+   which kernel data could be accessed after VERW e.g. via an NMI after
+   VERW.
+
+   **Corner case not handled**
+   Interrupts returning to kernel don't clear CPUs buffers since the
+   exit-to-user path is expected to do that anyways. But, there could be
+   a case when an NMI is generated in kernel after the exit-to-user path
+   has cleared the buffers. This case is not handled and NMI returning to
+   kernel don't clear CPU buffers because:
+
+   1. It is rare to get an NMI after VERW, but before returning to userspace.
+   2. For an unprivileged user, there is no known way to make that NMI
+      less rare or target it.
+   3. It would take a large number of these precisely-timed NMIs to mount
+      an actual attack.  There's presumably not enough bandwidth.
+   4. The NMI in question occurs after a VERW, i.e. when user state is
+      restored and most interesting data is already scrubbed. Whats left
+      is only the data that NMI touches, and that may or may not be of
+      any interest.
 
 
 2. C-State transition
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 3f8e22615812..c93c9f3a6a25 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -216,8 +216,6 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 #endif
 
 	user_enter_irqoff();
-
-	mds_user_clear_cpu_buffers();
 }
 
 #define SYSCALL_EXIT_WORK_FLAGS				\
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index ba069ed16f94..649d734d90bd 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -353,7 +353,6 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
-DECLARE_STATIC_KEY_FALSE(mds_user_clear);
 DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
 
 DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
@@ -385,17 +384,6 @@ static __always_inline void mds_clear_cpu_buffers(void)
 	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
 }
 
-/**
- * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
- *
- * Clear CPU buffers if the corresponding static key is enabled
- */
-static __always_inline void mds_user_clear_cpu_buffers(void)
-{
-	if (static_branch_likely(&mds_user_clear))
-		mds_clear_cpu_buffers();
-}
-
 /**
  * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
  *
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 48ae44cf7795..dfac90db22a0 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -102,9 +102,6 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 /* Control unconditional IBPB in switch_mm() */
 DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
-/* Control MDS CPU buffer clear before returning to user space */
-DEFINE_STATIC_KEY_FALSE(mds_user_clear);
-EXPORT_SYMBOL_GPL(mds_user_clear);
 /* Control MDS CPU buffer clear before idling (halt, mwait) */
 DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
 EXPORT_SYMBOL_GPL(mds_idle_clear);
@@ -236,7 +233,7 @@ static void __init mds_select_mitigation(void)
 		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
 			mds_mitigation = MDS_MITIGATION_VMWERV;
 
-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
@@ -333,7 +330,7 @@ static void __init taa_select_mitigation(void)
 	 * For guests that can't determine whether the correct microcode is
 	 * present on host, enable the mitigation for UCODE_NEEDED as well.
 	 */
-	static_branch_enable(&mds_user_clear);
+	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 
 	if (taa_nosmt || cpu_mitigations_auto_nosmt())
 		cpu_smt_disable(false);
@@ -401,7 +398,7 @@ static void __init mmio_select_mitigation(void)
 	 */
 	if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
 					      boot_cpu_has(X86_FEATURE_RTM)))
-		static_branch_enable(&mds_user_clear);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
 	else
 		static_branch_enable(&mmio_stale_data_clear);
 
@@ -461,12 +458,12 @@ static void __init md_clear_update_mitigation(void)
 	if (cpu_mitigations_off())
 		return;
 
-	if (!static_key_enabled(&mds_user_clear))
+	if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
 		goto out;
 
 	/*
-	 * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
-	 * mitigation, if necessary.
+	 * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
+	 * Stale Data mitigation, if necessary.
 	 */
 	if (mds_mitigation == MDS_MITIGATION_OFF &&
 	    boot_cpu_has_bug(X86_BUG_MDS)) {
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 5bb001c0c771..b0caa7185922 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -553,9 +553,6 @@ do_nmi(struct pt_regs *regs, long error_code)
 		write_cr2(this_cpu_read(nmi_cr2));
 	if (this_cpu_dec_return(nmi_state))
 		goto nmi_restart;
-
-	if (user_mode(regs))
-		mds_user_clear_cpu_buffers();
 }
 NOKPROBE_SYMBOL(do_nmi);
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c93070829790..4bf0c6221ec8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6662,7 +6662,7 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	/* L1D Flush includes CPU buffer clear to mitigate MDS */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (static_branch_unlikely(&mds_user_clear))
+	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
 		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v2 6/7] KVM/VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH
  2024-02-26 12:22 [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Nikolay Borisov
                   ` (4 preceding siblings ...)
  2024-02-26 12:22 ` [PATCH v2 5/7] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key Nikolay Borisov
@ 2024-02-26 12:22 ` Nikolay Borisov
  2024-02-26 12:22 ` [PATCH v2 7/7] KVM/VMX: Move VERW closer to VMentry for MDS mitigation Nikolay Borisov
  2024-02-26 13:30 ` [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Greg KH
  7 siblings, 0 replies; 12+ messages in thread
From: Nikolay Borisov @ 2024-02-26 12:22 UTC (permalink / raw)
  To: stable; +Cc: Sean Christopherson, Pawan Gupta, Dave Hansen, Nikolay Borisov

From: Sean Christopherson <seanjc@google.com>

[ Upstream commit 706a189dcf74d3b3f955e9384785e726ed6c7c80 ]

Use EFLAGS.CF instead of EFLAGS.ZF to track whether to use VMRESUME versus
VMLAUNCH.  Freeing up EFLAGS.ZF will allow doing VERW, which clobbers ZF,
for MDS mitigations as late as possible without needing to duplicate VERW
for both paths.
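
The resulting flow in __vmx_vcpu_run() is then roughly (simplified sketch;
see the diff below for the real code):

	bt	$VMX_RUN_VMRESUME_SHIFT, %ebx	/* CF = the VMX_RUN_VMRESUME bit */
	/* ... load guest registers; a later VERW may clobber ZF ... */
	jnc	.Lvmlaunch			/* CF is still valid here */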

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Link: https://lore.kernel.org/all/20240213-delay-verw-v8-5-a6216d83edb7%40linux.intel.com
Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
---
 arch/x86/kvm/vmx/run_flags.h | 7 +++++--
 arch/x86/kvm/vmx/vmenter.S   | 6 +++---
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
index edc3f16cc189..6a9bfdfbb6e5 100644
--- a/arch/x86/kvm/vmx/run_flags.h
+++ b/arch/x86/kvm/vmx/run_flags.h
@@ -2,7 +2,10 @@
 #ifndef __KVM_X86_VMX_RUN_FLAGS_H
 #define __KVM_X86_VMX_RUN_FLAGS_H
 
-#define VMX_RUN_VMRESUME	(1 << 0)
-#define VMX_RUN_SAVE_SPEC_CTRL	(1 << 1)
+#define VMX_RUN_VMRESUME_SHIFT		0
+#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT	1
+
+#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
+#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
 
 #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 2850670c38bb..04517546e3dc 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -76,7 +76,7 @@ ENTRY(__vmx_vcpu_run)
 	mov (%_ASM_SP), %_ASM_AX
 
 	/* Check if vmlaunch or vmresume is needed */
-	testb $VMX_RUN_VMRESUME, %bl
+	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
 
 	/* Load guest registers.  Don't clobber flags. */
 	mov VCPU_RBX(%_ASM_AX), %_ASM_BX
@@ -98,8 +98,8 @@ ENTRY(__vmx_vcpu_run)
 	/* Load guest RAX.  This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
-	/* Check EFLAGS.ZF from 'testb' above */
-	jz .Lvmlaunch
+	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
+	jnc .Lvmlaunch
 
 /*
  * If VMRESUME/VMLAUNCH and corresponding vmexit succeed, execution resumes at
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v2 7/7] KVM/VMX: Move VERW closer to VMentry for MDS mitigation
  2024-02-26 12:22 [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Nikolay Borisov
                   ` (5 preceding siblings ...)
  2024-02-26 12:22 ` [PATCH v2 6/7] KVM/VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH Nikolay Borisov
@ 2024-02-26 12:22 ` Nikolay Borisov
  2024-02-26 13:30 ` [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Greg KH
  7 siblings, 0 replies; 12+ messages in thread
From: Nikolay Borisov @ 2024-02-26 12:22 UTC (permalink / raw)
  To: stable; +Cc: Pawan Gupta, Dave Hansen, Sean Christopherson, Nikolay Borisov

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>

[ Upstream commit 43fb862de8f628c5db5e96831c915b9aebf62d33 ]

During VMentry VERW is executed to mitigate MDS. After VERW, any memory
access like register push onto stack may put host data in MDS affected
CPU buffers. A guest can then use MDS to sample host data.

Although the likelihood of secrets surviving in registers at the current VERW
call site is low, it can't be ruled out. Harden the MDS mitigation
by moving VERW late in the VMentry path.

Note that VERW for MMIO Stale Data mitigation is unchanged because of
the complexity of per-guest conditional VERW which is not easy to handle
that late in asm with no GPRs available. If the CPU is also affected by
MDS, VERW is unconditionally executed late in asm regardless of guest
having MMIO access.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/all/20240213-delay-verw-v8-6-a6216d83edb7%40linux.intel.com
Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>
---
 arch/x86/kvm/vmx/vmenter.S |  3 +++
 arch/x86/kvm/vmx/vmx.c     | 12 ++++++++----
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 04517546e3dc..1ca759f74bb5 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -98,6 +98,9 @@ ENTRY(__vmx_vcpu_run)
 	/* Load guest RAX.  This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
+	/* Clobbers EFLAGS.ZF */
+	CLEAR_CPU_BUFFERS
+
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
 	jnc .Lvmlaunch
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4bf0c6221ec8..56f044854c29 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -377,7 +377,8 @@ static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx)
 
 static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 {
-	vmx->disable_fb_clear = vmx_fb_clear_ctrl_available;
+	vmx->disable_fb_clear = !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) &&
+		vmx_fb_clear_ctrl_available;
 
 	/*
 	 * If guest will not execute VERW, there is no need to set FB_CLEAR_DIS
@@ -6659,11 +6660,14 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	 */
 	x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0);
 
-	/* L1D Flush includes CPU buffer clear to mitigate MDS */
+       /*
+        * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
+        * mitigation for MDS is done late in VMentry and is still
+        * executed in spite of L1D Flush. This is because an extra VERW
+        * should not matter much after the big hammer L1D Flush.
+        */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
-		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
 		mds_clear_cpu_buffers();
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 0/7] 5.4 backport of recent mds improvement patches
  2024-02-26 12:22 [PATCH v2 0/7] 5.4 backport of recent mds improvement patches Nikolay Borisov
                   ` (6 preceding siblings ...)
  2024-02-26 12:22 ` [PATCH v2 7/7] KVM/VMX: Move VERW closer to VMentry for MDS mitigation Nikolay Borisov
@ 2024-02-26 13:30 ` Greg KH
  7 siblings, 0 replies; 12+ messages in thread
From: Greg KH @ 2024-02-26 13:30 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: stable

On Mon, Feb 26, 2024 at 02:22:30PM +0200, Nikolay Borisov wrote:
> Here are the recently merged MDS improvement patches adapted to the latest stable tree.
> I've only compile-tested them, but since I have also done similar backports for
> older kernels I'm confident they work.
> The main difference is in the definition of the CLEAR_CPU_BUFFERS macro: 5.4
> doesn't contain the alternative relocation handling logic, so the verw
> instruction is moved out of the alternative definition and the alternative
> instead emits a jump that skips the verw instruction when the feature is not
> set. That way the relocation is handled by the toolchain rather than the kernel.
> 
> Since I don't know if I will have time to work on the other branches, this patchset
> can be used as a basis for the rest of the stable kernels. The main difference would be
> which bit is used for CLEAR_CPU_BUFFERS. For kernel 6.6 the 2nd patch can be used verbatim
> from upstream (unlike this modified version) since alternative relocation handling
> landed in v6.5. However, even if used as-is from this patchset it's not a problem.

As mentioned on IRC, I can't take these now without the newer branches
being fixed first, otherwise someone could upgrade and hit a regression.

So I'll hold off on these until we have backports for all of the other stable
trees as well.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 1/7] x86/asm: Add _ASM_RIP() macro for x86-64 (%rip) suffix
  2024-02-26 12:22 ` [PATCH v2 1/7] x86/asm: Add _ASM_RIP() macro for x86-64 (%rip) suffix Nikolay Borisov
@ 2024-03-12  1:33   ` Pawan Gupta
  2024-03-12  5:57     ` Nikolay Borisov
  0 siblings, 1 reply; 12+ messages in thread
From: Pawan Gupta @ 2024-03-12  1:33 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: stable, H. Peter Anvin (Intel)

On Mon, Feb 26, 2024 at 02:22:31PM +0200, Nikolay Borisov wrote:
> From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
> 
> [ Upstream commit 0576d1ed1e153bf34b54097e0561ede382ba88b0 ]

Looks like the correct sha is f87bc8dc7a7c438c70f97b4e51c76a183313272e

> Add a macro _ASM_RIP() to add a (%rip) suffix on 64 bits only. This is
> useful for immediate memory references where one doesn't want gcc
> to possibly use a register indirection as it may in the case of an "m"
> constraint.
> 
> Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Link: https://lkml.kernel.org/r/20210910195910.2542662-3-hpa@zytor.com
> Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 1/7] x86/asm: Add _ASM_RIP() macro for x86-64 (%rip) suffix
  2024-03-12  1:33   ` Pawan Gupta
@ 2024-03-12  5:57     ` Nikolay Borisov
  2024-03-29 12:46       ` Greg KH
  0 siblings, 1 reply; 12+ messages in thread
From: Nikolay Borisov @ 2024-03-12  5:57 UTC (permalink / raw)
  To: Pawan Gupta; +Cc: stable, H. Peter Anvin (Intel)



On 12.03.24 г. 3:33 ч., Pawan Gupta wrote:
> On Mon, Feb 26, 2024 at 02:22:31PM +0200, Nikolay Borisov wrote:
>> From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
>>
>> [ Upstream commit 0576d1ed1e153bf34b54097e0561ede382ba88b0 ]
> 
> Looks like the correct sha is f87bc8dc7a7c438c70f97b4e51c76a183313272e

Indeed, 0576d1ed1e153bf34b54097e0561ede382ba88b0 is the local sha id of my
backported commit. Thanks for catching it!

> 
>> Add a macro _ASM_RIP() to add a (%rip) suffix on 64 bits only. This is
>> useful for immediate memory references where one doesn't want gcc
>> to possibly use a register indirection as it may in the case of an "m"
>> constraint.
>>
>> Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
>> Signed-off-by: Borislav Petkov <bp@suse.de>
>> Link: https://lkml.kernel.org/r/20210910195910.2542662-3-hpa@zytor.com
>> Signed-off-by: Nikolay Borisov <nik.borisov@suse.com>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 1/7] x86/asm: Add _ASM_RIP() macro for x86-64 (%rip) suffix
  2024-03-12  5:57     ` Nikolay Borisov
@ 2024-03-29 12:46       ` Greg KH
  0 siblings, 0 replies; 12+ messages in thread
From: Greg KH @ 2024-03-29 12:46 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Pawan Gupta, stable, H. Peter Anvin (Intel)

On Tue, Mar 12, 2024 at 07:57:19AM +0200, Nikolay Borisov wrote:
> 
> 
> On 12.03.24 г. 3:33 ч., Pawan Gupta wrote:
> > On Mon, Feb 26, 2024 at 02:22:31PM +0200, Nikolay Borisov wrote:
> > > From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
> > > 
> > > [ Upstream commit 0576d1ed1e153bf34b54097e0561ede382ba88b0 ]
> > 
> > Looks like the correct sha is f87bc8dc7a7c438c70f97b4e51c76a183313272e
> 
> Indeed, 0576d1ed1e153bf34b54097e0561ede382ba88b0 is my local shaid of the
> backported commit. Thanks for catching it!

Can you fix this up, verify the other commit ids, and resend so I don't have
to change them by hand?

Also, why is this series so much smaller than the 5.10 and 5.15
backports?  What is missing here that is in the 5.10 and newer kernels?
KVM stuff?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 12+ messages in thread
