* [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW
2023-10-27 14:38 [PATCH v4 0/6] Delay VERW Pawan Gupta
@ 2023-10-27 14:38 ` Pawan Gupta
2023-10-27 15:32 ` Borislav Petkov
2023-12-01 19:36 ` Josh Poimboeuf
2023-10-27 14:38 ` [PATCH v4 2/6] x86/entry_64: Add VERW just before userspace transition Pawan Gupta
` (5 subsequent siblings)
6 siblings, 2 replies; 23+ messages in thread
From: Pawan Gupta @ 2023-10-27 14:38 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck,
ak, tim.c.chen, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel, linux-doc, kvm, Alyssa Milburn, Daniel Sneddon,
antonio.gomez.iglesias, Greg Kroah-Hartman, Pawan Gupta,
Alyssa Milburn
MDS mitigation requires clearing the CPU buffers before returning to
user. This needs to be done late in the exit-to-user path. Current
location of VERW leaves a possibility of kernel data ending up in CPU
buffers for memory accesses done after VERW such as:
1. Kernel data accessed by an NMI between VERW and return-to-user can
remain in CPU buffers ( since NMI returning to kernel does not
execute VERW to clear CPU buffers.
2. Alyssa reported that after VERW is executed,
CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
call. Memory accesses during stack scrubbing can move kernel stack
contents into CPU buffers.
3. When caller saved registers are restored after a return from
function executing VERW, the kernel stack accesses can remain in
CPU buffers(since they occur after VERW).
To fix this VERW needs to be moved very late in exit-to-user path.
In preparation for moving VERW to entry/exit asm code, create macros
that can be used in asm. Also make them depend on a new feature flag
X86_FEATURE_CLEAR_CPU_BUF.
Reported-by: Alyssa Milburn <alyssa.milburn@intel.com>
Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/entry/entry.S | 17 +++++++++++++++++
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/nospec-branch.h | 15 +++++++++++++++
3 files changed, 33 insertions(+), 1 deletion(-)
diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
index bfb7bcb362bc..8dc84bb9dc0b 100644
--- a/arch/x86/entry/entry.S
+++ b/arch/x86/entry/entry.S
@@ -6,6 +6,9 @@
#include <linux/linkage.h>
#include <asm/export.h>
#include <asm/msr-index.h>
+#include <asm/unwind_hints.h>
+#include <asm/segment.h>
+#include <asm/cache.h>
.pushsection .noinstr.text, "ax"
@@ -20,3 +23,17 @@ SYM_FUNC_END(entry_ibpb)
EXPORT_SYMBOL_GPL(entry_ibpb);
.popsection
+
+.pushsection .entry.text, "ax"
+
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_START_NOALIGN(mds_verw_sel)
+ UNWIND_HINT_UNDEFINED
+ ANNOTATE_NOENDBR
+ .word __KERNEL_DS
+.align L1_CACHE_BYTES, 0xcc
+SYM_CODE_END(mds_verw_sel);
+/* For KVM */
+EXPORT_SYMBOL_GPL(mds_verw_sel);
+
+.popsection
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 58cb9495e40f..f21fc0f12737 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -308,10 +308,10 @@
#define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */
#define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */
#define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */
-
#define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */
#define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */
#define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */
+#define X86_FEATURE_CLEAR_CPU_BUF (11*32+27) /* "" Clear CPU buffers */
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c55cc243592e..005e69f93115 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -329,6 +329,21 @@
#endif
.endm
+/*
+ * Macros to execute VERW instruction that mitigate transient data sampling
+ * attacks such as MDS. On affected systems a microcode update overloaded VERW
+ * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+ *
+ * Note: Only the memory operand variant of VERW clears the CPU buffers.
+ */
+.macro EXEC_VERW
+ verw _ASM_RIP(mds_verw_sel)
+.endm
+
+.macro CLEAR_CPU_BUFFERS
+ ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
+.endm
+
#else /* __ASSEMBLY__ */
#define ANNOTATE_RETPOLINE_SAFE \
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW
2023-10-27 14:38 ` [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW Pawan Gupta
@ 2023-10-27 15:32 ` Borislav Petkov
2023-11-02 0:01 ` Pawan Gupta
2023-12-01 19:36 ` Josh Poimboeuf
1 sibling, 1 reply; 23+ messages in thread
From: Borislav Petkov @ 2023-10-27 15:32 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman, Alyssa Milburn
On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> MDS mitigation requires clearing the CPU buffers before returning to
> user. This needs to be done late in the exit-to-user path. Current
> location of VERW leaves a possibility of kernel data ending up in CPU
> buffers for memory accesses done after VERW such as:
>
> 1. Kernel data accessed by an NMI between VERW and return-to-user can
> remain in CPU buffers ( since NMI returning to kernel does not
Some leftover '('
> execute VERW to clear CPU buffers.
> 2. Alyssa reported that after VERW is executed,
> CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
> call. Memory accesses during stack scrubbing can move kernel stack
> contents into CPU buffers.
> 3. When caller saved registers are restored after a return from
> function executing VERW, the kernel stack accesses can remain in
> CPU buffers(since they occur after VERW).
>
> To fix this VERW needs to be moved very late in exit-to-user path.
>
> In preparation for moving VERW to entry/exit asm code, create macros
> that can be used in asm. Also make them depend on a new feature flag
> X86_FEATURE_CLEAR_CPU_BUF.
The macros don't depend on the feature flag - VERW patching is done
based on it.
> @@ -20,3 +23,17 @@ SYM_FUNC_END(entry_ibpb)
> EXPORT_SYMBOL_GPL(entry_ibpb);
>
> .popsection
> +
> +.pushsection .entry.text, "ax"
> +
> +.align L1_CACHE_BYTES, 0xcc
> +SYM_CODE_START_NOALIGN(mds_verw_sel)
That weird thing needs a comment explaining what it is for.
> + UNWIND_HINT_UNDEFINED
> + ANNOTATE_NOENDBR
> + .word __KERNEL_DS
> +.align L1_CACHE_BYTES, 0xcc
> +SYM_CODE_END(mds_verw_sel);
> +/* For KVM */
> +EXPORT_SYMBOL_GPL(mds_verw_sel);
> +
> +.popsection
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 58cb9495e40f..f21fc0f12737 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -308,10 +308,10 @@
> #define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */
> #define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */
> #define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */
> -
> #define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */
> #define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */
> #define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */
> +#define X86_FEATURE_CLEAR_CPU_BUF (11*32+27) /* "" Clear CPU buffers */
... using VERW
>
> /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
> #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index c55cc243592e..005e69f93115 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -329,6 +329,21 @@
> #endif
> .endm
>
> +/*
> + * Macros to execute VERW instruction that mitigate transient data sampling
> + * attacks such as MDS. On affected systems a microcode update overloaded VERW
> + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> + *
> + * Note: Only the memory operand variant of VERW clears the CPU buffers.
> + */
> +.macro EXEC_VERW
> + verw _ASM_RIP(mds_verw_sel)
> +.endm
> +
> +.macro CLEAR_CPU_BUFFERS
> + ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
> +.endm
Why can't this simply be:
.macro CLEAR_CPU_BUFFERS
ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
.endm
without that silly EXEC_VERW macro?
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW
2023-10-27 15:32 ` Borislav Petkov
@ 2023-11-02 0:01 ` Pawan Gupta
0 siblings, 0 replies; 23+ messages in thread
From: Pawan Gupta @ 2023-11-02 0:01 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman, Alyssa Milburn
On Fri, Oct 27, 2023 at 05:32:03PM +0200, Borislav Petkov wrote:
> On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> > 1. Kernel data accessed by an NMI between VERW and return-to-user can
> > remain in CPU buffers ( since NMI returning to kernel does not
>
> Some leftover '('
Ok.
> > In preparation for moving VERW to entry/exit asm code, create macros
> > that can be used in asm. Also make them depend on a new feature flag
> > X86_FEATURE_CLEAR_CPU_BUF.
>
> The macros don't depend on the feature flag - VERW patching is done
> based on it.
Will fix.
> > @@ -20,3 +23,17 @@ SYM_FUNC_END(entry_ibpb)
> > EXPORT_SYMBOL_GPL(entry_ibpb);
> >
> > .popsection
> > +
> > +.pushsection .entry.text, "ax"
> > +
> > +.align L1_CACHE_BYTES, 0xcc
> > +SYM_CODE_START_NOALIGN(mds_verw_sel)
>
> That weird thing needs a comment explaining what it is for.
Right.
> > +#define X86_FEATURE_CLEAR_CPU_BUF (11*32+27) /* "" Clear CPU buffers */
>
> ... using VERW
Ok.
> > +/*
> > + * Macros to execute VERW instruction that mitigate transient data sampling
> > + * attacks such as MDS. On affected systems a microcode update overloaded VERW
> > + * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
> > + *
> > + * Note: Only the memory operand variant of VERW clears the CPU buffers.
> > + */
> > +.macro EXEC_VERW
> > + verw _ASM_RIP(mds_verw_sel)
> > +.endm
> > +
> > +.macro CLEAR_CPU_BUFFERS
> > + ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
> > +.endm
>
> Why can't this simply be:
>
> .macro CLEAR_CPU_BUFFERS
> ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
This will not work in 32-bit mode that uses the same macro.
Thanks for the review.
* Re: [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW
2023-10-27 14:38 ` [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW Pawan Gupta
2023-10-27 15:32 ` Borislav Petkov
@ 2023-12-01 19:36 ` Josh Poimboeuf
2023-12-01 19:39 ` Andrew Cooper
1 sibling, 1 reply; 23+ messages in thread
From: Josh Poimboeuf @ 2023-12-01 19:36 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman, Alyssa Milburn
On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> +.pushsection .entry.text, "ax"
> +
> +.align L1_CACHE_BYTES, 0xcc
> +SYM_CODE_START_NOALIGN(mds_verw_sel)
> + UNWIND_HINT_UNDEFINED
> + ANNOTATE_NOENDBR
> + .word __KERNEL_DS
> +.align L1_CACHE_BYTES, 0xcc
> +SYM_CODE_END(mds_verw_sel);
> +/* For KVM */
> +EXPORT_SYMBOL_GPL(mds_verw_sel);
> +
> +.popsection
This is data, so why is it "CODE" in .entry.text?
--
Josh
* Re: [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW
2023-12-01 19:36 ` Josh Poimboeuf
@ 2023-12-01 19:39 ` Andrew Cooper
2023-12-01 20:04 ` Josh Poimboeuf
0 siblings, 1 reply; 23+ messages in thread
From: Andrew Cooper @ 2023-12-01 19:39 UTC (permalink / raw)
To: Josh Poimboeuf, Pawan Gupta
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Nikolay Borisov, linux-kernel, linux-doc, kvm, Alyssa Milburn,
Daniel Sneddon, antonio.gomez.iglesias, Greg Kroah-Hartman,
Alyssa Milburn
On 01/12/2023 7:36 pm, Josh Poimboeuf wrote:
> On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
>> +.pushsection .entry.text, "ax"
>> +
>> +.align L1_CACHE_BYTES, 0xcc
>> +SYM_CODE_START_NOALIGN(mds_verw_sel)
>> + UNWIND_HINT_UNDEFINED
>> + ANNOTATE_NOENDBR
>> + .word __KERNEL_DS
>> +.align L1_CACHE_BYTES, 0xcc
>> +SYM_CODE_END(mds_verw_sel);
>> +/* For KVM */
>> +EXPORT_SYMBOL_GPL(mds_verw_sel);
>> +
>> +.popsection
> This is data, so why is it "CODE" in .entry.text?
Because KPTI.
~Andrew
* Re: [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW
2023-12-01 19:39 ` Andrew Cooper
@ 2023-12-01 20:04 ` Josh Poimboeuf
2023-12-20 1:15 ` Pawan Gupta
0 siblings, 1 reply; 23+ messages in thread
From: Josh Poimboeuf @ 2023-12-01 20:04 UTC (permalink / raw)
To: Andrew Cooper
Cc: Pawan Gupta, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Andy Lutomirski, Jonathan Corbet, Sean Christopherson,
Paolo Bonzini, tony.luck, ak, tim.c.chen, Nikolay Borisov,
linux-kernel, linux-doc, kvm, Alyssa Milburn, Daniel Sneddon,
antonio.gomez.iglesias, Greg Kroah-Hartman, Alyssa Milburn
On Fri, Dec 01, 2023 at 07:39:05PM +0000, Andrew Cooper wrote:
> On 01/12/2023 7:36 pm, Josh Poimboeuf wrote:
> > On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> >> +.pushsection .entry.text, "ax"
> >> +
> >> +.align L1_CACHE_BYTES, 0xcc
> >> +SYM_CODE_START_NOALIGN(mds_verw_sel)
> >> + UNWIND_HINT_UNDEFINED
> >> + ANNOTATE_NOENDBR
> >> + .word __KERNEL_DS
> >> +.align L1_CACHE_BYTES, 0xcc
> >> +SYM_CODE_END(mds_verw_sel);
> >> +/* For KVM */
> >> +EXPORT_SYMBOL_GPL(mds_verw_sel);
> >> +
> >> +.popsection
> > This is data, so why is it "CODE" in .entry.text?
>
> Because KPTI.
Urgh... Pawan please add a comment.
--
Josh
* Re: [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW
2023-12-01 20:04 ` Josh Poimboeuf
@ 2023-12-20 1:15 ` Pawan Gupta
0 siblings, 0 replies; 23+ messages in thread
From: Pawan Gupta @ 2023-12-20 1:15 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Andrew Cooper, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Peter Zijlstra,
Andy Lutomirski, Jonathan Corbet, Sean Christopherson,
Paolo Bonzini, tony.luck, ak, tim.c.chen, Nikolay Borisov,
linux-kernel, linux-doc, kvm, Alyssa Milburn, Daniel Sneddon,
antonio.gomez.iglesias, Greg Kroah-Hartman, Alyssa Milburn
On Fri, Dec 01, 2023 at 12:04:42PM -0800, Josh Poimboeuf wrote:
> On Fri, Dec 01, 2023 at 07:39:05PM +0000, Andrew Cooper wrote:
> > On 01/12/2023 7:36 pm, Josh Poimboeuf wrote:
> > > On Fri, Oct 27, 2023 at 07:38:40AM -0700, Pawan Gupta wrote:
> > >> +.pushsection .entry.text, "ax"
> > >> +
> > >> +.align L1_CACHE_BYTES, 0xcc
> > >> +SYM_CODE_START_NOALIGN(mds_verw_sel)
> > >> + UNWIND_HINT_UNDEFINED
> > >> + ANNOTATE_NOENDBR
> > >> + .word __KERNEL_DS
> > >> +.align L1_CACHE_BYTES, 0xcc
> > >> +SYM_CODE_END(mds_verw_sel);
> > >> +/* For KVM */
> > >> +EXPORT_SYMBOL_GPL(mds_verw_sel);
> > >> +
> > >> +.popsection
> > > This is data, so why is it "CODE" in .entry.text?
> >
> > Because KPTI.
>
> Urgh... Pawan please add a comment.
Yes, this place needs a comment, will add.
* [PATCH v4 2/6] x86/entry_64: Add VERW just before userspace transition
2023-10-27 14:38 [PATCH v4 0/6] Delay VERW Pawan Gupta
2023-10-27 14:38 ` [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW Pawan Gupta
@ 2023-10-27 14:38 ` Pawan Gupta
2023-10-27 14:38 ` [PATCH v4 3/6] x86/entry_32: " Pawan Gupta
` (4 subsequent siblings)
6 siblings, 0 replies; 23+ messages in thread
From: Pawan Gupta @ 2023-10-27 14:38 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck,
ak, tim.c.chen, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel, linux-doc, kvm, Alyssa Milburn, Daniel Sneddon,
antonio.gomez.iglesias, Greg Kroah-Hartman, Pawan Gupta,
Dave Hansen
The mitigation for MDS is to use the VERW instruction to clear any
secrets in CPU buffers. Data from memory accesses done after VERW
executes can still remain in CPU buffers. It is safer to execute VERW
late in the return-to-user path to minimize the window in which kernel
data can end up in CPU buffers. There are not many kernel secrets to be
had after SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after user register state
is restored. This helps minimize the chances of kernel data ending up
in CPU buffers after executing VERW.
Note that the mitigation at the new location is not yet enabled.
Corner case not handled
=======================
Interrupts returning to the kernel don't clear CPU buffers, since the
exit-to-user path is expected to do that anyway. But there could be
a case where an NMI is generated in the kernel after the exit-to-user
path has cleared the buffers. This case is not handled, and an NMI
returning to the kernel doesn't clear CPU buffers, because:
1. It is rare to get an NMI after VERW, but before returning to userspace.
2. For an unprivileged user, there is no known way to make such an NMI
less rare or to target it.
3. It would take a large number of these precisely-timed NMIs to mount
an actual attack. There's presumably not enough bandwidth.
4. The NMI in question occurs after VERW, i.e. when user state is
restored and most interesting data is already scrubbed. What's left
is only the data that the NMI touches, and that may or may not be of
any interest.
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/entry/entry_64.S | 11 +++++++++++
arch/x86/entry/entry_64_compat.S | 1 +
2 files changed, 12 insertions(+)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 43606de22511..9f97a8bd11e8 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -223,6 +223,7 @@ syscall_return_via_sysret:
SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
swapgs
+ CLEAR_CPU_BUFFERS
sysretq
SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
@@ -663,6 +664,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
/* Restore RDI. */
popq %rdi
swapgs
+ CLEAR_CPU_BUFFERS
jmp .Lnative_iret
@@ -774,6 +776,8 @@ native_irq_return_ldt:
*/
popq %rax /* Restore user RAX */
+ CLEAR_CPU_BUFFERS
+
/*
* RSP now points to an ordinary IRET frame, except that the page
* is read-only and RSP[31:16] are preloaded with the userspace
@@ -1502,6 +1506,12 @@ nmi_restore:
std
movq $0, 5*8(%rsp) /* clear "NMI executing" */
+ /*
+ * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
+ * NMI in kernel after user state is restored. For an unprivileged user
+ * these conditions are hard to meet.
+ */
+
/*
* iretq reads the "iret" frame and exits the NMI stack in a
* single instruction. We are returning to kernel mode, so this
@@ -1520,6 +1530,7 @@ SYM_CODE_START(ignore_sysret)
UNWIND_HINT_END_OF_STACK
ENDBR
mov $-ENOSYS, %eax
+ CLEAR_CPU_BUFFERS
sysretl
SYM_CODE_END(ignore_sysret)
#endif
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 70150298f8bd..245697eb8485 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -271,6 +271,7 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
xorl %r9d, %r9d
xorl %r10d, %r10d
swapgs
+ CLEAR_CPU_BUFFERS
sysretl
SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
--
2.34.1
* [PATCH v4 3/6] x86/entry_32: Add VERW just before userspace transition
2023-10-27 14:38 [PATCH v4 0/6] Delay VERW Pawan Gupta
2023-10-27 14:38 ` [PATCH v4 1/6] x86/bugs: Add asm helpers for executing VERW Pawan Gupta
2023-10-27 14:38 ` [PATCH v4 2/6] x86/entry_64: Add VERW just before userspace transition Pawan Gupta
@ 2023-10-27 14:38 ` Pawan Gupta
2023-10-27 14:38 ` [PATCH v4 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key Pawan Gupta
` (3 subsequent siblings)
6 siblings, 0 replies; 23+ messages in thread
From: Pawan Gupta @ 2023-10-27 14:38 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck,
ak, tim.c.chen, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel, linux-doc, kvm, Alyssa Milburn, Daniel Sneddon,
antonio.gomez.iglesias, Greg Kroah-Hartman, Pawan Gupta
As done for entry_64, add support for executing VERW late in the
exit-to-user path for 32-bit mode.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/entry/entry_32.S | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 6e6af42e044a..74a4358c7f45 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -885,6 +885,7 @@ SYM_FUNC_START(entry_SYSENTER_32)
BUG_IF_WRONG_CR3 no_user_check=1
popfl
popl %eax
+ CLEAR_CPU_BUFFERS
/*
* Return back to the vDSO, which will pop ecx and edx.
@@ -954,6 +955,7 @@ restore_all_switch_stack:
/* Restore user state */
RESTORE_REGS pop=4 # skip orig_eax/error_code
+ CLEAR_CPU_BUFFERS
.Lirq_return:
/*
* ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
@@ -1146,6 +1148,7 @@ SYM_CODE_START(asm_exc_nmi)
/* Not on SYSENTER stack. */
call exc_nmi
+ CLEAR_CPU_BUFFERS
jmp .Lnmi_return
.Lnmi_from_sysenter_stack:
--
2.34.1
* [PATCH v4 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key
2023-10-27 14:38 [PATCH v4 0/6] Delay VERW Pawan Gupta
` (2 preceding siblings ...)
2023-10-27 14:38 ` [PATCH v4 3/6] x86/entry_32: " Pawan Gupta
@ 2023-10-27 14:38 ` Pawan Gupta
2023-12-01 19:59 ` Josh Poimboeuf
2023-10-27 14:39 ` [PATCH v4 5/6] KVM: VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH Pawan Gupta
` (2 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Pawan Gupta @ 2023-10-27 14:38 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck,
ak, tim.c.chen, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel, linux-doc, kvm, Alyssa Milburn, Daniel Sneddon,
antonio.gomez.iglesias, Greg Kroah-Hartman, Pawan Gupta
The VERW mitigation at exit-to-user is enabled via a static branch
mds_user_clear. This static branch is never toggled after boot, and can
be safely replaced with an ALTERNATIVE() which is convenient to use in
asm.
Switch to ALTERNATIVE() to use the VERW mitigation late in exit-to-user
path. Also remove the now redundant VERW in exc_nmi() and
arch_exit_to_user_mode().
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
Documentation/arch/x86/mds.rst | 38 +++++++++++++++++++++++++-----------
arch/x86/include/asm/entry-common.h | 1 -
arch/x86/include/asm/nospec-branch.h | 12 ------------
arch/x86/kernel/cpu/bugs.c | 15 ++++++--------
arch/x86/kernel/nmi.c | 2 --
arch/x86/kvm/vmx/vmx.c | 2 +-
6 files changed, 34 insertions(+), 36 deletions(-)
diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
index e73fdff62c0a..a5c5091b9ccd 100644
--- a/Documentation/arch/x86/mds.rst
+++ b/Documentation/arch/x86/mds.rst
@@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing:
mds_clear_cpu_buffers()
+The macro CLEAR_CPU_BUFFERS is meant to be used in asm code late in the
+exit-to-user path. It works in cases where GPRs can't be clobbered.
+
The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
(idle) transitions.
@@ -138,17 +141,30 @@ Mitigation points
When transitioning from kernel to user space the CPU buffers are flushed
on affected CPUs when the mitigation is not disabled on the kernel
- command line. The migitation is enabled through the static key
- mds_user_clear.
-
- The mitigation is invoked in prepare_exit_to_usermode() which covers
- all but one of the kernel to user space transitions. The exception
- is when we return from a Non Maskable Interrupt (NMI), which is
- handled directly in do_nmi().
-
- (The reason that NMI is special is that prepare_exit_to_usermode() can
- enable IRQs. In NMI context, NMIs are blocked, and we don't want to
- enable IRQs with NMIs blocked.)
+ command line. The mitigation is enabled through the feature flag
+ X86_FEATURE_CLEAR_CPU_BUF.
+
+ The mitigation is invoked just before transitioning to userspace after
+ user registers are restored. This is done to minimize the window in
+ which kernel data could be accessed after VERW, e.g. via an NMI that
+ arrives after VERW executes.
+
+ **Corner case not handled**
+ Interrupts returning to the kernel don't clear CPU buffers, since the
+ exit-to-user path is expected to do that anyway. But there could be
+ a case where an NMI is generated in the kernel after the exit-to-user
+ path has cleared the buffers. This case is not handled, and an NMI
+ returning to the kernel doesn't clear CPU buffers, because:
+
+ 1. It is rare to get an NMI after VERW, but before returning to userspace.
+ 2. For an unprivileged user, there is no known way to make such an NMI
+ less rare or to target it.
+ 3. It would take a large number of these precisely-timed NMIs to mount
+ an actual attack. There's presumably not enough bandwidth.
+ 4. The NMI in question occurs after VERW, i.e. when user state is
+ restored and most interesting data is already scrubbed. What's left
+ is only the data that the NMI touches, and that may or may not be of
+ any interest.
2. C-State transition
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index ce8f50192ae3..7e523bb3d2d3 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -91,7 +91,6 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
static __always_inline void arch_exit_to_user_mode(void)
{
- mds_user_clear_cpu_buffers();
amd_clear_divider();
}
#define arch_exit_to_user_mode arch_exit_to_user_mode
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 005e69f93115..12b8e86678bf 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -553,7 +553,6 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
-DECLARE_STATIC_KEY_FALSE(mds_user_clear);
DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
@@ -585,17 +584,6 @@ static __always_inline void mds_clear_cpu_buffers(void)
asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
}
-/**
- * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
- *
- * Clear CPU buffers if the corresponding static key is enabled
- */
-static __always_inline void mds_user_clear_cpu_buffers(void)
-{
- if (static_branch_likely(&mds_user_clear))
- mds_clear_cpu_buffers();
-}
-
/**
* mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
*
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 10499bcd4e39..00aab0c0937f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -111,9 +111,6 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
/* Control unconditional IBPB in switch_mm() */
DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
-/* Control MDS CPU buffer clear before returning to user space */
-DEFINE_STATIC_KEY_FALSE(mds_user_clear);
-EXPORT_SYMBOL_GPL(mds_user_clear);
/* Control MDS CPU buffer clear before idling (halt, mwait) */
DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
EXPORT_SYMBOL_GPL(mds_idle_clear);
@@ -252,7 +249,7 @@ static void __init mds_select_mitigation(void)
if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
mds_mitigation = MDS_MITIGATION_VMWERV;
- static_branch_enable(&mds_user_clear);
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
(mds_nosmt || cpu_mitigations_auto_nosmt()))
@@ -356,7 +353,7 @@ static void __init taa_select_mitigation(void)
* For guests that can't determine whether the correct microcode is
* present on host, enable the mitigation for UCODE_NEEDED as well.
*/
- static_branch_enable(&mds_user_clear);
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
if (taa_nosmt || cpu_mitigations_auto_nosmt())
cpu_smt_disable(false);
@@ -424,7 +421,7 @@ static void __init mmio_select_mitigation(void)
*/
if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
boot_cpu_has(X86_FEATURE_RTM)))
- static_branch_enable(&mds_user_clear);
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
else
static_branch_enable(&mmio_stale_data_clear);
@@ -484,12 +481,12 @@ static void __init md_clear_update_mitigation(void)
if (cpu_mitigations_off())
return;
- if (!static_key_enabled(&mds_user_clear))
+ if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
goto out;
/*
- * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
- * mitigation, if necessary.
+ * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
+ * Stale Data mitigation, if necessary.
*/
if (mds_mitigation == MDS_MITIGATION_OFF &&
boot_cpu_has_bug(X86_BUG_MDS)) {
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index a0c551846b35..ebfff8dca661 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -551,8 +551,6 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
if (this_cpu_dec_return(nmi_state))
goto nmi_restart;
- if (user_mode(regs))
- mds_user_clear_cpu_buffers();
if (IS_ENABLED(CONFIG_NMI_CHECK_CPU)) {
WRITE_ONCE(nsp->idt_seq, nsp->idt_seq + 1);
WARN_ON_ONCE(nsp->idt_seq & 0x1);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 72e3943f3693..24e8694b83fc 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7229,7 +7229,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
/* L1D Flush includes CPU buffer clear to mitigate MDS */
if (static_branch_unlikely(&vmx_l1d_should_flush))
vmx_l1d_flush(vcpu);
- else if (static_branch_unlikely(&mds_user_clear))
+ else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
mds_clear_cpu_buffers();
else if (static_branch_unlikely(&mmio_stale_data_clear) &&
kvm_arch_has_assigned_device(vcpu->kvm))
--
2.34.1
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [PATCH v4 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key
2023-10-27 14:38 ` [PATCH v4 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key Pawan Gupta
@ 2023-12-01 19:59 ` Josh Poimboeuf
2023-12-20 1:20 ` Pawan Gupta
0 siblings, 1 reply; 23+ messages in thread
From: Josh Poimboeuf @ 2023-12-01 19:59 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman
On Fri, Oct 27, 2023 at 07:38:59AM -0700, Pawan Gupta wrote:
> The VERW mitigation at exit-to-user is enabled via a static branch
> mds_user_clear. This static branch is never toggled after boot, and can
> be safely replaced with an ALTERNATIVE() which is convenient to use in
> asm.
>
> Switch to ALTERNATIVE() to use the VERW mitigation late in exit-to-user
> path. Also remove the now redundant VERW in exc_nmi() and
> arch_exit_to_user_mode().
>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---
> Documentation/arch/x86/mds.rst | 38 +++++++++++++++++++++++++-----------
> arch/x86/include/asm/entry-common.h | 1 -
> arch/x86/include/asm/nospec-branch.h | 12 ------------
> arch/x86/kernel/cpu/bugs.c | 15 ++++++--------
> arch/x86/kernel/nmi.c | 2 --
> arch/x86/kvm/vmx/vmx.c | 2 +-
> 6 files changed, 34 insertions(+), 36 deletions(-)
>
> diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
> index e73fdff62c0a..a5c5091b9ccd 100644
> --- a/Documentation/arch/x86/mds.rst
> +++ b/Documentation/arch/x86/mds.rst
> @@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing:
>
> mds_clear_cpu_buffers()
>
> +Also macro CLEAR_CPU_BUFFERS is meant to be used in ASM late in exit-to-user
> +path. This macro works for cases where GPRs can't be clobbered.
What does this last sentence mean? Is it trying to say that the macro
doesn't clobber registers (other than ZF)?
--
Josh
* Re: [PATCH v4 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key
2023-12-01 19:59 ` Josh Poimboeuf
@ 2023-12-20 1:20 ` Pawan Gupta
0 siblings, 0 replies; 23+ messages in thread
From: Pawan Gupta @ 2023-12-20 1:20 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman
On Fri, Dec 01, 2023 at 11:59:54AM -0800, Josh Poimboeuf wrote:
> On Fri, Oct 27, 2023 at 07:38:59AM -0700, Pawan Gupta wrote:
> > The VERW mitigation at exit-to-user is enabled via a static branch
> > mds_user_clear. This static branch is never toggled after boot, and can
> > be safely replaced with an ALTERNATIVE() which is convenient to use in
> > asm.
> >
> > Switch to ALTERNATIVE() to use the VERW mitigation late in exit-to-user
> > path. Also remove the now redundant VERW in exc_nmi() and
> > arch_exit_to_user_mode().
> >
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > ---
> > Documentation/arch/x86/mds.rst | 38 +++++++++++++++++++++++++-----------
> > arch/x86/include/asm/entry-common.h | 1 -
> > arch/x86/include/asm/nospec-branch.h | 12 ------------
> > arch/x86/kernel/cpu/bugs.c | 15 ++++++--------
> > arch/x86/kernel/nmi.c | 2 --
> > arch/x86/kvm/vmx/vmx.c | 2 +-
> > 6 files changed, 34 insertions(+), 36 deletions(-)
> >
> > diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
> > index e73fdff62c0a..a5c5091b9ccd 100644
> > --- a/Documentation/arch/x86/mds.rst
> > +++ b/Documentation/arch/x86/mds.rst
> > @@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing:
> >
> > mds_clear_cpu_buffers()
> >
> > +Also macro CLEAR_CPU_BUFFERS is meant to be used in ASM late in exit-to-user
> > +path. This macro works for cases where GPRs can't be clobbered.
>
> What does this last sentence mean? Is it trying to say that the macro
> doesn't clobber registers (other than ZF)?
Yes. I will rephrase it to say that the macro doesn't clobber registers
other than ZF.
* [PATCH v4 5/6] KVM: VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH
2023-10-27 14:38 [PATCH v4 0/6] Delay VERW Pawan Gupta
` (3 preceding siblings ...)
2023-10-27 14:38 ` [PATCH v4 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key Pawan Gupta
@ 2023-10-27 14:39 ` Pawan Gupta
2023-10-27 14:39 ` [PATCH v4 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation Pawan Gupta
2023-10-27 14:48 ` [PATCH v4 0/6] Delay VERW Borislav Petkov
6 siblings, 0 replies; 23+ messages in thread
From: Pawan Gupta @ 2023-10-27 14:39 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck,
ak, tim.c.chen, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel, linux-doc, kvm, Alyssa Milburn, Daniel Sneddon,
antonio.gomez.iglesias, Greg Kroah-Hartman, Pawan Gupta
From: Sean Christopherson <seanjc@google.com>
Use EFLAGS.CF instead of EFLAGS.ZF to track whether to use VMRESUME versus
VMLAUNCH. Freeing up EFLAGS.ZF will allow doing VERW, which clobbers ZF,
for MDS mitigations as late as possible without needing to duplicate VERW
for both paths.
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/kvm/vmx/run_flags.h | 7 +++++--
arch/x86/kvm/vmx/vmenter.S | 6 +++---
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
index edc3f16cc189..6a9bfdfbb6e5 100644
--- a/arch/x86/kvm/vmx/run_flags.h
+++ b/arch/x86/kvm/vmx/run_flags.h
@@ -2,7 +2,10 @@
#ifndef __KVM_X86_VMX_RUN_FLAGS_H
#define __KVM_X86_VMX_RUN_FLAGS_H
-#define VMX_RUN_VMRESUME (1 << 0)
-#define VMX_RUN_SAVE_SPEC_CTRL (1 << 1)
+#define VMX_RUN_VMRESUME_SHIFT 0
+#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
+
+#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
+#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
#endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index be275a0410a8..b3b13ec04bac 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -139,7 +139,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
mov (%_ASM_SP), %_ASM_AX
/* Check if vmlaunch or vmresume is needed */
- test $VMX_RUN_VMRESUME, %ebx
+ bt $VMX_RUN_VMRESUME_SHIFT, %ebx
/* Load guest registers. Don't clobber flags. */
mov VCPU_RCX(%_ASM_AX), %_ASM_CX
@@ -161,8 +161,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
/* Load guest RAX. This kills the @regs pointer! */
mov VCPU_RAX(%_ASM_AX), %_ASM_AX
- /* Check EFLAGS.ZF from 'test VMX_RUN_VMRESUME' above */
- jz .Lvmlaunch
+ /* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
+ jnc .Lvmlaunch
/*
* After a successful VMRESUME/VMLAUNCH, control flow "magically"
--
2.34.1
* [PATCH v4 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation
2023-10-27 14:38 [PATCH v4 0/6] Delay VERW Pawan Gupta
` (4 preceding siblings ...)
2023-10-27 14:39 ` [PATCH v4 5/6] KVM: VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH Pawan Gupta
@ 2023-10-27 14:39 ` Pawan Gupta
2023-12-01 20:02 ` Josh Poimboeuf
2023-10-27 14:48 ` [PATCH v4 0/6] Delay VERW Borislav Petkov
6 siblings, 1 reply; 23+ messages in thread
From: Pawan Gupta @ 2023-10-27 14:39 UTC (permalink / raw)
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck,
ak, tim.c.chen, Andrew Cooper, Nikolay Borisov
Cc: linux-kernel, linux-doc, kvm, Alyssa Milburn, Daniel Sneddon,
antonio.gomez.iglesias, Greg Kroah-Hartman, Pawan Gupta
During VMentry VERW is executed to mitigate MDS. After VERW, any memory
access like a register push onto the stack may put host data in
MDS-affected CPU buffers. A guest can then use MDS to sample host data.
Although the likelihood of secrets surviving in registers at the current
VERW callsite is low, it can't be ruled out. Harden the MDS mitigation
by moving VERW late in the VMentry path.
Note that VERW for the MMIO Stale Data mitigation is unchanged, because
per-guest conditional VERW is too complex to handle that late in asm
with no GPRs available. If the CPU is also affected by MDS, VERW is
executed unconditionally late in asm regardless of the guest having
MMIO access.
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
arch/x86/kvm/vmx/vmenter.S | 3 +++
arch/x86/kvm/vmx/vmx.c | 19 ++++++++++++++-----
2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index b3b13ec04bac..139960deb736 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -161,6 +161,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
/* Load guest RAX. This kills the @regs pointer! */
mov VCPU_RAX(%_ASM_AX), %_ASM_AX
+ /* Clobbers EFLAGS.ZF */
+ CLEAR_CPU_BUFFERS
+
/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
jnc .Lvmlaunch
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 24e8694b83fc..a05c6b80b06c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7226,16 +7226,24 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
guest_state_enter_irqoff();
- /* L1D Flush includes CPU buffer clear to mitigate MDS */
+ /*
+ * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
+ * mitigation for MDS is done late in VMentry and is still
+ * executed in spite of L1D Flush. This is because an extra VERW
+ * should not matter much after the big hammer L1D Flush.
+ */
if (static_branch_unlikely(&vmx_l1d_should_flush))
vmx_l1d_flush(vcpu);
- else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
- mds_clear_cpu_buffers();
else if (static_branch_unlikely(&mmio_stale_data_clear) &&
kvm_arch_has_assigned_device(vcpu->kvm))
mds_clear_cpu_buffers();
- vmx_disable_fb_clear(vmx);
+ /*
+ * Optimize the latency of VERW in guests for MMIO mitigation. Skip
+ * the optimization when MDS mitigation(later in asm) is enabled.
+ */
+ if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
+ vmx_disable_fb_clear(vmx);
if (vcpu->arch.cr2 != native_read_cr2())
native_write_cr2(vcpu->arch.cr2);
@@ -7248,7 +7256,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
vmx->idt_vectoring_info = 0;
- vmx_enable_fb_clear(vmx);
+ if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
+ vmx_enable_fb_clear(vmx);
if (unlikely(vmx->fail)) {
vmx->exit_reason.full = 0xdead;
--
2.34.1
* Re: [PATCH v4 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation
2023-10-27 14:39 ` [PATCH v4 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation Pawan Gupta
@ 2023-12-01 20:02 ` Josh Poimboeuf
2023-12-20 1:25 ` Pawan Gupta
0 siblings, 1 reply; 23+ messages in thread
From: Josh Poimboeuf @ 2023-12-01 20:02 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman
On Fri, Oct 27, 2023 at 07:39:12AM -0700, Pawan Gupta wrote:
> - vmx_disable_fb_clear(vmx);
> + /*
> + * Optimize the latency of VERW in guests for MMIO mitigation. Skip
> + * the optimization when MDS mitigation(later in asm) is enabled.
> + */
> + if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
> + vmx_disable_fb_clear(vmx);
>
> if (vcpu->arch.cr2 != native_read_cr2())
> native_write_cr2(vcpu->arch.cr2);
> @@ -7248,7 +7256,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
>
> vmx->idt_vectoring_info = 0;
>
> - vmx_enable_fb_clear(vmx);
> + if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
> + vmx_enable_fb_clear(vmx);
>
It may be cleaner to instead check X86_FEATURE_CLEAR_CPU_BUF when
setting vmx->disable_fb_clear in the first place, in
vmx_update_fb_clear_dis().
--
Josh
* Re: [PATCH v4 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation
2023-12-01 20:02 ` Josh Poimboeuf
@ 2023-12-20 1:25 ` Pawan Gupta
0 siblings, 0 replies; 23+ messages in thread
From: Pawan Gupta @ 2023-12-20 1:25 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman
On Fri, Dec 01, 2023 at 12:02:47PM -0800, Josh Poimboeuf wrote:
> On Fri, Oct 27, 2023 at 07:39:12AM -0700, Pawan Gupta wrote:
> > - vmx_disable_fb_clear(vmx);
> > + /*
> > + * Optimize the latency of VERW in guests for MMIO mitigation. Skip
> > + * the optimization when MDS mitigation(later in asm) is enabled.
> > + */
> > + if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
> > + vmx_disable_fb_clear(vmx);
> >
> > if (vcpu->arch.cr2 != native_read_cr2())
> > native_write_cr2(vcpu->arch.cr2);
> > @@ -7248,7 +7256,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> >
> > vmx->idt_vectoring_info = 0;
> >
> > - vmx_enable_fb_clear(vmx);
> > + if (!cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
> > + vmx_enable_fb_clear(vmx);
> >
>
> It may be cleaner to instead check X86_FEATURE_CLEAR_CPU_BUF when
> setting vmx->disable_fb_clear in the first place, in
> vmx_update_fb_clear_dis().
Right. Thanks for the review.
* Re: [PATCH v4 0/6] Delay VERW
2023-10-27 14:38 [PATCH v4 0/6] Delay VERW Pawan Gupta
` (5 preceding siblings ...)
2023-10-27 14:39 ` [PATCH v4 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation Pawan Gupta
@ 2023-10-27 14:48 ` Borislav Petkov
2023-10-27 15:05 ` Pawan Gupta
6 siblings, 1 reply; 23+ messages in thread
From: Borislav Petkov @ 2023-10-27 14:48 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman, Alyssa Milburn, Dave Hansen
On Fri, Oct 27, 2023 at 07:38:34AM -0700, Pawan Gupta wrote:
> v4:
Why are you spamming people with your patchset? You've sent it 4 times
in a week:
Oct 20 Pawan Gupta ( : 75|) [PATCH 0/6] Delay VERW
Oct 24 Pawan Gupta ( :7.3K|) [PATCH v2 0/6] Delay VERW
Oct 25 Pawan Gupta ( :7.5K|) [PATCH v3 0/6] Delay VERW
Oct 27 Pawan Gupta ( :8.8K|) [PATCH v4 0/6] Delay VERW
Is this something urgent or can you take your time like everyone else?
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v4 0/6] Delay VERW
2023-10-27 14:48 ` [PATCH v4 0/6] Delay VERW Borislav Petkov
@ 2023-10-27 15:05 ` Pawan Gupta
2023-10-27 15:12 ` Borislav Petkov
0 siblings, 1 reply; 23+ messages in thread
From: Pawan Gupta @ 2023-10-27 15:05 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman, Alyssa Milburn, Dave Hansen
On Fri, Oct 27, 2023 at 04:48:48PM +0200, Borislav Petkov wrote:
> On Fri, Oct 27, 2023 at 07:38:34AM -0700, Pawan Gupta wrote:
> > v4:
>
> Why are you spamming people with your patchset? You've sent it 4 times
> in a week:
>
> Oct 20 Pawan Gupta ( : 75|) [PATCH 0/6] Delay VERW
> Oct 24 Pawan Gupta ( :7.3K|) [PATCH v2 0/6] Delay VERW
> Oct 25 Pawan Gupta ( :7.5K|) [PATCH v3 0/6] Delay VERW
> Oct 27 Pawan Gupta ( :8.8K|) [PATCH v4 0/6] Delay VERW
>
> Is this something urgent or can you take your time like everyone else?
I am going on a long vacation next week, I won't be working for the rest
of the year. So I wanted to get this in a good shape quickly. This
patchset addresses some security issues (although theoretical). So there
is some sense of urgency. Sorry for spamming, I'll take you off the To:
list.
* Re: [PATCH v4 0/6] Delay VERW
2023-10-27 15:05 ` Pawan Gupta
@ 2023-10-27 15:12 ` Borislav Petkov
2023-10-27 15:32 ` Pawan Gupta
0 siblings, 1 reply; 23+ messages in thread
From: Borislav Petkov @ 2023-10-27 15:12 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman, Alyssa Milburn, Dave Hansen
On Fri, Oct 27, 2023 at 08:05:35AM -0700, Pawan Gupta wrote:
> I am going on a long vacation next week, I won't be working for the rest
> of the year. So I wanted to get this in a good shape quickly. This
> patchset addresses some security issues (although theoretical). So there
> is some sense of urgency. Sorry for spamming, I'll take you off the To:
> list.
Even if you're leaving for vacation, I'm sure some colleague of yours or
dhansen will take over this for you. So there's no need to keep sending
this every day. Imagine everyone who leaves for vacation would start
doing that...
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v4 0/6] Delay VERW
2023-10-27 15:12 ` Borislav Petkov
@ 2023-10-27 15:32 ` Pawan Gupta
2023-10-27 15:36 ` Borislav Petkov
2023-10-27 15:38 ` Greg Kroah-Hartman
0 siblings, 2 replies; 23+ messages in thread
From: Pawan Gupta @ 2023-10-27 15:32 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman, Alyssa Milburn, Dave Hansen
On Fri, Oct 27, 2023 at 05:12:26PM +0200, Borislav Petkov wrote:
> On Fri, Oct 27, 2023 at 08:05:35AM -0700, Pawan Gupta wrote:
> > I am going on a long vacation next week, I won't be working for the rest
> > of the year. So I wanted to get this in a good shape quickly. This
> > patchset addresses some security issues (although theoretical). So there
> > is some sense of urgency. Sorry for spamming, I'll take you off the To:
> > list.
>
> Even if you're leaving for vacation, I'm sure some colleague of yours or
> dhansen will take over this for you. So there's no need to keep sending
> this every day. Imagine everyone who leaves for vacation would start
> doing that...
I can imagine the amount of email maintainers get. I'll take care of
this in the future. But it's good to get some idea of how much is too
much, especially for a security issue?
* Re: [PATCH v4 0/6] Delay VERW
2023-10-27 15:32 ` Pawan Gupta
@ 2023-10-27 15:36 ` Borislav Petkov
2023-10-27 15:38 ` Greg Kroah-Hartman
1 sibling, 0 replies; 23+ messages in thread
From: Borislav Petkov @ 2023-10-27 15:36 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, Jonathan Corbet,
Sean Christopherson, Paolo Bonzini, tony.luck, ak, tim.c.chen,
Andrew Cooper, Nikolay Borisov, linux-kernel, linux-doc, kvm,
Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias,
Greg Kroah-Hartman, Alyssa Milburn, Dave Hansen
On Fri, Oct 27, 2023 at 08:32:42AM -0700, Pawan Gupta wrote:
> I can imagine the amount emails maintainers get. I'll take care of this
> in future. But, its good to get some idea on how much is too much,
> specially for a security issue?
If it ain't really urgent, once a week like every other patchset. We
have all this documented in
Documentation/process/submitting-patches.rst
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v4 0/6] Delay VERW
2023-10-27 15:32 ` Pawan Gupta
2023-10-27 15:36 ` Borislav Petkov
@ 2023-10-27 15:38 ` Greg Kroah-Hartman
1 sibling, 0 replies; 23+ messages in thread
From: Greg Kroah-Hartman @ 2023-10-27 15:38 UTC (permalink / raw)
To: Pawan Gupta
Cc: Borislav Petkov, Thomas Gleixner, Ingo Molnar, Dave Hansen, x86,
H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck,
ak, tim.c.chen, Andrew Cooper, Nikolay Borisov, linux-kernel,
linux-doc, kvm, Alyssa Milburn, Daniel Sneddon,
antonio.gomez.iglesias, Alyssa Milburn, Dave Hansen
On Fri, Oct 27, 2023 at 08:32:42AM -0700, Pawan Gupta wrote:
> On Fri, Oct 27, 2023 at 05:12:26PM +0200, Borislav Petkov wrote:
> > On Fri, Oct 27, 2023 at 08:05:35AM -0700, Pawan Gupta wrote:
> > > I am going on a long vacation next week, I won't be working for the rest
> > > of the year. So I wanted to get this in a good shape quickly. This
> > > patchset addresses some security issues (although theoretical). So there
> > > is some sense of urgency. Sorry for spamming, I'll take you off the To:
> > > list.
> >
> > Even if you're leaving for vacation, I'm sure some colleague of yours or
> > dhansen will take over this for you. So there's no need to keep sending
> > this every day. Imagine everyone who leaves for vacation would start
> > doing that...
>
> I can imagine the amount emails maintainers get. I'll take care of this
> in future. But, its good to get some idea on how much is too much,
> specially for a security issue?
You said it wasn't a security issue (theoretical?)
And are we supposed to drop everything for such things? Again, think of
the people who are on the other end of your patches please...
greg k-h