From: Sean Christopherson <seanjc@google.com>
To: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Josh Poimboeuf <jpoimboe@kernel.org>,
	Andy Lutomirski <luto@kernel.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Paolo Bonzini <pbonzini@redhat.com>,
	tony.luck@intel.com, ak@linux.intel.com,
	tim.c.chen@linux.intel.com, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, kvm@vger.kernel.org,
	Alyssa Milburn <alyssa.milburn@linux.intel.com>,
	Daniel Sneddon <daniel.sneddon@linux.intel.com>,
	antonio.gomez.iglesias@linux.intel.com
Subject: Re: [PATCH  6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation
Date: Fri, 20 Oct 2023 15:55:07 -0700
Message-ID: <ZTMFS8I2s8EroSNe@google.com>
In-Reply-To: <20231020-delay-verw-v1-6-cff54096326d@linux.intel.com>

On Fri, Oct 20, 2023, Pawan Gupta wrote:
> During VMentry, VERW is executed to mitigate MDS. After VERW, any memory
> access such as a register push onto the stack may put host data back into
> MDS-affected CPU buffers. A guest can then use MDS to sample host data.
> 
> Although the likelihood of secrets surviving in registers at the current
> VERW callsite is low, it can't be ruled out. Harden the MDS mitigation by
> moving the VERW execution later in the VMentry path.
> 
> Note that the VERW for the MMIO Stale Data mitigation is unchanged, because
> the per-guest conditional VERW is not easy to handle that late in asm with
> no GPRs available. If the CPU is also affected by MDS, VERW is
> unconditionally executed late in asm regardless of whether the guest has
> MMIO access.
> 
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---
>  arch/x86/kvm/vmx/vmenter.S |  9 +++++++++
>  arch/x86/kvm/vmx/vmx.c     | 10 +++++++---
>  2 files changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> index be275a0410a8..efa716cf4727 100644
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -1,6 +1,7 @@
>  /* SPDX-License-Identifier: GPL-2.0 */
>  #include <linux/linkage.h>
>  #include <asm/asm.h>
> +#include <asm/segment.h>
>  #include <asm/bitsperlong.h>
>  #include <asm/kvm_vcpu_regs.h>
>  #include <asm/nospec-branch.h>
> @@ -31,6 +32,8 @@
>  #define VCPU_R15	__VCPU_REGS_R15 * WORD_SIZE
>  #endif
>  
> +#define GUEST_CLEAR_CPU_BUFFERS		USER_CLEAR_CPU_BUFFERS
> +
>  .macro VMX_DO_EVENT_IRQOFF call_insn call_target
>  	/*
>  	 * Unconditionally create a stack frame, getting the correct RSP on the
> @@ -177,10 +180,16 @@ SYM_FUNC_START(__vmx_vcpu_run)
>   * the 'vmx_vmexit' label below.
>   */
>  .Lvmresume:
> +	/* Mitigate CPU data sampling attacks, e.g. MDS */
> +	GUEST_CLEAR_CPU_BUFFERS

I have a very hard time believing that it's worth duplicating the mitigation
for VMRESUME vs. VMLAUNCH just to land it after a Jcc.

 3b1:   48 8b 00                mov    (%rax),%rax
 3b4:   74 18                   je     3ce <__vmx_vcpu_run+0x9e>
 3b6:   eb 0e                   jmp    3c6 <__vmx_vcpu_run+0x96>
 3b8:   0f 00 2d 05 00 00 00    verw   0x5(%rip)        # 3c4 <__vmx_vcpu_run+0x94>
 3bf:   0f 1f 80 00 00 18 00    nopl   0x180000(%rax)
 3c6:   0f 01 c3                vmresume
 3c9:   e9 c9 00 00 00          jmp    497 <vmx_vmexit+0xa7>
 3ce:   eb 0e                   jmp    3de <__vmx_vcpu_run+0xae>
 3d0:   0f 00 2d 05 00 00 00    verw   0x5(%rip)        # 3dc <__vmx_vcpu_run+0xac>
 3d7:   0f 1f 80 00 00 18 00    nopl   0x180000(%rax)
 3de:   0f 01 c2                vmlaunch
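
To illustrate (purely hypothetical sketch, names approximated from vmenter.S,
not something this patch provides): VERW clobbers EFLAGS.ZF, which is
presumably why it can't simply be hoisted above the existing TEST/JZ.  But if
the VMRESUME check were carried in a flag VERW leaves alone, e.g. CF via BT
instead of ZF via TEST, a single VERW emitted before the Jcc would cover both
paths:

	/* Hypothetical: carry the VMRESUME check in CF, which VERW preserves */
	bt	$VMX_RUN_VMRESUME_SHIFT, %ebx

	/* Load guest RAX.  This kills the @regs pointer! */
	mov	VCPU_RAX(%_ASM_AX), %_ASM_AX

	/* Mitigate CPU data sampling attacks, e.g. MDS (clobbers ZF only) */
	GUEST_CLEAR_CPU_BUFFERS

	jnc	.Lvmlaunch

.Lvmresume:
	vmresume
	jmp	.Lvmfail

.Lvmlaunch:
	vmlaunch
	jmp	.Lvmfail
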

Also, would it be better to put the NOP first?  Or even better, out of line?
It'd be quite hilarious if the CPU pulled a stupid and speculated on the operand
of the NOP, i.e. if the user/guest controlled RAX allowed for pulling in data
after the VERW.
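
On the out-of-line idea, a minimal sketch (label name, section placement, and
the macro name are all assumptions on my part): keep the word that VERW
dereferences at its own label, so nothing located after the VERW instruction
itself is reachable through its memory operand:

	/* Hypothetical out-of-line operand for VERW, illustration only */
	.pushsection .rodata
	.align 2
.Lverw_arg:
	.word	__KERNEL_DS	/* valid, writable selector for VERW to read */
	.popsection

.macro GUEST_CLEAR_CPU_BUFFERS_OOL
	/* Hypothetical macro; gate with the series' ALTERNATIVE as needed */
	verw	.Lverw_arg(%rip)
.endm

That would also moot the NOP-ordering question, since there would be no
in-line operand bytes left to speculate on.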

Thread overview: 42+ messages
2023-10-20 20:44 [PATCH 0/6] Delay VERW Pawan Gupta
2023-10-20 20:44 ` [PATCH 1/6] x86/bugs: Add asm helpers for executing VERW Pawan Gupta
2023-10-20 23:13   ` Sean Christopherson
2023-10-21  1:00     ` Pawan Gupta
2023-10-20 23:55   ` [RESEND][PATCH " Andrew Cooper
2023-10-21  1:18     ` Pawan Gupta
2023-10-21  1:33       ` Andrew Cooper
2023-10-21  2:21         ` Pawan Gupta
2023-10-23 18:08           ` Josh Poimboeuf
2023-10-23 19:09             ` Pawan Gupta
2023-10-25  6:28     ` Pawan Gupta
2023-10-25  7:22       ` Peter Zijlstra
2023-10-25  7:52         ` Andrew Cooper
2023-10-25  8:02           ` Peter Zijlstra
2023-10-25 15:27         ` Pawan Gupta
     [not found]   ` <6439a094-23a6-4de3-aa41-bd033163e044@citrix.com>
2023-10-22 16:16     ` [PATCH " Peter Zijlstra
2023-10-20 20:45 ` [PATCH 2/6] x86/entry_64: Add VERW just before userspace transition Pawan Gupta
2023-10-23 18:22   ` Josh Poimboeuf
2023-10-23 19:13     ` Pawan Gupta
2023-10-23 19:17     ` Dave Hansen
2023-10-23 18:35   ` Josh Poimboeuf
2023-10-23 21:04     ` Pawan Gupta
2023-10-23 21:47       ` Josh Poimboeuf
2023-10-23 22:30         ` Pawan Gupta
2023-10-23 22:45           ` Dave Hansen
2023-10-24  0:00             ` Pawan Gupta
2023-10-20 20:45 ` [PATCH 3/6] x86/entry_32: " Pawan Gupta
2023-10-20 23:49   ` Andi Kleen
2023-10-21  1:28     ` Pawan Gupta
2023-10-20 20:45 ` [PATCH 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key Pawan Gupta
2023-10-23 18:48   ` Josh Poimboeuf
2023-10-23 21:09     ` Pawan Gupta
2023-10-20 20:45 ` [PATCH 5/6] x86/bugs: Cleanup mds_user_clear Pawan Gupta
2023-10-23  8:51   ` Nikolay Borisov
2023-10-23 16:06     ` Pawan Gupta
2023-10-20 20:45 ` [PATCH 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation Pawan Gupta
2023-10-20 22:55   ` Sean Christopherson [this message]
2023-10-21  0:46     ` Pawan Gupta
2023-10-23 14:58       ` Sean Christopherson
2023-10-23 17:05         ` Pawan Gupta
2023-10-23 18:56   ` Josh Poimboeuf
2023-10-23 21:17     ` Pawan Gupta
