linux-kernel.vger.kernel.org archive mirror
* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
@ 2018-01-10 16:19 Liran Alon
  2018-01-10 16:27 ` Paolo Bonzini
  2018-01-10 16:47 ` David Woodhouse
  0 siblings, 2 replies; 28+ messages in thread
From: Liran Alon @ 2018-01-10 16:19 UTC (permalink / raw)
  To: dwmw
  Cc: konrad.wilk, jmattson, x86, bp, nadav.amit, thomas.lendacky,
	aliguori, arjan, rkrcmar, pbonzini, linux-kernel, kvm


----- dwmw@amazon.co.uk wrote:

> On Wed, 2018-01-10 at 10:41 -0500, Konrad Rzeszutek Wilk wrote:
> > On Wed, Jan 10, 2018 at 03:28:43PM +0100, Paolo Bonzini wrote:
> > > On 10/01/2018 15:06, Arjan van de Ven wrote:
> > > > On 1/10/2018 5:20 AM, Paolo Bonzini wrote:
> > > >> * a simple specification that does "IBRS=1 blocks indirect branch
> > > >> prediction altogether" would actually satisfy the specification just as
> > > >> well, and it would be nice to know if that's what the processor actually
> > > >> does.
> > > > 
> > > > it doesn't exactly, not for all.
> > > > 
> > > > so you really do need to write ibrs again.
> > > 
> > > Okay, so "always set IBRS=1" does *not* protect against variant 2.  Thanks,
> > 
> > And what is the point of this "always set IBRS=1" then? Are there
> > some other things lurking in the shadows?
> 
> Yes. *FUTURE* CPUs will have a mode where you can just set IBRS and
> leave it set for ever and not worry about any of this, and the
> performance won't even suck.
> 
> Quite why it's still an option you have to set in an MSR, and not just
> a feature bit that they advertise and do it unconditionally, I have no
> idea. But apparently that's the plan.
> 
> But no current hardware will do this; they've done the best they can do
> with microcode already.

I'm still a bit confused here (maybe I'm not the only one), but I have the following pending questions:
(I really hope someone from Intel will clarify these)

(1) On VMEntry, Intel recommends just restoring SPEC_CTRL to the guest value (using WRMSR or the MSR save/load list) and that's it. As I previously said to Jim, I am missing a mechanism responsible for hiding the host's BHB & RSB from the guest. Therefore, the guest still has the possibility to info-leak the host's kernel module addresses (kvm-intel.ko / kvm.ko / vmlinux).
* a) As I currently understand things, the only available solution to this is to issue an IBPB and stuff the RSB on every VMEntry before resuming the guest (see the rough sketch below). The IBPB clears the host's BTB/BHB and the RSB stuffing overwrites all of the host's RSB entries.
* b) Does "IBRS ALL THE TIME" usage change this in any way? What is Intel's recommendation for handling this info-leak in that usage?

(2) On VMExit, Intel recommends always saving the guest SPEC_CTRL value, setting IBRS to 1 (even if it is already set by the guest) and stuffing the RSB. What exactly does this write of 1 to IBRS do?
* a) Does it keep all currently existing BTB/BHB entries created in the less-privileged prediction-mode and mark them as having been created in that less-privileged prediction-mode, such that they won't be used in the current prediction-mode?
* b) Or does it, as Paolo has mentioned multiple times, disable the branch predictor so that it never consults the BTB/BHB at all as long as IBRS=1?
If (b) is true, why is setting IBRS=1 better than just issuing an IBPB that clears all entries? At least in that case the host kernel could still benefit from the branch prediction performance boost.
If (a) is true, is "IBRS ALL THE TIME" usage basically a CPU change that tags all BTB/BHB entries with the prediction-mode at creation time, and compares that tag against the current prediction-mode when the CPU attempts to use the BTB/BHB?
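
For concreteness, a rough sketch of what I mean in (1a) is below. This is
illustration only, not actual kernel code: MSR_IA32_PRED_CMD (0x49) and
PRED_CMD_IBPB (bit 0) are the architectural definitions, wrmsrl() is the
usual kernel helper, and the RSB depth of 32 is an assumption about
current parts.

	#define MSR_IA32_PRED_CMD	0x00000049
	#define PRED_CMD_IBPB		(1UL << 0)

	/* Sketch only: run after the last host indirect branch,
	 * right before VMLAUNCH/VMRESUME. */
	static inline void flush_host_branch_state(void)
	{
		int i;

		/* IBPB: drop indirect branch predictions learned in the host. */
		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);

		/*
		 * Stuff the RSB: each call pushes one entry whose target is
		 * a benign speculation trap; the add keeps the architectural
		 * stack balanced.
		 */
		for (i = 0; i < 32; i++)
			asm volatile("call 1f\n\t"
				     "2: pause; lfence; jmp 2b\n\t"
				     "1: add $8, %%rsp"
				     : : : "memory", "cc");
	}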

Regards,
-Liran

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 16:19 [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest Liran Alon
@ 2018-01-10 16:27 ` Paolo Bonzini
  2018-01-10 17:14   ` Jim Mattson
  2018-01-10 16:47 ` David Woodhouse
  1 sibling, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2018-01-10 16:27 UTC (permalink / raw)
  To: Liran Alon, dwmw
  Cc: konrad.wilk, jmattson, x86, bp, nadav.amit, thomas.lendacky,
	aliguori, arjan, rkrcmar, linux-kernel, kvm

I can answer (2) only.

On 10/01/2018 17:19, Liran Alon wrote:
> (2) On VMExit, Intel recommends always saving the guest SPEC_CTRL value,
> setting IBRS to 1 (even if it is already set by the guest) and stuffing
> the RSB. What exactly does this write of 1 to IBRS do?
> * a) Does it keep all currently existing BTB/BHB entries created in the
> less-privileged prediction-mode and mark them as having been created in
> that less-privileged prediction-mode, such that they won't be used in
> the current prediction-mode?
> * b) Or does it, as Paolo has mentioned multiple times, disable the
> branch predictor so that it never consults the BTB/BHB at all as long as IBRS=1?
> If (b) is true, why is setting IBRS=1 better than just issuing an IBPB that clears all entries? At least in that case the host kernel could still benefit from the branch prediction performance boost.

Arjan said (b) is not true on all processor generations.  But even if it
were true, setting IBRS=1 is much, much faster than IBPB.

> If (a) is true, is "IBRS ALL THE TIME" usage basically a CPU change
> that tags all BTB/BHB entries with the prediction-mode at creation
> time, and compares that tag against the current prediction-mode when
> the CPU attempts to use the BTB/BHB?

I hope so, and I hope said prediction mode includes PCID/VPID too.
While I agree with David that "we have other things to work on for now
before we support hypothetical future hardware", I'd like to make sure
that Intel gets it right...

Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 16:19 [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest Liran Alon
  2018-01-10 16:27 ` Paolo Bonzini
@ 2018-01-10 16:47 ` David Woodhouse
  1 sibling, 0 replies; 28+ messages in thread
From: David Woodhouse @ 2018-01-10 16:47 UTC (permalink / raw)
  To: Liran Alon
  Cc: konrad.wilk, jmattson, x86, bp, nadav.amit, thomas.lendacky,
	aliguori, arjan, rkrcmar, pbonzini, linux-kernel, kvm


On Wed, 2018-01-10 at 08:19 -0800, Liran Alon wrote:
> 
> (1) On VMEntry, Intel recommends just restoring SPEC_CTRL to the guest
> value (using WRMSR or the MSR save/load list) and that's it. As I
> previously said to Jim, I am missing a mechanism responsible for hiding
> the host's BHB & RSB from the guest. Therefore, the guest still has the
> possibility to info-leak the host's kernel module addresses
> (kvm-intel.ko / kvm.ko / vmlinux).

How so?

The host has the capability to attack the guest... but that's not an
interesting observation.

I'm not sure why you consider it an information leak to have host
addresses in the BTB/RSB when the guest is running; it's not like they
can be *read* from there. Perhaps you could mount a really contrived
attack where you might attempt to provide your own spec-leak code at
various candidate addresses that you think might be host BTB targets,
and validate your assumptions... but I suspect basic cache-based
observations were easier than that anyway.

I don't think this is a consideration.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 16:27 ` Paolo Bonzini
@ 2018-01-10 17:14   ` Jim Mattson
  2018-01-10 17:16     ` Paolo Bonzini
  0 siblings, 1 reply; 28+ messages in thread
From: Jim Mattson @ 2018-01-10 17:14 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Liran Alon, dwmw, Konrad Rzeszutek Wilk,
	the arch/x86 maintainers, bp, Nadav Amit, Tom Lendacky, aliguori,
	Arjan van de Ven, Radim Krčmář,
	LKML, kvm list

On Wed, Jan 10, 2018 at 8:27 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> I can answer (2) only.
>
> On 10/01/2018 17:19, Liran Alon wrote:
>> (2) On VMExit, Intel recommends always saving the guest SPEC_CTRL value,
>> setting IBRS to 1 (even if it is already set by the guest) and stuffing
>> the RSB. What exactly does this write of 1 to IBRS do?
>> * a) Does it keep all currently existing BTB/BHB entries created in the
>> less-privileged prediction-mode and mark them as having been created in
>> that less-privileged prediction-mode, such that they won't be used in
>> the current prediction-mode?
>> * b) Or does it, as Paolo has mentioned multiple times, disable the
>> branch predictor so that it never consults the BTB/BHB at all as long as IBRS=1?
>> If (b) is true, why is setting IBRS=1 better than just issuing an IBPB that clears all entries? At least in that case the host kernel could still benefit from the branch prediction performance boost.
>
> Arjan said (b) is not true on all processor generations.  But even if it
> were true, setting IBRS=1 is much, much faster than IBPB.
>
>> If (a) is true, is "IBRS ALL THE TIME" usage basically a CPU change
>> that tags all BTB/BHB entries with the prediction-mode at creation
>> time, and compares that tag against the current prediction-mode when
>> the CPU attempts to use the BTB/BHB?
>
> I hope so, and I hope said prediction mode includes PCID/VPID too.

Branch prediction entries should probably be tagged with PCID, VPID,
EP4TA, and thread ID...the same things used to tag TLB contexts.

> While I agree with David that "we have other things to work on for now
> before we support hypothetical future hardware", I'd like to make sure
> that Intel gets it right...
>
> Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 17:14   ` Jim Mattson
@ 2018-01-10 17:16     ` Paolo Bonzini
  2018-01-10 17:23       ` Nadav Amit
  0 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2018-01-10 17:16 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Liran Alon, dwmw, Konrad Rzeszutek Wilk,
	the arch/x86 maintainers, bp, Nadav Amit, Tom Lendacky, aliguori,
	Arjan van de Ven, Radim Krčmář,
	LKML, kvm list

On 10/01/2018 18:14, Jim Mattson wrote:
>>> If (a) is true, is "IBRS ALL THE TIME" usage basically a CPU change
>>> that tags all BTB/BHB entries with the prediction-mode at creation
>>> time, and compares that tag against the current prediction-mode when
>>> the CPU attempts to use the BTB/BHB?
>>
>> I hope so, and I hope said prediction mode includes PCID/VPID too.
>
> Branch prediction entries should probably be tagged with PCID, VPID,
> EP4TA, and thread ID...the same things used to tag TLB contexts.

But if so, I don't see the need for IBPB.

Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 17:16     ` Paolo Bonzini
@ 2018-01-10 17:23       ` Nadav Amit
  2018-01-10 17:32         ` Jim Mattson
  0 siblings, 1 reply; 28+ messages in thread
From: Nadav Amit @ 2018-01-10 17:23 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Jim Mattson, Liran Alon, dwmw, Konrad Rzeszutek Wilk,
	the arch/x86 maintainers, bp, Tom Lendacky, aliguori,
	Arjan van de Ven, Radim Krčmář,
	LKML, kvm list

Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 10/01/2018 18:14, Jim Mattson wrote:
>>>> If (a) is true, is "IBRS ALL THE TIME" usage basically a CPU change
>>>> that tags all BTB/BHB entries with the prediction-mode at creation
>>>> time, and compares that tag against the current prediction-mode when
>>>> the CPU attempts to use the BTB/BHB?
>>> 
>>> I hope so, and I hope said prediction mode includes PCID/VPID too.
>> 
>> Branch prediction entries should probably be tagged with PCID, VPID,
>> EP4TA, and thread ID...the same things used to tag TLB contexts.
> 
> But if so, I don't see the need for IBPB.

It is highly improbable that a microcode patch can change how prediction
entries are tagged. IIRC, microcode may change the behavior of instructions
and "assists" (e.g., TLB miss). Not much more than that.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 17:23       ` Nadav Amit
@ 2018-01-10 17:32         ` Jim Mattson
  0 siblings, 0 replies; 28+ messages in thread
From: Jim Mattson @ 2018-01-10 17:32 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Paolo Bonzini, Liran Alon, dwmw, Konrad Rzeszutek Wilk,
	the arch/x86 maintainers, bp, Tom Lendacky, aliguori,
	Arjan van de Ven, Radim Krčmář,
	LKML, kvm list

Right. For future CPUs with a well-engineered fix, no extra work
should be necessary on VM-entry. However, for current CPUs, we have to
ensure that host kernel addresses can't be deduced by the guest.
IBPB may be sufficient, but Intel's slide deck doesn't make that
clear.
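
Purely as an illustration of where such a barrier would sit (this is not
part of Paolo's patch; have_spec_ctrl is the flag his series adds, and
PRED_CMD_IBPB is the architectural bit 0 of MSR 0x49):

	/* In vmx_vcpu_run(), after the last host indirect branch and
	 * right before the VMLAUNCH/VMRESUME asm block: */
	if (have_spec_ctrl)
		wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);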

On Wed, Jan 10, 2018 at 9:23 AM, Nadav Amit <nadav.amit@gmail.com> wrote:
> Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>> On 10/01/2018 18:14, Jim Mattson wrote:
>>>>> If (a) is true, is "IBRS ALL THE TIME" usage basically a CPU change
>>>>> that tags all BTB/BHB entries with the prediction-mode at creation
>>>>> time, and compares that tag against the current prediction-mode when
>>>>> the CPU attempts to use the BTB/BHB?
>>>>
>>>> I hope so, and I hope said prediction mode includes PCID/VPID too.
>>>
>>> Branch prediction entries should probably be tagged with PCID, VPID,
>>> EP4TA, and thread ID...the same things used to tag TLB contexts.
>>
>> But if so, I don't see the need for IBPB.
>
> It is highly improbable that a microcode patch can change how prediction
> entries are tagged. IIRC, microcode may change the behavior of instructions
> and "assists" (e.g., TLB miss). Not much more than that.
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-15  9:23     ` Paolo Bonzini
@ 2018-01-15  9:34       ` Thomas Gleixner
  0 siblings, 0 replies; 28+ messages in thread
From: Thomas Gleixner @ 2018-01-15  9:34 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Longpeng (Mike),
	linux-kernel, kvm, rkrcmar, liran.alon, jmattson, aliguori,
	thomas.lendacky, dwmw, bp, x86, Gonglei

On Mon, 15 Jan 2018, Paolo Bonzini wrote:

> On 13/01/2018 11:16, Longpeng (Mike) wrote:
> >> +	/*
> >> +	 * FIXME: this is only needed until SPEC_CTRL is supported
> >> +	 * by upstream Linux in cpufeatures, then it can be replaced
> >> +	 * with static_cpu_has.
> >> +	 */
> >> +	have_spec_ctrl = cpu_has_spec_ctrl();
> >> +	if (have_spec_ctrl)
> >> +		pr_info("kvm: SPEC_CTRL available\n");
> >> +	else
> >> +		pr_info("kvm: SPEC_CTRL not available\n");
> >> +
> > 
> > In this approach, we must reload these modules if we update the microcode later ?
> 
> I strongly suggest using early microcode update anyway.

We had the discussion already and we are not going to support late
microcode loading. It's just not worth the trouble.

Also please do not commit any of this before we have sorted out the bare
metal IBRS/IBPB support. We really don't want to have two variants of that
in tree.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-13 10:16   ` Longpeng (Mike)
@ 2018-01-15  9:23     ` Paolo Bonzini
  2018-01-15  9:34       ` Thomas Gleixner
  0 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2018-01-15  9:23 UTC (permalink / raw)
  To: Longpeng (Mike)
  Cc: linux-kernel, kvm, rkrcmar, liran.alon, jmattson, aliguori,
	thomas.lendacky, dwmw, bp, x86, Gonglei

On 13/01/2018 11:16, Longpeng (Mike) wrote:
>> +	/*
>> +	 * FIXME: this is only needed until SPEC_CTRL is supported
>> +	 * by upstream Linux in cpufeatures, then it can be replaced
>> +	 * with static_cpu_has.
>> +	 */
>> +	have_spec_ctrl = cpu_has_spec_ctrl();
>> +	if (have_spec_ctrl)
>> +		pr_info("kvm: SPEC_CTRL available\n");
>> +	else
>> +		pr_info("kvm: SPEC_CTRL not available\n");
>> +
> 
> In this approach, we must reload these modules if we update the microcode later ?

I strongly suggest using early microcode update anyway.

Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-09 12:03 ` [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest Paolo Bonzini
@ 2018-01-13 10:16   ` Longpeng (Mike)
  2018-01-15  9:23     ` Paolo Bonzini
  0 siblings, 1 reply; 28+ messages in thread
From: Longpeng (Mike) @ 2018-01-13 10:16 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: linux-kernel, kvm, rkrcmar, liran.alon, jmattson, aliguori,
	thomas.lendacky, dwmw, bp, x86, Gonglei



On 2018/1/9 20:03, Paolo Bonzini wrote:

> Direct access to MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD is important
> for performance.  Allow load/store of MSR_IA32_SPEC_CTRL, restore guest
> IBRS on VM entry and set it to 0 on VM exit (because Linux does not use
> it yet).
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/vmx.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 42 insertions(+)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 669f5f74857d..ef603692aa98 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -120,6 +120,8 @@
>  module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
>  #endif
>  
> +static bool __read_mostly have_spec_ctrl;
> +
>  #define KVM_GUEST_CR0_MASK (X86_CR0_NW | X86_CR0_CD)
>  #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST (X86_CR0_WP | X86_CR0_NE)
>  #define KVM_VM_CR0_ALWAYS_ON						\
> @@ -609,6 +611,8 @@ struct vcpu_vmx {
>  	u64 		      msr_host_kernel_gs_base;
>  	u64 		      msr_guest_kernel_gs_base;
>  #endif
> +	u64		      spec_ctrl;
> +
>  	u32 vm_entry_controls_shadow;
>  	u32 vm_exit_controls_shadow;
>  	u32 secondary_exec_control;
> @@ -3361,6 +3365,9 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	case MSR_IA32_TSC:
>  		msr_info->data = guest_read_tsc(vcpu);
>  		break;
> +	case MSR_IA32_SPEC_CTRL:
> +		msr_info->data = to_vmx(vcpu)->spec_ctrl;
> +		break;
>  	case MSR_IA32_SYSENTER_CS:
>  		msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
>  		break;
> @@ -3500,6 +3507,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	case MSR_IA32_TSC:
>  		kvm_write_tsc(vcpu, msr_info);
>  		break;
> +	case MSR_IA32_SPEC_CTRL:
> +		to_vmx(vcpu)->spec_ctrl = data;
> +		break;
>  	case MSR_IA32_CR_PAT:
>  		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
>  			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> @@ -7062,6 +7072,17 @@ static __init int hardware_setup(void)
>  		goto out;
>  	}
>  
> +	/*
> +	 * FIXME: this is only needed until SPEC_CTRL is supported
> +	 * by upstream Linux in cpufeatures, then it can be replaced
> +	 * with static_cpu_has.
> +	 */
> +	have_spec_ctrl = cpu_has_spec_ctrl();
> +	if (have_spec_ctrl)
> +		pr_info("kvm: SPEC_CTRL available\n");
> +	else
> +		pr_info("kvm: SPEC_CTRL not available\n");
> +

In this approach, we must reload these modules if we update the microcode later ?

>  	if (boot_cpu_has(X86_FEATURE_NX))
>  		kvm_enable_efer_bits(EFER_NX);
>  
> @@ -7131,6 +7152,8 @@ static __init int hardware_setup(void)
>  	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_CS, false);
>  	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_ESP, false);
>  	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_EIP, false);
> +	vmx_disable_intercept_for_msr(MSR_IA32_SPEC_CTRL, false);
> +	vmx_disable_intercept_for_msr(MSR_IA32_PRED_CMD, false);
>  
>  	memcpy(vmx_msr_bitmap_legacy_x2apic_apicv,
>  			vmx_msr_bitmap_legacy, PAGE_SIZE);
> @@ -9601,6 +9624,13 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  
>  	vmx_arm_hv_timer(vcpu);
>  
> +	/*
> +	 * MSR_IA32_SPEC_CTRL is restored after the last indirect branch
> +	 * before vmentry.
> +	 */
> +	if (have_spec_ctrl && vmx->spec_ctrl != 0)
> +		wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> +
>  	vmx->__launched = vmx->loaded_vmcs->launched;
>  	asm(
>  		/* Store host registers */
> @@ -9707,6 +9737,18 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  #endif
>  	      );
>  
> +	if (have_spec_ctrl) {
> +		rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> +		if (vmx->spec_ctrl != 0)
> +			wrmsrl(MSR_IA32_SPEC_CTRL, 0);
> +	}
> +	/*
> +	 * Speculative execution past the above wrmsrl might encounter
> +	 * an indirect branch and use guest-controlled contents of the
> +	 * indirect branch predictor; block it.
> +	 */
> +	asm("lfence");
> +
>  	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
>  	if (vmx->host_debugctlmsr)
>  		update_debugctlmsr(vmx->host_debugctlmsr);


-- 
Regards,
Longpeng(Mike)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-12 23:17     ` Jim Mattson
@ 2018-01-12 23:19       ` Nadav Amit
  0 siblings, 0 replies; 28+ messages in thread
From: Nadav Amit @ 2018-01-12 23:19 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Paolo Bonzini, Liran Alon, the arch/x86 maintainers, dwmw,
	Borislav Petkov, Anthony Liguori, Tom Lendacky,
	Radim Krčmář,
	LKML, kvm list

Thanks, Jim. Highly appreciated.

Jim Mattson <jmattson@google.com> wrote:

> Nadav,
> 
> See section 2.5.1.2 (paragraph 3) in
> https://software.intel.com/sites/default/files/managed/c5/63/336996-Speculative-Execution-Side-Channel-Mitigations.pdf.
> 
> On Tue, Jan 9, 2018 at 9:03 PM, Nadav Amit <nadav.amit@gmail.com> wrote:
>> Paolo Bonzini <pbonzini@redhat.com> wrote:
>> 
>>> On 09/01/2018 17:48, Liran Alon wrote:
>>>>>> +  if (have_spec_ctrl) {
>>>>>> +          rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>>>>>> +          if (vmx->spec_ctrl != 0)
>>>>>> +                  wrmsrl(MSR_IA32_SPEC_CTRL, 0);
>>>> 
>>>> As I said also on the AMD patch, I think this is a bug.
>>>> Intel specify that we should set IBRS bit even if it was already set on every #VMExit.
>>> 
>>> That's correct (though I'd like to understand _why_---I'm not inclined
>>> to blindly trust a spec), but for now it's saving a wrmsr of 0.  That is
>>> quite obviously okay, and will be also okay after the bare-metal IBRS
>>> patches.
>>> 
>>> Of course the code will become something like
>>> 
>>>      if (using_ibrs || vmx->spec_ctrl != 0)
>>>              wrmsrl(MSR_IA32_SPEC_CTRL, host_ibrs);
>>> 
>>> optimizing the case where the host is using retpolines.
>> 
>> Excuse my ignorance: Can you point me to the specifications that mention “we
>> should set IBRS bit even if it was already set on every #VMExit” ?
>> 
>> Thanks,
>> Nadav

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10  5:03   ` Nadav Amit
  2018-01-10 13:20     ` Paolo Bonzini
@ 2018-01-12 23:17     ` Jim Mattson
  2018-01-12 23:19       ` Nadav Amit
  1 sibling, 1 reply; 28+ messages in thread
From: Jim Mattson @ 2018-01-12 23:17 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Paolo Bonzini, Liran Alon, the arch/x86 maintainers, dwmw,
	Borislav Petkov, Anthony Liguori, Tom Lendacky,
	Radim Krčmář,
	LKML, kvm list

Nadav,

See section 2.5.1.2 (paragraph 3) in
https://software.intel.com/sites/default/files/managed/c5/63/336996-Speculative-Execution-Side-Channel-Mitigations.pdf.

On Tue, Jan 9, 2018 at 9:03 PM, Nadav Amit <nadav.amit@gmail.com> wrote:
> Paolo Bonzini <pbonzini@redhat.com> wrote:
>
>> On 09/01/2018 17:48, Liran Alon wrote:
>>>>> +  if (have_spec_ctrl) {
>>>>> +          rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>>>>> +          if (vmx->spec_ctrl != 0)
>>>>> +                  wrmsrl(MSR_IA32_SPEC_CTRL, 0);
>>>
>>> As I said also on the AMD patch, I think this is a bug.
>>> Intel specify that we should set IBRS bit even if it was already set on every #VMExit.
>>
>> That's correct (though I'd like to understand _why_---I'm not inclined
>> to blindly trust a spec), but for now it's saving a wrmsr of 0.  That is
>> quite obviously okay, and will be also okay after the bare-metal IBRS
>> patches.
>>
>> Of course the code will become something like
>>
>>       if (using_ibrs || vmx->spec_ctrl != 0)
>>               wrmsrl(MSR_IA32_SPEC_CTRL, host_ibrs);
>>
>> optimizing the case where the host is using retpolines.
>
> Excuse my ignorance: Can you point me to the specifications that mention “we
> should set IBRS bit even if it was already set on every #VMExit” ?
>
> Thanks,
> Nadav

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 16:51 Liran Alon
@ 2018-01-10 17:07 ` David Woodhouse
  0 siblings, 0 replies; 28+ messages in thread
From: David Woodhouse @ 2018-01-10 17:07 UTC (permalink / raw)
  To: Liran Alon
  Cc: jmattson, konrad.wilk, x86, nadav.amit, bp, arjan, aliguori,
	thomas.lendacky, rkrcmar, pbonzini, linux-kernel, kvm


On Wed, 2018-01-10 at 08:51 -0800, Liran Alon wrote:
> 
> Hmm... This is exactly how Google Project-Zero PoC leaks kvm-intel.ko,
> kvm.ko & vmlinux...
> See section "Locating the host kernel" here:
> https://googleprojectzero.blogspot.co.il/2018/01/reading-privileged-memory-with-side.html
> 
> This was an important primitive in order for them to actually launch
> the attack of reading the host's memory pages, as they needed the
> hypervisor addresses so that they could later poison the BTB/BHB to
> point to gadgets residing at known host addresses.

Ah, joy. I'm not sure that leak is being plugged. Even setting IBRS=1
when entering the guest isn't guaranteed to plug it, as it's only
defined to prevent predictions from affecting a *more* privileged
prediction mode than they were 'learned' in.

Maybe IBPB would suffice? I'm not sure.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
@ 2018-01-10 16:51 Liran Alon
  2018-01-10 17:07 ` David Woodhouse
  0 siblings, 1 reply; 28+ messages in thread
From: Liran Alon @ 2018-01-10 16:51 UTC (permalink / raw)
  To: dwmw2
  Cc: jmattson, konrad.wilk, x86, nadav.amit, bp, arjan, aliguori,
	thomas.lendacky, rkrcmar, pbonzini, linux-kernel, kvm


----- dwmw2@infradead.org wrote:

> On Wed, 2018-01-10 at 08:19 -0800, Liran Alon wrote:
> > 
> > (1) On VMEntry, Intel recommends just restoring SPEC_CTRL to the guest
> > value (using WRMSR or the MSR save/load list) and that's it. As I
> > previously said to Jim, I am missing a mechanism responsible for
> > hiding the host's BHB & RSB from the guest. Therefore, the guest
> > still has the possibility to info-leak the host's kernel module
> > addresses (kvm-intel.ko / kvm.ko / vmlinux).
> 
> How so?
> 
> The host has the capability to attack the guest... but that's not an
> interesting observation.

That's not the issue I am talking about here.
I'm talking about the guest's ability to info-leak host addresses.

> 
> I'm not sure why you consider it an information leak to have host
> addresses in the BTB/RSB when the guest is running; it's not like they
> can be *read* from there. Perhaps you could mount a really contrived
> attack where you might attempt to provide your own spec-leak code at
> various candidate addresses that you think might be host BTB targets,
> and validate your assumptions... but I suspect basic cache-based
> observations were easier than that anyway.
> 
> I don't think this is a consideration.

Hmm... This is exactly how Google Project-Zero PoC leaks kvm-intel.ko, kvm.ko & vmlinux...
See section "Locating the host kernel" here:
https://googleprojectzero.blogspot.co.il/2018/01/reading-privileged-memory-with-side.html

This was an important primitive in order for them to actually launch the attack of reading the host's memory pages, as they needed the hypervisor addresses so that they could later poison the BTB/BHB to point to gadgets residing at known host addresses.

-Liran

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 15:56               ` Paolo Bonzini
@ 2018-01-10 16:05                 ` David Woodhouse
  0 siblings, 0 replies; 28+ messages in thread
From: David Woodhouse @ 2018-01-10 16:05 UTC (permalink / raw)
  To: Paolo Bonzini, konrad.wilk
  Cc: kvm, linux-kernel, jmattson, arjan, Liguori, Anthony, nadav.amit,
	x86, liran.alon, bp, thomas.lendacky, rkrcmar


On Wed, 2018-01-10 at 16:56 +0100, Paolo Bonzini wrote:
> On 10/01/2018 16:48, Woodhouse, David wrote:
> >>
> >> And what is the point of this "always set IBRS=1" then? Are there
> >> some other things lurking in the shadows?
> > Yes. *FUTURE* CPUs will have a mode where you can just set IBRS and
> > leave it set for ever and not worry about any of this, and the
> > performance won't even suck.
> > 
> > Quite why it's still an option you have to set in an MSR, and not just
> > a feature bit that they advertise and do it unconditionally, I have no
> > idea. But apparently that's the plan.
> 
> And again---why you still need IBPBs.  That also escapes me.  I wouldn't
> be surprised if that's just a trick to sneak it in a generation earlier...

There was some suggestion in early development that guests/processes
with a different CR3 or different VMCS would be considered "mutually
less privileged". It got dropped, and I'm sure we'll have clarification
before IBRS_ATT actually gets released.

I think the current docs do suggest that you need IBPB on context
switch (and vmptrld) still. But I expect that *might* go away if we're
lucky. And really, we have other things to work on for now before we
support hypothetical future hardware :)


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 15:48             ` Woodhouse, David
@ 2018-01-10 15:56               ` Paolo Bonzini
  2018-01-10 16:05                 ` David Woodhouse
  0 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2018-01-10 15:56 UTC (permalink / raw)
  To: Woodhouse, David, konrad.wilk
  Cc: kvm, linux-kernel, jmattson, arjan, Liguori, Anthony, nadav.amit,
	x86, liran.alon, bp, thomas.lendacky, rkrcmar

On 10/01/2018 16:48, Woodhouse, David wrote:
>>
>> And what is the point of this "always set IBRS=1" then? Are there
>> some other things lurking in the shadows?
> Yes. *FUTURE* CPUs will have a mode where you can just set IBRS and
> leave it set for ever and not worry about any of this, and the
> performance won't even suck.
> 
> Quite why it's still an option you have to set in an MSR, and not just
> a feature bit that they advertise and do it unconditionally, I have no
> idea. But apparently that's the plan.

And again---why you still need IBPBs.  That also escapes me.  I wouldn't
be surprised if that's just a trick to sneak it in a generation earlier...

Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 15:41           ` Konrad Rzeszutek Wilk
  2018-01-10 15:45             ` Paolo Bonzini
@ 2018-01-10 15:48             ` Woodhouse, David
  2018-01-10 15:56               ` Paolo Bonzini
  1 sibling, 1 reply; 28+ messages in thread
From: Woodhouse, David @ 2018-01-10 15:48 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Paolo Bonzini
  Cc: Arjan van de Ven, Nadav Amit, Liran Alon, jmattson, x86, bp,
	aliguori, thomas.lendacky, rkrcmar, linux-kernel, kvm


On Wed, 2018-01-10 at 10:41 -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 10, 2018 at 03:28:43PM +0100, Paolo Bonzini wrote:
> > On 10/01/2018 15:06, Arjan van de Ven wrote:
> > > On 1/10/2018 5:20 AM, Paolo Bonzini wrote:
> > >> * a simple specification that does "IBRS=1 blocks indirect branch
> > >> prediction altogether" would actually satisfy the specification just as
> > >> well, and it would be nice to know if that's what the processor actually
> > >> does.
> > > 
> > > it doesn't exactly, not for all.
> > > 
> > > so you really do need to write ibrs again.
> > 
> > Okay, so "always set IBRS=1" does *not* protect against variant 2.  Thanks,
> 
> And what is the point of this "always set IBRS=1" then? Are there
> some other things lurking in the shadows?

Yes. *FUTURE* CPUs will have a mode where you can just set IBRS and
leave it set for ever and not worry about any of this, and the
performance won't even suck.

Quite why it's still an option you have to set in an MSR, and not just
a feature bit that they advertise and do it unconditionally, I have no
idea. But apparently that's the plan.

But no current hardware will do this; they've done the best they can do
with microcode already.


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 15:41           ` Konrad Rzeszutek Wilk
@ 2018-01-10 15:45             ` Paolo Bonzini
  2018-01-10 15:48             ` Woodhouse, David
  1 sibling, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2018-01-10 15:45 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Arjan van de Ven, Nadav Amit, Liran Alon, jmattson, x86, dwmw,
	bp, aliguori, thomas.lendacky, rkrcmar, linux-kernel, kvm

On 10/01/2018 16:41, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 10, 2018 at 03:28:43PM +0100, Paolo Bonzini wrote:
>> On 10/01/2018 15:06, Arjan van de Ven wrote:
>>> On 1/10/2018 5:20 AM, Paolo Bonzini wrote:
>>>> * a simple specification that does "IBRS=1 blocks indirect branch
>>>> prediction altogether" would actually satisfy the specification just as
>>>> well, and it would be nice to know if that's what the processor actually
>>>> does.
>>>
>>> it doesn't exactly, not for all.
>>>
>>> so you really do need to write ibrs again.
>>
>> Okay, so "always set IBRS=1" does *not* protect against variant 2.  Thanks,
> 
> And what is the point of this "always set IBRS=1" then? Are there some other
> things lurking in the shadows?

The idea was that:

1) for workloads that are very kernel-heavy "always set IBRS=1" would be
faster than flipping it on kernel entry/exit,

2) skipping the IBRS=1 write on vmexit would be a nice optimization
because most vmexits come from guest ring 0.

But apparently it's a non-starter. :(

Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 14:28         ` Paolo Bonzini
@ 2018-01-10 15:41           ` Konrad Rzeszutek Wilk
  2018-01-10 15:45             ` Paolo Bonzini
  2018-01-10 15:48             ` Woodhouse, David
  0 siblings, 2 replies; 28+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-01-10 15:41 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Arjan van de Ven, Nadav Amit, Liran Alon, jmattson, x86, dwmw,
	bp, aliguori, thomas.lendacky, rkrcmar, linux-kernel, kvm

On Wed, Jan 10, 2018 at 03:28:43PM +0100, Paolo Bonzini wrote:
> On 10/01/2018 15:06, Arjan van de Ven wrote:
> > On 1/10/2018 5:20 AM, Paolo Bonzini wrote:
> >> * a simple specification that does "IBRS=1 blocks indirect branch
> >> prediction altogether" would actually satisfy the specification just as
> >> well, and it would be nice to know if that's what the processor actually
> >> does.
> > 
> > it doesn't exactly, not for all.
> > 
> > so you really do need to write ibrs again.
> 
> Okay, so "always set IBRS=1" does *not* protect against variant 2.  Thanks,

And what is the point of this "always set IBRS=1" then? Are there some other
things lurking in the shadows?
> 
> Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 14:06       ` Arjan van de Ven
@ 2018-01-10 14:28         ` Paolo Bonzini
  2018-01-10 15:41           ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2018-01-10 14:28 UTC (permalink / raw)
  To: Arjan van de Ven, Nadav Amit
  Cc: Liran Alon, jmattson, x86, dwmw, bp, aliguori, thomas.lendacky,
	rkrcmar, linux-kernel, kvm

On 10/01/2018 15:06, Arjan van de Ven wrote:
> On 1/10/2018 5:20 AM, Paolo Bonzini wrote:
>> * a simple specification that does "IBRS=1 blocks indirect branch
>> prediction altogether" would actually satisfy the specification just as
>> well, and it would be nice to know if that's what the processor actually
>> does.
> 
> it doesn't exactly, not for all.
> 
> so you really do need to write ibrs again.

Okay, so "always set IBRS=1" does *not* protect against variant 2.  Thanks,

Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10 13:20     ` Paolo Bonzini
@ 2018-01-10 14:06       ` Arjan van de Ven
  2018-01-10 14:28         ` Paolo Bonzini
  0 siblings, 1 reply; 28+ messages in thread
From: Arjan van de Ven @ 2018-01-10 14:06 UTC (permalink / raw)
  To: Paolo Bonzini, Nadav Amit
  Cc: Liran Alon, jmattson, x86, dwmw, bp, aliguori, thomas.lendacky,
	rkrcmar, linux-kernel, kvm

On 1/10/2018 5:20 AM, Paolo Bonzini wrote:
> * a simple specification that does "IBRS=1 blocks indirect branch
> prediction altogether" would actually satisfy the specification just as
> well, and it would be nice to know if that's what the processor actually
> does.

it doesn't exactly, not for all.

so you really do need to write ibrs again.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-10  5:03   ` Nadav Amit
@ 2018-01-10 13:20     ` Paolo Bonzini
  2018-01-10 14:06       ` Arjan van de Ven
  2018-01-12 23:17     ` Jim Mattson
  1 sibling, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2018-01-10 13:20 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Liran Alon, jmattson, x86, dwmw, bp, aliguori, thomas.lendacky,
	rkrcmar, linux-kernel, kvm, Arjan van de Ven

On 10/01/2018 06:03, Nadav Amit wrote:
>>
>> Of course the code will become something like
>>
>> 	if (using_ibrs || vmx->spec_ctrl != 0)
>> 		wrmsrl(MSR_IA32_SPEC_CTRL, host_ibrs);
>>
>> optimizing the case where the host is using retpolines.
> Excuse my ignorance: Can you point me to the specifications that mention “we
> should set IBRS bit even if it was already set on every #VMExit” ?

All I have is some PowerPoint slides from Intel. :(  They say:

---
A near indirect jump/call/return may be affected by code in a less
privileged prediction mode that executed AFTER IBRS mode was last
written with a value of 1. There is no need to clear IBRS before writing
it with a value of 1. Unconditionally writing it with a value of 1 after
the prediction mode change is sufficient.

VMX non-root is considered a less privileged prediction mode than VM
root.  CPL 3 is considered a less privileged prediction mode than CPL0,
1, 2.

Some processors may enhance IBRS such that it isolates prediction modes
effectively and at higher performance if left set instead of being set
when enter OS and VMM and cleared when entering applications.  [This is]
enumerated by IA32_ARCH_CAPABILITIES[1].
---

(Yes, it literally says VM root, not VMX root).

But I think this is an awful specification.  For two reasons:

* a simple specification that does "IBRS=1 blocks indirect branch
prediction altogether" would actually satisfy the specification just as
well, and it would be nice to know if that's what the processor actually
does.

* the future case with enhanced IBRS still requires the expensive IBPB
when switching between applications or between guests, where the
PCID/VPID (and PCID/VPID invalidation) could be used to remove that need.
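
To make the second point concrete, this is the kind of per-switch barrier
that prediction-mode tagging by PCID/VPID could make unnecessary (a
sketch with an invented helper name, not actual kernel code):

	static void ibpb_on_switch(struct mm_struct *prev_mm,
				   struct mm_struct *next_mm)
	{
		/*
		 * Without tagged predictions, only a full barrier keeps one
		 * address space's branch targets from steering another's
		 * speculation.
		 */
		if (next_mm != prev_mm)
			wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
	}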

Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-09 16:57 ` Paolo Bonzini
@ 2018-01-10  5:03   ` Nadav Amit
  2018-01-10 13:20     ` Paolo Bonzini
  2018-01-12 23:17     ` Jim Mattson
  0 siblings, 2 replies; 28+ messages in thread
From: Nadav Amit @ 2018-01-10  5:03 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Liran Alon, jmattson, x86, dwmw, bp, aliguori, thomas.lendacky,
	rkrcmar, linux-kernel, kvm

Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 09/01/2018 17:48, Liran Alon wrote:
>>>> +	if (have_spec_ctrl) {
>>>> +		rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>>>> +		if (vmx->spec_ctrl != 0)
>>>> +			wrmsrl(MSR_IA32_SPEC_CTRL, 0);
>> 
>> As I said also on the AMD patch, I think this is a bug.
>> Intel specify that we should set IBRS bit even if it was already set on every #VMExit.
> 
> That's correct (though I'd like to understand _why_---I'm not inclined
> to blindly trust a spec), but for now it's saving a wrmsr of 0.  That is
> quite obviously okay, and will be also okay after the bare-metal IBRS
> patches.
> 
> Of course the code will become something like
> 
> 	if (using_ibrs || vmx->spec_ctrl != 0)
> 		wrmsrl(MSR_IA32_SPEC_CTRL, host_ibrs);
> 
> optimizing the case where the host is using retpolines.

Excuse my ignorance: Can you point me to the specifications that mention “we
should set IBRS bit even if it was already set on every #VMExit” ?

Thanks,
Nadav

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
@ 2018-01-10  0:33 Liran Alon
  0 siblings, 0 replies; 28+ messages in thread
From: Liran Alon @ 2018-01-10  0:33 UTC (permalink / raw)
  To: pbonzini
  Cc: jmattson, x86, dwmw, bp, thomas.lendacky, aliguori, rkrcmar,
	linux-kernel, kvm


----- pbonzini@redhat.com wrote:

> On 09/01/2018 17:48, Liran Alon wrote:
> >>>  
> >>> +	if (have_spec_ctrl) {
> >>> +		rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> >>> +		if (vmx->spec_ctrl != 0)
> >>> +			wrmsrl(MSR_IA32_SPEC_CTRL, 0);
> >
> > As I said also on the AMD patch, I think this is a bug.
> > Intel specify that we should set IBRS bit even if it was already set
> on every #VMExit.
> 
> That's correct (though I'd like to understand _why_---I'm not
> inclined
> to blindly trust a spec), but for now it's saving a wrmsr of 0.  That
> is
> quite obviously okay, and will be also okay after the bare-metal IBRS
> patches.
> 
> Of course the code will become something like
> 
> 	if (using_ibrs || vmx->spec_ctrl != 0)
> 		wrmsrl(MSR_IA32_SPEC_CTRL, host_ibrs);
> 
> optimizing the case where the host is using retpolines.
> 
> Paolo

I agree with all the above.

-Liran

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-09 16:48 Liran Alon
@ 2018-01-09 16:57 ` Paolo Bonzini
  2018-01-10  5:03   ` Nadav Amit
  0 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2018-01-09 16:57 UTC (permalink / raw)
  To: Liran Alon
  Cc: jmattson, x86, dwmw, bp, aliguori, thomas.lendacky, rkrcmar,
	linux-kernel, kvm

On 09/01/2018 17:48, Liran Alon wrote:
>>>  
>>> +	if (have_spec_ctrl) {
>>> +		rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>>> +		if (vmx->spec_ctrl != 0)
>>> +			wrmsrl(MSR_IA32_SPEC_CTRL, 0);
>
> As I said also on the AMD patch, I think this is a bug.
> Intel specify that we should set IBRS bit even if it was already set on every #VMExit.

That's correct (though I'd like to understand _why_---I'm not inclined
to blindly trust a spec), but for now it's saving a wrmsr of 0.  That is
quite obviously okay, and will be also okay after the bare-metal IBRS
patches.

Of course the code will become something like

	if (using_ibrs || vmx->spec_ctrl != 0)
		wrmsrl(MSR_IA32_SPEC_CTRL, host_ibrs);

optimizing the case where the host is using retpolines.
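
Spelled out against the vmexit hunk in the patch, that would look roughly
like the sketch below; using_ibrs and host_ibrs are placeholders for
whatever the bare-metal series ends up calling them (host_ibrs being 0 on
a retpoline host and the IBRS bit on an IBRS host):

	if (have_spec_ctrl) {
		rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
		/* Skip the write only when the host relies on retpolines
		 * and the guest left SPEC_CTRL at zero anyway. */
		if (using_ibrs || vmx->spec_ctrl != 0)
			wrmsrl(MSR_IA32_SPEC_CTRL, host_ibrs);
	}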

Paolo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
@ 2018-01-09 16:48 Liran Alon
  2018-01-09 16:57 ` Paolo Bonzini
  0 siblings, 1 reply; 28+ messages in thread
From: Liran Alon @ 2018-01-09 16:48 UTC (permalink / raw)
  To: pbonzini
  Cc: jmattson, x86, dwmw, bp, aliguori, thomas.lendacky, rkrcmar,
	linux-kernel, kvm


----- liran.alon@oracle.com wrote:

> ----- pbonzini@redhat.com wrote:
> 
> > Direct access to MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD is
> > important
> > for performance.  Allow load/store of MSR_IA32_SPEC_CTRL, restore
> > guest
> > IBRS on VM entry and set it to 0 on VM exit (because Linux does not
> > use
> > it yet).
> > 
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  arch/x86/kvm/vmx.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 42 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> > index 669f5f74857d..ef603692aa98 100644
> > --- a/arch/x86/kvm/vmx.c
> > +++ b/arch/x86/kvm/vmx.c
> > @@ -120,6 +120,8 @@
> >  module_param_named(preemption_timer, enable_preemption_timer,
> bool,
> > S_IRUGO);
> >  #endif
> >  
> > +static bool __read_mostly have_spec_ctrl;
> > +
> >  #define KVM_GUEST_CR0_MASK (X86_CR0_NW | X86_CR0_CD)
> >  #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST (X86_CR0_WP |
> > X86_CR0_NE)
> >  #define KVM_VM_CR0_ALWAYS_ON						\
> > @@ -609,6 +611,8 @@ struct vcpu_vmx {
> >  	u64 		      msr_host_kernel_gs_base;
> >  	u64 		      msr_guest_kernel_gs_base;
> >  #endif
> > +	u64		      spec_ctrl;
> > +
> >  	u32 vm_entry_controls_shadow;
> >  	u32 vm_exit_controls_shadow;
> >  	u32 secondary_exec_control;
> > @@ -3361,6 +3365,9 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu,
> > struct msr_data *msr_info)
> >  	case MSR_IA32_TSC:
> >  		msr_info->data = guest_read_tsc(vcpu);
> >  		break;
> > +	case MSR_IA32_SPEC_CTRL:
> > +		msr_info->data = to_vmx(vcpu)->spec_ctrl;
> > +		break;
> >  	case MSR_IA32_SYSENTER_CS:
> >  		msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
> >  		break;
> > @@ -3500,6 +3507,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu,
> > struct msr_data *msr_info)
> >  	case MSR_IA32_TSC:
> >  		kvm_write_tsc(vcpu, msr_info);
> >  		break;
> > +	case MSR_IA32_SPEC_CTRL:
> > +		to_vmx(vcpu)->spec_ctrl = data;
> > +		break;
> >  	case MSR_IA32_CR_PAT:
> >  		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
> >  			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> > @@ -7062,6 +7072,17 @@ static __init int hardware_setup(void)
> >  		goto out;
> >  	}
> >  
> > +	/*
> > +	 * FIXME: this is only needed until SPEC_CTRL is supported
> > +	 * by upstream Linux in cpufeatures, then it can be replaced
> > +	 * with static_cpu_has.
> > +	 */
> > +	have_spec_ctrl = cpu_has_spec_ctrl();
> > +	if (have_spec_ctrl)
> > +		pr_info("kvm: SPEC_CTRL available\n");
> > +	else
> > +		pr_info("kvm: SPEC_CTRL not available\n");
> > +
> >  	if (boot_cpu_has(X86_FEATURE_NX))
> >  		kvm_enable_efer_bits(EFER_NX);
> >  
> > @@ -7131,6 +7152,8 @@ static __init int hardware_setup(void)
> >  	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_CS, false);
> >  	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_ESP, false);
> >  	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_EIP, false);
> > +	vmx_disable_intercept_for_msr(MSR_IA32_SPEC_CTRL, false);
> > +	vmx_disable_intercept_for_msr(MSR_IA32_PRED_CMD, false);
> >  
> >  	memcpy(vmx_msr_bitmap_legacy_x2apic_apicv,
> >  			vmx_msr_bitmap_legacy, PAGE_SIZE);
> > @@ -9601,6 +9624,13 @@ static void __noclone vmx_vcpu_run(struct
> > kvm_vcpu *vcpu)
> >  
> >  	vmx_arm_hv_timer(vcpu);
> >  
> > +	/*
> > +	 * MSR_IA32_SPEC_CTRL is restored after the last indirect branch
> > +	 * before vmentry.
> > +	 */
> > +	if (have_spec_ctrl && vmx->spec_ctrl != 0)
> > +		wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> > +
> >  	vmx->__launched = vmx->loaded_vmcs->launched;
> >  	asm(
> >  		/* Store host registers */
> > @@ -9707,6 +9737,18 @@ static void __noclone vmx_vcpu_run(struct
> > kvm_vcpu *vcpu)
> >  #endif
> >  	      );
> >  
> > +	if (have_spec_ctrl) {
> > +		rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> > +		if (vmx->spec_ctrl != 0)

As I said also on the AMD patch, I think this is a bug.
Intel specify that we should set IBRS bit even if it was already set on every #VMExit.

-Liran

> > +			wrmsrl(MSR_IA32_SPEC_CTRL, 0);
> > +	}
> > +	/*
> > +	 * Speculative execution past the above wrmsrl might encounter
> > +	 * an indirect branch and use guest-controlled contents of the
> > +	 * indirect branch predictor; block it.
> > +	 */
> > +	asm("lfence");
> > +
> >  	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed
> > */
> >  	if (vmx->host_debugctlmsr)
> >  		update_debugctlmsr(vmx->host_debugctlmsr);
> > -- 
> > 1.8.3.1
> 
> Reviewed-by: Liran Alon <liran.alon@oracle.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
@ 2018-01-09 16:06 Liran Alon
  0 siblings, 0 replies; 28+ messages in thread
From: Liran Alon @ 2018-01-09 16:06 UTC (permalink / raw)
  To: pbonzini
  Cc: jmattson, x86, dwmw, bp, thomas.lendacky, aliguori, rkrcmar,
	linux-kernel, kvm


----- pbonzini@redhat.com wrote:

> Direct access to MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD is
> important
> for performance.  Allow load/store of MSR_IA32_SPEC_CTRL, restore
> guest
> IBRS on VM entry and set it to 0 on VM exit (because Linux does not
> use
> it yet).
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/vmx.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 42 insertions(+)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 669f5f74857d..ef603692aa98 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -120,6 +120,8 @@
>  module_param_named(preemption_timer, enable_preemption_timer, bool,
> S_IRUGO);
>  #endif
>  
> +static bool __read_mostly have_spec_ctrl;
> +
>  #define KVM_GUEST_CR0_MASK (X86_CR0_NW | X86_CR0_CD)
>  #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST (X86_CR0_WP |
> X86_CR0_NE)
>  #define KVM_VM_CR0_ALWAYS_ON						\
> @@ -609,6 +611,8 @@ struct vcpu_vmx {
>  	u64 		      msr_host_kernel_gs_base;
>  	u64 		      msr_guest_kernel_gs_base;
>  #endif
> +	u64		      spec_ctrl;
> +
>  	u32 vm_entry_controls_shadow;
>  	u32 vm_exit_controls_shadow;
>  	u32 secondary_exec_control;
> @@ -3361,6 +3365,9 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu,
> struct msr_data *msr_info)
>  	case MSR_IA32_TSC:
>  		msr_info->data = guest_read_tsc(vcpu);
>  		break;
> +	case MSR_IA32_SPEC_CTRL:
> +		msr_info->data = to_vmx(vcpu)->spec_ctrl;
> +		break;
>  	case MSR_IA32_SYSENTER_CS:
>  		msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
>  		break;
> @@ -3500,6 +3507,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu,
> struct msr_data *msr_info)
>  	case MSR_IA32_TSC:
>  		kvm_write_tsc(vcpu, msr_info);
>  		break;
> +	case MSR_IA32_SPEC_CTRL:
> +		to_vmx(vcpu)->spec_ctrl = data;
> +		break;
>  	case MSR_IA32_CR_PAT:
>  		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
>  			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> @@ -7062,6 +7072,17 @@ static __init int hardware_setup(void)
>  		goto out;
>  	}
>  
> +	/*
> +	 * FIXME: this is only needed until SPEC_CTRL is supported
> +	 * by upstream Linux in cpufeatures, then it can be replaced
> +	 * with static_cpu_has.
> +	 */
> +	have_spec_ctrl = cpu_has_spec_ctrl();
> +	if (have_spec_ctrl)
> +		pr_info("kvm: SPEC_CTRL available\n");
> +	else
> +		pr_info("kvm: SPEC_CTRL not available\n");
> +
>  	if (boot_cpu_has(X86_FEATURE_NX))
>  		kvm_enable_efer_bits(EFER_NX);
>  
> @@ -7131,6 +7152,8 @@ static __init int hardware_setup(void)
>  	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_CS, false);
>  	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_ESP, false);
>  	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_EIP, false);
> +	vmx_disable_intercept_for_msr(MSR_IA32_SPEC_CTRL, false);
> +	vmx_disable_intercept_for_msr(MSR_IA32_PRED_CMD, false);
>  
>  	memcpy(vmx_msr_bitmap_legacy_x2apic_apicv,
>  			vmx_msr_bitmap_legacy, PAGE_SIZE);
> @@ -9601,6 +9624,13 @@ static void __noclone vmx_vcpu_run(struct
> kvm_vcpu *vcpu)
>  
>  	vmx_arm_hv_timer(vcpu);
>  
> +	/*
> +	 * MSR_IA32_SPEC_CTRL is restored after the last indirect branch
> +	 * before vmentry.
> +	 */
> +	if (have_spec_ctrl && vmx->spec_ctrl != 0)
> +		wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> +
>  	vmx->__launched = vmx->loaded_vmcs->launched;
>  	asm(
>  		/* Store host registers */
> @@ -9707,6 +9737,18 @@ static void __noclone vmx_vcpu_run(struct
> kvm_vcpu *vcpu)
>  #endif
>  	      );
>  
> +	if (have_spec_ctrl) {
> +		rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
> +		if (vmx->spec_ctrl != 0)
> +			wrmsrl(MSR_IA32_SPEC_CTRL, 0);
> +	}
> +	/*
> +	 * Speculative execution past the above wrmsrl might encounter
> +	 * an indirect branch and use guest-controlled contents of the
> +	 * indirect branch predictor; block it.
> +	 */
> +	asm("lfence");
> +
>  	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed
> */
>  	if (vmx->host_debugctlmsr)
>  		update_debugctlmsr(vmx->host_debugctlmsr);
> -- 
> 1.8.3.1

Reviewed-by: Liran Alon <liran.alon@oracle.com>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest
  2018-01-09 12:03 [PATCH v2 0/8] KVM: x86: expose CVE-2017-5715 ("Spectre variant 2") mitigations to guest Paolo Bonzini
@ 2018-01-09 12:03 ` Paolo Bonzini
  2018-01-13 10:16   ` Longpeng (Mike)
  0 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2018-01-09 12:03 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: rkrcmar, liran.alon, jmattson, aliguori, thomas.lendacky, dwmw, bp, x86

Direct access to MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD is important
for performance.  Allow load/store of MSR_IA32_SPEC_CTRL, restore guest
IBRS on VM entry and set it to 0 on VM exit (because Linux does not use
it yet).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 669f5f74857d..ef603692aa98 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -120,6 +120,8 @@
 module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
 #endif
 
+static bool __read_mostly have_spec_ctrl;
+
 #define KVM_GUEST_CR0_MASK (X86_CR0_NW | X86_CR0_CD)
 #define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST (X86_CR0_WP | X86_CR0_NE)
 #define KVM_VM_CR0_ALWAYS_ON						\
@@ -609,6 +611,8 @@ struct vcpu_vmx {
 	u64 		      msr_host_kernel_gs_base;
 	u64 		      msr_guest_kernel_gs_base;
 #endif
+	u64		      spec_ctrl;
+
 	u32 vm_entry_controls_shadow;
 	u32 vm_exit_controls_shadow;
 	u32 secondary_exec_control;
@@ -3361,6 +3365,9 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_TSC:
 		msr_info->data = guest_read_tsc(vcpu);
 		break;
+	case MSR_IA32_SPEC_CTRL:
+		msr_info->data = to_vmx(vcpu)->spec_ctrl;
+		break;
 	case MSR_IA32_SYSENTER_CS:
 		msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
 		break;
@@ -3500,6 +3507,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_TSC:
 		kvm_write_tsc(vcpu, msr_info);
 		break;
+	case MSR_IA32_SPEC_CTRL:
+		to_vmx(vcpu)->spec_ctrl = data;
+		break;
 	case MSR_IA32_CR_PAT:
 		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
 			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
@@ -7062,6 +7072,17 @@ static __init int hardware_setup(void)
 		goto out;
 	}
 
+	/*
+	 * FIXME: this is only needed until SPEC_CTRL is supported
+	 * by upstream Linux in cpufeatures, then it can be replaced
+	 * with static_cpu_has.
+	 */
+	have_spec_ctrl = cpu_has_spec_ctrl();
+	if (have_spec_ctrl)
+		pr_info("kvm: SPEC_CTRL available\n");
+	else
+		pr_info("kvm: SPEC_CTRL not available\n");
+
 	if (boot_cpu_has(X86_FEATURE_NX))
 		kvm_enable_efer_bits(EFER_NX);
 
@@ -7131,6 +7152,8 @@ static __init int hardware_setup(void)
 	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_CS, false);
 	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_ESP, false);
 	vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_EIP, false);
+	vmx_disable_intercept_for_msr(MSR_IA32_SPEC_CTRL, false);
+	vmx_disable_intercept_for_msr(MSR_IA32_PRED_CMD, false);
 
 	memcpy(vmx_msr_bitmap_legacy_x2apic_apicv,
 			vmx_msr_bitmap_legacy, PAGE_SIZE);
@@ -9601,6 +9624,13 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	vmx_arm_hv_timer(vcpu);
 
+	/*
+	 * MSR_IA32_SPEC_CTRL is restored after the last indirect branch
+	 * before vmentry.
+	 */
+	if (have_spec_ctrl && vmx->spec_ctrl != 0)
+		wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
+
 	vmx->__launched = vmx->loaded_vmcs->launched;
 	asm(
 		/* Store host registers */
@@ -9707,6 +9737,18 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 #endif
 	      );
 
+	if (have_spec_ctrl) {
+		rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
+		if (vmx->spec_ctrl != 0)
+			wrmsrl(MSR_IA32_SPEC_CTRL, 0);
+	}
+	/*
+	 * Speculative execution past the above wrmsrl might encounter
+	 * an indirect branch and use guest-controlled contents of the
+	 * indirect branch predictor; block it.
+	 */
+	asm("lfence");
+
 	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
 	if (vmx->host_debugctlmsr)
 		update_debugctlmsr(vmx->host_debugctlmsr);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2018-01-15  9:35 UTC | newest]

Thread overview: 28+ messages
2018-01-10 16:19 [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest Liran Alon
2018-01-10 16:27 ` Paolo Bonzini
2018-01-10 17:14   ` Jim Mattson
2018-01-10 17:16     ` Paolo Bonzini
2018-01-10 17:23       ` Nadav Amit
2018-01-10 17:32         ` Jim Mattson
2018-01-10 16:47 ` David Woodhouse
  -- strict thread matches above, loose matches on Subject: below --
2018-01-10 16:51 Liran Alon
2018-01-10 17:07 ` David Woodhouse
2018-01-10  0:33 Liran Alon
2018-01-09 16:48 Liran Alon
2018-01-09 16:57 ` Paolo Bonzini
2018-01-10  5:03   ` Nadav Amit
2018-01-10 13:20     ` Paolo Bonzini
2018-01-10 14:06       ` Arjan van de Ven
2018-01-10 14:28         ` Paolo Bonzini
2018-01-10 15:41           ` Konrad Rzeszutek Wilk
2018-01-10 15:45             ` Paolo Bonzini
2018-01-10 15:48             ` Woodhouse, David
2018-01-10 15:56               ` Paolo Bonzini
2018-01-10 16:05                 ` David Woodhouse
2018-01-12 23:17     ` Jim Mattson
2018-01-12 23:19       ` Nadav Amit
2018-01-09 16:06 Liran Alon
2018-01-09 12:03 [PATCH v2 0/8] KVM: x86: expose CVE-2017-5715 ("Spectre variant 2") mitigations to guest Paolo Bonzini
2018-01-09 12:03 ` [PATCH 3/8] kvm: vmx: pass MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD down to the guest Paolo Bonzini
2018-01-13 10:16   ` Longpeng (Mike)
2018-01-15  9:23     ` Paolo Bonzini
2018-01-15  9:34       ` Thomas Gleixner
