From mboxrd@z Thu Jan 1 00:00:00 1970
Return-path: 
Received: from localhost ([127.0.0.1] helo=nanos.tec.linutronix.de)
	by Galois.linutronix.de with esmtp (Exim 4.80)
	(envelope-from ) id 1fDB2s-0008K5-CB
	for speck@linutronix.de; Mon, 30 Apr 2018 17:48:54 +0200
Message-Id: <20180430151232.121481393@linutronix.de>
Date: Mon, 30 Apr 2018 17:04:27 +0200
From: Thomas Gleixner
References: <20180430150423.180110059@linutronix.de>
MIME-Version: 1.0
Subject: [patch V8 04/15] SSB 4
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
To: speck@linutronix.de
List-ID: 

A guest may modify the SPEC_CTRL MSR from the value used by the
kernel. Since the kernel does not use IBRS, this means a value of zero
is what is needed in the host. But the
336996-Speculative-Execution-Side-Channel-Mitigations.pdf document
refers to the other bits as reserved, so the kernel should respect the
boot-time SPEC_CTRL value and use that. This allows dealing with
future extensions to the SPEC_CTRL interface, if there are any at all.

Note: This uses wrmsrl() instead of native_wrmsrl(). It does not make
any difference as paravirt will overwrite the callq *0xfff.. with the
wrmsrl assembler code.

A copy of this document is available at
https://bugzilla.kernel.org/show_bug.cgi?id=199511

[ tglx: Added a paranoia check for IBRS into the functions and
  simplified them ]

Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Thomas Gleixner
Reviewed-by: Borislav Petkov
---
v2: New patch
v3: Use the two accessor functions instead of poking at the global
    variable.
v4: Use x86_get_spec_ctrl instead of global variable.
v5: Use x86_get_default_spec_ctrl instead of x86_get_spec_ctrl.
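
Note for reviewers: the restore path below relies on x86_spec_ctrl_base
holding the boot-time view of the MSR. A minimal sketch of how that base
value is expected to be captured, assuming the earlier patches in this
series read the MSR once during boot; the function name and init hook
here are hypothetical stand-ins:

	u64 x86_spec_ctrl_base;

	/*
	 * Sketch only: read the firmware/boot view of SPEC_CTRL once,
	 * so x86_restore_host_spec_ctrl() can return to it after a
	 * guest has run. The hook name is made up for illustration;
	 * the real series does this from the CPU bugs setup code.
	 */
	static void __init spec_ctrl_capture_boot_value(void)
	{
		if (boot_cpu_has(X86_FEATURE_IBRS))
			rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
	}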
---
 arch/x86/include/asm/nospec-branch.h |   10 ++++++++++
 arch/x86/kernel/cpu/bugs.c           |   18 ++++++++++++++++++
 arch/x86/kvm/svm.c                   |    6 ++----
 arch/x86/kvm/vmx.c                   |    6 ++----
 4 files changed, 32 insertions(+), 8 deletions(-)

--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -228,6 +228,16 @@ enum spectre_v2_mitigation {
 
 extern void x86_set_spec_ctrl(u64);
 extern u64 x86_get_default_spec_ctrl(void);
 
+/*
+ * On VMENTER we must preserve whatever view of the SPEC_CTRL MSR
+ * the guest has, while on VMEXIT we restore the host view. This
+ * would be easier if SPEC_CTRL were architecturally maskable or
+ * shadowable for guests but this is not (currently) the case.
+ * Takes the guest view of SPEC_CTRL MSR as a parameter.
+ */
+extern void x86_set_guest_spec_ctrl(u64);
+extern void x86_restore_host_spec_ctrl(u64);
+
 extern char __indirect_thunk_start[];
 extern char __indirect_thunk_end[];
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -123,6 +123,24 @@ u64 x86_get_default_spec_ctrl(void)
 }
 EXPORT_SYMBOL_GPL(x86_get_default_spec_ctrl);
 
+void x86_set_guest_spec_ctrl(u64 guest_spec_ctrl)
+{
+	if (!boot_cpu_has(X86_FEATURE_IBRS))
+		return;
+	if (x86_spec_ctrl_base != guest_spec_ctrl)
+		wrmsrl(MSR_IA32_SPEC_CTRL, guest_spec_ctrl);
+}
+EXPORT_SYMBOL_GPL(x86_set_guest_spec_ctrl);
+
+void x86_restore_host_spec_ctrl(u64 guest_spec_ctrl)
+{
+	if (!boot_cpu_has(X86_FEATURE_IBRS))
+		return;
+	if (x86_spec_ctrl_base != guest_spec_ctrl)
+		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+}
+EXPORT_SYMBOL_GPL(x86_restore_host_spec_ctrl);
+
 #ifdef RETPOLINE
 static bool spectre_v2_bad_module;
 
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5557,8 +5557,7 @@ static void svm_vcpu_run(struct kvm_vcpu
 	 * is no need to worry about the conditional branch over the wrmsr
 	 * being speculatively taken.
 	 */
-	if (svm->spec_ctrl)
-		native_wrmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl);
+	x86_set_guest_spec_ctrl(svm->spec_ctrl);
 
 	asm volatile (
 		"push %%" _ASM_BP "; \n\t"
@@ -5670,8 +5669,7 @@ static void svm_vcpu_run(struct kvm_vcpu
 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
 		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
 
-	if (svm->spec_ctrl)
-		native_wrmsrl(MSR_IA32_SPEC_CTRL, 0);
+	x86_restore_host_spec_ctrl(svm->spec_ctrl);
 
 	/* Eliminate branch target predictions from guest mode */
 	vmexit_fill_RSB();
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9726,8 +9726,7 @@ static void __noclone vmx_vcpu_run(struc
 	 * is no need to worry about the conditional branch over the wrmsr
 	 * being speculatively taken.
 	 */
-	if (vmx->spec_ctrl)
-		native_wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
+	x86_set_guest_spec_ctrl(vmx->spec_ctrl);
 
 	vmx->__launched = vmx->loaded_vmcs->launched;
 
@@ -9875,8 +9874,7 @@ static void __noclone vmx_vcpu_run(struc
 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
 		vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
 
-	if (vmx->spec_ctrl)
-		native_wrmsrl(MSR_IA32_SPEC_CTRL, 0);
+	x86_restore_host_spec_ctrl(vmx->spec_ctrl);
 
 	/* Eliminate branch target predictions from guest mode */
 	vmexit_fill_RSB();
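
For reference, the intended call pattern from a hypervisor's vcpu-run
path, matching what the svm/vmx hunks above do. A minimal sketch only:
example_vcpu_run(), run_guest() and struct example_vcpu are hypothetical
stand-ins for the real entry code and the saved guest MSR value:

	struct example_vcpu {
		u64 spec_ctrl;	/* guest view, saved across VMEXIT */
	};

	static void example_vcpu_run(struct example_vcpu *vcpu)
	{
		/*
		 * Switch to the guest view of SPEC_CTRL. The helper is
		 * a NOP unless the CPU has IBRS and the guest value
		 * differs from the boot-time x86_spec_ctrl_base, so the
		 * common case does not pay for a WRMSR.
		 */
		x86_set_guest_spec_ctrl(vcpu->spec_ctrl);

		run_guest(vcpu);	/* VMENTER ... VMEXIT */

		/*
		 * Back to the host view; again the MSR is only written
		 * when guest and host values actually differ.
		 */
		x86_restore_host_spec_ctrl(vcpu->spec_ctrl);
	}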