kvm.vger.kernel.org archive mirror
* [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups
@ 2020-12-31  0:26 Sean Christopherson
  2020-12-31  0:26 ` [PATCH 1/9] x86/virt: Eat faults on VMXOFF in reboot flows Sean Christopherson
                   ` (9 more replies)
  0 siblings, 10 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:26 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

This series is a conglomeration of three previous series/patches and a bit
of new code.  None of the previous series are directly related, but they
are all needed to achieve the overarching goal of nuking
__kvm_handle_fault_on_reboot(), which is a rather ugly inline asm macro
that has the unfortunate side effect of inserting in-line JMP+CALL
sequences.
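
As a point of reference, __kvm_handle_fault_on_reboot() wraps an
instruction with an exception-table fixup stub that CALLs
kvm_spurious_fault(); the happy path has to JMP over that stub, which
is where the in-line JMP+CALL comes from.  The asm goto pattern adopted
throughout this series instead points the exception table at a C label,
so the hot path is just the instruction itself.  A simplified sketch of
the two patterns, using VMXOFF as the example instruction (abridged;
the real macro also carries objtool .discard annotations):

	/* old: __ex()/__kvm_handle_fault_on_reboot() style */
	asm volatile("666: vmxoff\n\t"
		     "jmp 668f\n\t"	/* in-line JMP over the fixup stub */
		     "667: call kvm_spurious_fault\n\t"
		     "668:\n\t"
		     _ASM_EXTABLE(666b, 667b));

	/* new: asm goto, a fault lands directly on a C label */
	asm_volatile_goto("1: vmxoff\n\t"
			  _ASM_EXTABLE(1b, %l[fault])
			  ::: "cc", "memory" : fault);
	return;
fault:
	kvm_spurious_fault();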

Patches 1-3 are resurrected from a series by David Reed[1] to fix VMXOFF
bugs in the reboot flows.

Patch 4 is a patch from Uros Bizjak[2] to get rid of custom inline asm in
nested VMX.  This already received Paolo's "Queued, thanks." blessing,
but has not been pushed to kvm.git.  It's included here as there is an
indirect dependency in patch 8.

Patches 5-6 are minor tweaks to KVM's VMX{ON/OFF} paths to use the
kernel's now-fault-tolerant VMXOFF instead of KVM's custom asm.

Patch 7 replaces SVM's __ex()/__kvm_handle_fault_on_reboot() with more
tailored asm goto macros, similar to the existing VMX vmx_asm*() macros.
This is largely an excuse to get rid of __kvm_handle_fault_on_reboot();
the actual benefits of removing JMP+CALL are likely negligible as SVM only
has a few uses of the macro (versus VMX's bajillion VMREADs/VMWRITEs).
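
As a concrete example of the conversion, the clgi() helper goes from
the generic __ex() wrapper to the tailored asm goto macro (excerpted
from patch 7 below; svm_asm() jumps to a local label on a fault and
calls kvm_spurious_fault() from there):

	/* before: generic fault handling via __ex() */
	static inline void clgi(void)
	{
		asm volatile (__ex("clgi"));
	}

	/* after: SVM specific asm goto wrapper */
	static inline void clgi(void)
	{
		svm_asm(clgi);
	}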

Patch 8 removes __ex()/__kvm_handle_fault_on_reboot().

Patch 9 is a very trimmed down version of a different patch from Uros[3],
which cleaned up the __ex()/__kvm_handle_fault_on_reboot() code, as
opposed to zapping them entirely.

[1] https://lkml.kernel.org/r/20200704203809.76391-1-dpreed@deepplum.com
[2] https://lkml.kernel.org/r/20201029134145.107560-1-ubizjak@gmail.com
[3] https://lkml.kernel.org/r/20201221194800.46962-1-ubizjak@gmail.com

David P. Reed (1):
  x86/virt: Mark flags and memory as clobbered by VMXOFF

Sean Christopherson (6):
  x86/virt: Eat faults on VMXOFF in reboot flows
  x86/reboot: Force all cpus to exit VMX root if VMX is supported
  KVM: VMX: Move Intel PT shenanigans out of VMXON/VMXOFF flows
  KVM: VMX: Use the kernel's version of VMXOFF
  KVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions
  KVM: x86: Kill off __ex() and __kvm_handle_fault_on_reboot()

Uros Bizjak (2):
  KVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw
  KVM: x86: Move declaration of kvm_spurious_fault() to x86.h

 arch/x86/include/asm/kvm_host.h | 25 --------------
 arch/x86/include/asm/virtext.h  | 25 ++++++++++----
 arch/x86/kernel/reboot.c        | 30 ++++++-----------
 arch/x86/kvm/svm/sev.c          |  5 ++-
 arch/x86/kvm/svm/svm.c          | 18 +---------
 arch/x86/kvm/svm/svm_ops.h      | 59 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/nested.c       | 32 ++----------------
 arch/x86/kvm/vmx/vmenter.S      |  2 +-
 arch/x86/kvm/vmx/vmx.c          | 28 ++++++----------
 arch/x86/kvm/vmx/vmx.h          |  1 +
 arch/x86/kvm/vmx/vmx_ops.h      |  4 +--
 arch/x86/kvm/x86.c              |  9 ++++-
 arch/x86/kvm/x86.h              |  2 ++
 13 files changed, 117 insertions(+), 123 deletions(-)
 create mode 100644 arch/x86/kvm/svm/svm_ops.h

-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/9] x86/virt: Eat faults on VMXOFF in reboot flows
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
@ 2020-12-31  0:26 ` Sean Christopherson
  2020-12-31  0:26 ` [PATCH 2/9] x86/reboot: Force all cpus to exit VMX root if VMX is supported Sean Christopherson
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:26 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

Silently ignore all faults on VMXOFF in the reboot flows as such faults
are all but guaranteed to be due to the CPU not being in VMX root.
Because (a) VMXOFF may be executed in NMI context, e.g. after VMXOFF but
before CR4.VMXE is cleared, (b) there's no way to query the CPU's VMX
state without faulting, and (c) the whole point is to get out of VMX
root, eating faults is the simplest way to achieve the desired behavior.

Technically, VMXOFF can fault (or fail) for other reasons, but all other
fault and failure scenarios are mode related, i.e. the kernel would have
to magically end up in RM, V86, compat mode, at CPL>0, or running with
the SMI Transfer Monitor active.  The kernel is beyond hosed if any of
those scenarios are encountered; trying to do something fancy in the
error path to handle them cleanly is pointless.

Fixes: 1e9931146c74 ("x86: asm/virtext.h: add cpu_vmxoff() inline function")
Reported-by: David P. Reed <dpreed@deepplum.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/virtext.h | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index 9aad0e0876fb..fda3e7747c22 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -30,15 +30,22 @@ static inline int cpu_has_vmx(void)
 }
 
 
-/** Disable VMX on the current CPU
+/**
+ * cpu_vmxoff() - Disable VMX on the current CPU
  *
- * vmxoff causes a undefined-opcode exception if vmxon was not run
- * on the CPU previously. Only call this function if you know VMX
- * is enabled.
+ * Disable VMX and clear CR4.VMXE (even if VMXOFF faults)
+ *
+ * Note, VMXOFF causes a #UD if the CPU is !post-VMXON, but it's impossible to
+ * atomically track post-VMXON state, e.g. this may be called in NMI context.
+ * Eat all faults as all other faults on VMXOFF are mode related, i.e.
+ * faults are guaranteed to be due to the !post-VMXON check unless the CPU is
+ * magically in RM, VM86, compat mode, or at CPL>0.
  */
 static inline void cpu_vmxoff(void)
 {
-	asm volatile ("vmxoff");
+	asm_volatile_goto("1: vmxoff\n\t"
+			  _ASM_EXTABLE(1b, %l[fault]) :::: fault);
+fault:
 	cr4_clear_bits(X86_CR4_VMXE);
 }
 
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/9] x86/reboot: Force all cpus to exit VMX root if VMX is supported
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
  2020-12-31  0:26 ` [PATCH 1/9] x86/virt: Eat faults on VMXOFF in reboot flows Sean Christopherson
@ 2020-12-31  0:26 ` Sean Christopherson
  2020-12-31  0:26 ` [PATCH 3/9] x86/virt: Mark flags and memory as clobbered by VMXOFF Sean Christopherson
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:26 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

Force all CPUs to do VMXOFF (via NMI shootdown) during an emergency
reboot if VMX is _supported_, as VMX being off on the current CPU does
not prevent other CPUs from being in VMX root (post-VMXON).  This fixes
a bug where a crash/panic reboot could leave other CPUs in VMX root and
prevent them from being woken via INIT-SIPI-SIPI in the new kernel.

Fixes: d176720d34c7 ("x86: disable VMX on all CPUs on reboot")
Cc: stable@vger.kernel.org
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: David P. Reed <dpreed@deepplum.com>
[sean: reworked changelog and further tweaked comment]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kernel/reboot.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index db115943e8bd..efbaef8b4de9 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -538,31 +538,21 @@ static void emergency_vmx_disable_all(void)
 	local_irq_disable();
 
 	/*
-	 * We need to disable VMX on all CPUs before rebooting, otherwise
-	 * we risk hanging up the machine, because the CPU ignores INIT
-	 * signals when VMX is enabled.
+	 * Disable VMX on all CPUs before rebooting, otherwise we risk hanging
+	 * the machine, because the CPU blocks INIT when it's in VMX root.
 	 *
-	 * We can't take any locks and we may be on an inconsistent
-	 * state, so we use NMIs as IPIs to tell the other CPUs to disable
-	 * VMX and halt.
+	 * We can't take any locks and we may be on an inconsistent state, so
+	 * use NMIs as IPIs to tell the other CPUs to exit VMX root and halt.
 	 *
-	 * For safety, we will avoid running the nmi_shootdown_cpus()
-	 * stuff unnecessarily, but we don't have a way to check
-	 * if other CPUs have VMX enabled. So we will call it only if the
-	 * CPU we are running on has VMX enabled.
-	 *
-	 * We will miss cases where VMX is not enabled on all CPUs. This
-	 * shouldn't do much harm because KVM always enable VMX on all
-	 * CPUs anyway. But we can miss it on the small window where KVM
-	 * is still enabling VMX.
+	 * Do the NMI shootdown even if VMX is off on _this_ CPU, as that
+	 * doesn't prevent a different CPU from being in VMX root operation.
 	 */
-	if (cpu_has_vmx() && cpu_vmx_enabled()) {
-		/* Disable VMX on this CPU. */
-		cpu_vmxoff();
+	if (cpu_has_vmx()) {
+		/* Safely force _this_ CPU out of VMX root operation. */
+		__cpu_emergency_vmxoff();
 
-		/* Halt and disable VMX on the other CPUs */
+		/* Halt and exit VMX root operation on the other CPUs. */
 		nmi_shootdown_cpus(vmxoff_nmi);
-
 	}
 }
 
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/9] x86/virt: Mark flags and memory as clobbered by VMXOFF
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
  2020-12-31  0:26 ` [PATCH 1/9] x86/virt: Eat faults on VMXOFF in reboot flows Sean Christopherson
  2020-12-31  0:26 ` [PATCH 2/9] x86/reboot: Force all cpus to exit VMX root if VMX is supported Sean Christopherson
@ 2020-12-31  0:26 ` Sean Christopherson
  2020-12-31  0:26 ` [PATCH 4/9] KVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw Sean Christopherson
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:26 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

From: David P. Reed <dpreed@deepplum.com>

Explicitly tell the compiler that VMXOFF modifies flags (like all VMX
instructions), and mark memory as clobbered since VMXOFF must not be
reordered and also may have memory side effects (though the kernel
really shouldn't be accessing the root VMCS anyways).

Practically speaking, adding the clobbers is most likely a nop; the
primary motivation is to properly document VMXOFF's behavior.

For the flags clobber, both Clang and GCC automatically mark flags as
clobbered; this is noted in commit 4b1e54786e48 ("KVM/x86: Use assembly
instruction mnemonics instead of .byte streams"), which intentionally
removed the previous clobber.  But, neither Clang nor GCC documents
this behavior, and there's no downside to including the clobber.

For the memory clobber, the RFLAGS.IF and CR4.VMXE manipulations that
immediately follow VMXOFF have compiler barriers of their own, i.e.
VMXOFF can't get reordered after clearing CR4.VMXE, which is really
what's of interest.

Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David P. Reed <dpreed@deepplum.com>
[sean: rewrote changelog, dropped comment adjustments]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/virtext.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index fda3e7747c22..2cc585467667 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -44,7 +44,8 @@ static inline int cpu_has_vmx(void)
 static inline void cpu_vmxoff(void)
 {
 	asm_volatile_goto("1: vmxoff\n\t"
-			  _ASM_EXTABLE(1b, %l[fault]) :::: fault);
+			  _ASM_EXTABLE(1b, %l[fault])
+			  ::: "cc", "memory" : fault);
 fault:
 	cr4_clear_bits(X86_CR4_VMXE);
 }
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 4/9] KVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
                   ` (2 preceding siblings ...)
  2020-12-31  0:26 ` [PATCH 3/9] x86/virt: Mark flags and memory as clobbered by VMXOFF Sean Christopherson
@ 2020-12-31  0:26 ` Sean Christopherson
  2020-12-31  0:26 ` [PATCH 5/9] KVM: VMX: Move Intel PT shenanigans out of VMXON/VMXOFF flows Sean Christopherson
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:26 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

From: Uros Bizjak <ubizjak@gmail.com>

Replace the open-coded inline assembly in nested_vmx_check_vmentry_hw
with a call to __vmx_vcpu_run.  The function is not performance
critical, so the (double) GPR save/restore done by __vmx_vcpu_run can
be tolerated as far as performance is concerned.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Reviewed-and-tested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
[sean: dropped versioning info from changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/nested.c  | 32 +++-----------------------------
 arch/x86/kvm/vmx/vmenter.S |  2 +-
 arch/x86/kvm/vmx/vmx.c     |  2 --
 arch/x86/kvm/vmx/vmx.h     |  1 +
 4 files changed, 5 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e2f26564a12d..5bbb4d667370 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -12,6 +12,7 @@
 #include "nested.h"
 #include "pmu.h"
 #include "trace.h"
+#include "vmx.h"
 #include "x86.h"
 
 static bool __read_mostly enable_shadow_vmcs = 1;
@@ -3057,35 +3058,8 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
 		vmx->loaded_vmcs->host_state.cr4 = cr4;
 	}
 
-	asm(
-		"sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */
-		"cmp %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t"
-		"je 1f \n\t"
-		__ex("vmwrite %%" _ASM_SP ", %[HOST_RSP]") "\n\t"
-		"mov %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t"
-		"1: \n\t"
-		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
-
-		/* Check if vmlaunch or vmresume is needed */
-		"cmpb $0, %c[launched](%[loaded_vmcs])\n\t"
-
-		/*
-		 * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set
-		 * RFLAGS.CF on VM-Fail Invalid and set RFLAGS.ZF on VM-Fail
-		 * Valid.  vmx_vmenter() directly "returns" RFLAGS, and so the
-		 * results of VM-Enter is captured via CC_{SET,OUT} to vm_fail.
-		 */
-		"call vmx_vmenter\n\t"
-
-		CC_SET(be)
-	      : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail)
-	      :	[HOST_RSP]"r"((unsigned long)HOST_RSP),
-		[loaded_vmcs]"r"(vmx->loaded_vmcs),
-		[launched]"i"(offsetof(struct loaded_vmcs, launched)),
-		[host_state_rsp]"i"(offsetof(struct loaded_vmcs, host_state.rsp)),
-		[wordsize]"i"(sizeof(ulong))
-	      : "memory"
-	);
+	vm_fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
+				 vmx->loaded_vmcs->launched);
 
 	if (vmx->msr_autoload.host.nr)
 		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index e85aa5faa22d..3a6461694fc2 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -44,7 +44,7 @@
  * they VM-Fail, whereas a successful VM-Enter + VM-Exit will jump
  * to vmx_vmexit.
  */
-SYM_FUNC_START(vmx_vmenter)
+SYM_FUNC_START_LOCAL(vmx_vmenter)
 	/* EFLAGS.ZF is set if VMCS.LAUNCHED == 0 */
 	je 2f
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 75c9c6a0a3a4..65b5f02b199f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6577,8 +6577,6 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 	}
 }
 
-bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
-
 static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 					struct vcpu_vmx *vmx)
 {
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 9d3a557949ac..03fc90569ae1 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -339,6 +339,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
 struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr);
 void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu);
 void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);
+bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
 int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr);
 void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu);
 
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 5/9] KVM: VMX: Move Intel PT shenanigans out of VMXON/VMXOFF flows
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
                   ` (3 preceding siblings ...)
  2020-12-31  0:26 ` [PATCH 4/9] KVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw Sean Christopherson
@ 2020-12-31  0:26 ` Sean Christopherson
  2020-12-31  0:26 ` [PATCH 6/9] KVM: VMX: Use the kernel's version of VMXOFF Sean Christopherson
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:26 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

Move the Intel PT tracking outside of the VMXON/VMXOFF helpers so that
a future patch can drop KVM's kvm_cpu_vmxoff() in favor of the kernel's
cpu_vmxoff() without an associated PT functional change, and without
losing symmetry between the VMXON and VMXOFF flows.

Barring undocumented behavior, this should have no meaningful effects
as Intel PT behavior does not interact with CR4.VMXE.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/vmx.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 65b5f02b199f..131f390ade24 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2265,7 +2265,6 @@ static int kvm_cpu_vmxon(u64 vmxon_pointer)
 	u64 msr;
 
 	cr4_set_bits(X86_CR4_VMXE);
-	intel_pt_handle_vmx(1);
 
 	asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
 			  _ASM_EXTABLE(1b, %l[fault])
@@ -2276,7 +2275,6 @@ static int kvm_cpu_vmxon(u64 vmxon_pointer)
 fault:
 	WARN_ONCE(1, "VMXON faulted, MSR_IA32_FEAT_CTL (0x3a) = 0x%llx\n",
 		  rdmsrl_safe(MSR_IA32_FEAT_CTL, &msr) ? 0xdeadbeef : msr);
-	intel_pt_handle_vmx(0);
 	cr4_clear_bits(X86_CR4_VMXE);
 
 	return -EFAULT;
@@ -2299,9 +2297,13 @@ static int hardware_enable(void)
 	    !hv_get_vp_assist_page(cpu))
 		return -EFAULT;
 
+	intel_pt_handle_vmx(1);
+
 	r = kvm_cpu_vmxon(phys_addr);
-	if (r)
+	if (r) {
+		intel_pt_handle_vmx(0);
 		return r;
+	}
 
 	if (enable_ept)
 		ept_sync_global();
@@ -2327,7 +2329,6 @@ static void kvm_cpu_vmxoff(void)
 {
 	asm volatile (__ex("vmxoff"));
 
-	intel_pt_handle_vmx(0);
 	cr4_clear_bits(X86_CR4_VMXE);
 }
 
@@ -2335,6 +2336,8 @@ static void hardware_disable(void)
 {
 	vmclear_local_loaded_vmcss();
 	kvm_cpu_vmxoff();
+
+	intel_pt_handle_vmx(0);
 }
 
 /*
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 6/9] KVM: VMX: Use the kernel's version of VMXOFF
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
                   ` (4 preceding siblings ...)
  2020-12-31  0:26 ` [PATCH 5/9] KVM: VMX: Move Intel PT shenanigans out of VMXON/VMXOFF flows Sean Christopherson
@ 2020-12-31  0:26 ` Sean Christopherson
  2020-12-31  0:27 ` [PATCH 7/9] KVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions Sean Christopherson
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:26 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

Drop kvm_cpu_vmxoff() in favor of the kernel's cpu_vmxoff().  Modify the
latter to return -EIO on fault so that KVM can invoke
kvm_spurious_fault() when appropriate.  In addition to the obvious code
reuse, dropping kvm_cpu_vmxoff() also eliminates VMX's last usage of the
__ex()/__kvm_handle_fault_on_reboot() macros, thus helping pave the way
toward dropping them entirely.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/virtext.h |  7 ++++++-
 arch/x86/kvm/vmx/vmx.c         | 15 +++------------
 2 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index 2cc585467667..8757078d4442 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -41,13 +41,18 @@ static inline int cpu_has_vmx(void)
  * faults are guaranteed to be due to the !post-VMXON check unless the CPU is
  * magically in RM, VM86, compat mode, or at CPL>0.
  */
-static inline void cpu_vmxoff(void)
+static inline int cpu_vmxoff(void)
 {
 	asm_volatile_goto("1: vmxoff\n\t"
 			  _ASM_EXTABLE(1b, %l[fault])
 			  ::: "cc", "memory" : fault);
+
+	cr4_clear_bits(X86_CR4_VMXE);
+	return 0;
+
 fault:
 	cr4_clear_bits(X86_CR4_VMXE);
+	return -EIO;
 }
 
 static inline int cpu_vmx_enabled(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 131f390ade24..1a3b508ba8c1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2321,21 +2321,12 @@ static void vmclear_local_loaded_vmcss(void)
 		__loaded_vmcs_clear(v);
 }
 
-
-/* Just like cpu_vmxoff(), but with the __kvm_handle_fault_on_reboot()
- * tricks.
- */
-static void kvm_cpu_vmxoff(void)
-{
-	asm volatile (__ex("vmxoff"));
-
-	cr4_clear_bits(X86_CR4_VMXE);
-}
-
 static void hardware_disable(void)
 {
 	vmclear_local_loaded_vmcss();
-	kvm_cpu_vmxoff();
+
+	if (cpu_vmxoff())
+		kvm_spurious_fault();
 
 	intel_pt_handle_vmx(0);
 }
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 7/9] KVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
                   ` (5 preceding siblings ...)
  2020-12-31  0:26 ` [PATCH 6/9] KVM: VMX: Use the kernel's version of VMXOFF Sean Christopherson
@ 2020-12-31  0:27 ` Sean Christopherson
  2020-12-31  0:27 ` [PATCH 8/9] KVM: x86: Kill off __ex() and __kvm_handle_fault_on_reboot() Sean Christopherson
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:27 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

Add svm_asm*() macros, a la the existing vmx_asm*() macros, to handle
faults on SVM instructions instead of using the generic __ex(), a.k.a.
__kvm_handle_fault_on_reboot().  Using asm goto generates slightly
better code as it eliminates the in-line JMP+CALL sequences that are
needed by __kvm_handle_fault_on_reboot() to avoid triggering BUG()
from fixup (which generates bad stack traces).

Using SVM specific macros also drops the last user of __ex() and the
last asm linkage to kvm_spurious_fault(), and adds a helper for
VMSAVE, which may gain an additional call site in the future (as part
of optimizing SVM context switching).

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/sev.c     |  3 +-
 arch/x86/kvm/svm/svm.c     | 16 +----------
 arch/x86/kvm/svm/svm_ops.h | 59 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 62 insertions(+), 16 deletions(-)
 create mode 100644 arch/x86/kvm/svm/svm_ops.h

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 9858d5ae9ddd..4511d7ccdb19 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -22,6 +22,7 @@
 
 #include "x86.h"
 #include "svm.h"
+#include "svm_ops.h"
 #include "cpuid.h"
 #include "trace.h"
 
@@ -2001,7 +2002,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
 	 * of which one step is to perform a VMLOAD. Since hardware does not
 	 * perform a VMSAVE on VMRUN, the host savearea must be updated.
 	 */
-	asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
+	vmsave(__sme_page_pa(sd->save_area));
 
 	/*
 	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cce0143a6f80..4308ab5ca27e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -41,6 +41,7 @@
 #include "trace.h"
 
 #include "svm.h"
+#include "svm_ops.h"
 
 #define __ex(x) __kvm_handle_fault_on_reboot(x)
 
@@ -246,21 +247,6 @@ u32 svm_msrpm_offset(u32 msr)
 
 #define MAX_INST_SIZE 15
 
-static inline void clgi(void)
-{
-	asm volatile (__ex("clgi"));
-}
-
-static inline void stgi(void)
-{
-	asm volatile (__ex("stgi"));
-}
-
-static inline void invlpga(unsigned long addr, u32 asid)
-{
-	asm volatile (__ex("invlpga %1, %0") : : "c"(asid), "a"(addr));
-}
-
 static int get_max_npt_level(void)
 {
 #ifdef CONFIG_X86_64
diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
new file mode 100644
index 000000000000..0c8377aee52c
--- /dev/null
+++ b/arch/x86/kvm/svm/svm_ops.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_X86_SVM_OPS_H
+#define __KVM_X86_SVM_OPS_H
+
+#include <linux/compiler_types.h>
+
+#include <asm/kvm_host.h>
+
+#define svm_asm(insn, clobber...)				\
+do {								\
+	asm_volatile_goto("1: " __stringify(insn) "\n\t"	\
+			  _ASM_EXTABLE(1b, %l[fault])		\
+			  ::: clobber : fault);			\
+	return;							\
+fault:								\
+	kvm_spurious_fault();					\
+} while (0)
+
+#define svm_asm1(insn, op1, clobber...)				\
+do {								\
+	asm_volatile_goto("1: "  __stringify(insn) " %0\n\t"	\
+			  _ASM_EXTABLE(1b, %l[fault])		\
+			  :: op1 : clobber : fault);		\
+	return;							\
+fault:								\
+	kvm_spurious_fault();					\
+} while (0)
+
+#define svm_asm2(insn, op1, op2, clobber...)				\
+do {									\
+	asm_volatile_goto("1: "  __stringify(insn) " %1, %0\n\t"	\
+			  _ASM_EXTABLE(1b, %l[fault])			\
+			  :: op1, op2 : clobber : fault);		\
+	return;								\
+fault:									\
+	kvm_spurious_fault();						\
+} while (0)
+
+static inline void clgi(void)
+{
+	svm_asm(clgi);
+}
+
+static inline void stgi(void)
+{
+	svm_asm(stgi);
+}
+
+static inline void invlpga(unsigned long addr, u32 asid)
+{
+	svm_asm2(invlpga, "c"(asid), "a"(addr));
+}
+
+static inline void vmsave(hpa_t pa)
+{
+	svm_asm1(vmsave, "a" (pa), "memory");
+}
+
+#endif /* __KVM_X86_SVM_OPS_H */
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 8/9] KVM: x86: Kill off __ex() and __kvm_handle_fault_on_reboot()
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
                   ` (6 preceding siblings ...)
  2020-12-31  0:27 ` [PATCH 7/9] KVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions Sean Christopherson
@ 2020-12-31  0:27 ` Sean Christopherson
  2020-12-31  0:27 ` [PATCH 9/9] KVM: x86: Move declaration of kvm_spurious_fault() to x86.h Sean Christopherson
  2021-01-27 17:26 ` [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Paolo Bonzini
  9 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:27 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

Remove the __kvm_handle_fault_on_reboot() and __ex() macros now that all
VMX and SVM instructions use asm goto to handle the fault (or in the
case of VMREAD, completely custom logic).  Drop kvm_spurious_fault()'s
asmlinkage annotation as __kvm_handle_fault_on_reboot() was the only
flow that invoked it from assembly code.

Cc: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm_host.h | 25 +------------------------
 arch/x86/kvm/svm/sev.c          |  2 --
 arch/x86/kvm/svm/svm.c          |  2 --
 arch/x86/kvm/vmx/vmx_ops.h      |  2 --
 arch/x86/kvm/x86.c              |  9 ++++++++-
 5 files changed, 9 insertions(+), 31 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3ab7b46087b7..51ba20ffaedb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1634,30 +1634,7 @@ enum {
 #define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
 #define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
 
-asmlinkage void kvm_spurious_fault(void);
-
-/*
- * Hardware virtualization extension instructions may fault if a
- * reboot turns off virtualization while processes are running.
- * Usually after catching the fault we just panic; during reboot
- * instead the instruction is ignored.
- */
-#define __kvm_handle_fault_on_reboot(insn)				\
-	"666: \n\t"							\
-	insn "\n\t"							\
-	"jmp	668f \n\t"						\
-	"667: \n\t"							\
-	"1: \n\t"							\
-	".pushsection .discard.instr_begin \n\t"			\
-	".long 1b - . \n\t"						\
-	".popsection \n\t"						\
-	"call	kvm_spurious_fault \n\t"				\
-	"1: \n\t"							\
-	".pushsection .discard.instr_end \n\t"				\
-	".long 1b - . \n\t"						\
-	".popsection \n\t"						\
-	"668: \n\t"							\
-	_ASM_EXTABLE(666b, 667b)
+void kvm_spurious_fault(void);
 
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 4511d7ccdb19..e7080e5056a4 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -26,8 +26,6 @@
 #include "cpuid.h"
 #include "trace.h"
 
-#define __ex(x) __kvm_handle_fault_on_reboot(x)
-
 static u8 sev_enc_bit;
 static int sev_flush_asids(void);
 static DECLARE_RWSEM(sev_deactivate_lock);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4308ab5ca27e..e4907e490c24 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -43,8 +43,6 @@
 #include "svm.h"
 #include "svm_ops.h"
 
-#define __ex(x) __kvm_handle_fault_on_reboot(x)
-
 MODULE_AUTHOR("Qumranet");
 MODULE_LICENSE("GPL");
 
diff --git a/arch/x86/kvm/vmx/vmx_ops.h b/arch/x86/kvm/vmx/vmx_ops.h
index 692b0c31c9c8..7b6fbe103c61 100644
--- a/arch/x86/kvm/vmx/vmx_ops.h
+++ b/arch/x86/kvm/vmx/vmx_ops.h
@@ -10,8 +10,6 @@
 #include "evmcs.h"
 #include "vmcs.h"
 
-#define __ex(x) __kvm_handle_fault_on_reboot(x)
-
 asmlinkage void vmread_error(unsigned long field, bool fault);
 __attribute__((regparm(0))) void vmread_error_trampoline(unsigned long field,
 							 bool fault);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3f7c1fc7a3ce..836912b42030 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -412,7 +412,14 @@ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_set_apic_base);
 
-asmlinkage __visible noinstr void kvm_spurious_fault(void)
+/*
+ * Handle a fault on a hardware virtualization (VMX or SVM) instruction.
+ *
+ * Hardware virtualization extension instructions may fault if a reboot turns
+ * off virtualization while processes are running.  Usually after catching the
+ * fault we just panic; during reboot instead the instruction is ignored.
+ */
+noinstr void kvm_spurious_fault(void)
 {
 	/* Fault while not rebooting.  We want the trace. */
 	BUG_ON(!kvm_rebooting);
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 9/9] KVM: x86: Move declaration of kvm_spurious_fault() to x86.h
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
                   ` (7 preceding siblings ...)
  2020-12-31  0:27 ` [PATCH 8/9] KVM: x86: Kill off __ex() and __kvm_handle_fault_on_reboot() Sean Christopherson
@ 2020-12-31  0:27 ` Sean Christopherson
  2021-01-27 17:26 ` [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Paolo Bonzini
  9 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2020-12-31  0:27 UTC (permalink / raw)
  To: Paolo Bonzini, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, H. Peter Anvin, kvm, linux-kernel, David P . Reed,
	Randy Dunlap, Uros Bizjak

From: Uros Bizjak <ubizjak@gmail.com>

Move the declaration of kvm_spurious_fault() to KVM's "private" x86.h,
as it should never be called by anything other than low level KVM code.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
[sean: rebased to a series without __ex()/__kvm_handle_fault_on_reboot()]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/kvm_host.h | 2 --
 arch/x86/kvm/svm/svm_ops.h      | 2 +-
 arch/x86/kvm/vmx/vmx_ops.h      | 2 +-
 arch/x86/kvm/x86.h              | 2 ++
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 51ba20ffaedb..feba0ec5474b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1634,8 +1634,6 @@ enum {
 #define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
 #define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
 
-void kvm_spurious_fault(void);
-
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
 			unsigned flags);
diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
index 0c8377aee52c..aa028ef5b1e9 100644
--- a/arch/x86/kvm/svm/svm_ops.h
+++ b/arch/x86/kvm/svm/svm_ops.h
@@ -4,7 +4,7 @@
 
 #include <linux/compiler_types.h>
 
-#include <asm/kvm_host.h>
+#include "x86.h"
 
 #define svm_asm(insn, clobber...)				\
 do {								\
diff --git a/arch/x86/kvm/vmx/vmx_ops.h b/arch/x86/kvm/vmx/vmx_ops.h
index 7b6fbe103c61..7e3cb53c413f 100644
--- a/arch/x86/kvm/vmx/vmx_ops.h
+++ b/arch/x86/kvm/vmx/vmx_ops.h
@@ -4,11 +4,11 @@
 
 #include <linux/nospec.h>
 
-#include <asm/kvm_host.h>
 #include <asm/vmx.h>
 
 #include "evmcs.h"
 #include "vmcs.h"
+#include "x86.h"
 
 asmlinkage void vmread_error(unsigned long field, bool fault);
 __attribute__((regparm(0))) void vmread_error_trampoline(unsigned long field,
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index c5ee0f5ce0f1..0d830945ae38 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -8,6 +8,8 @@
 #include "kvm_cache_regs.h"
 #include "kvm_emulate.h"
 
+void kvm_spurious_fault(void);
+
 #define KVM_DEFAULT_PLE_GAP		128
 #define KVM_VMX_DEFAULT_PLE_WINDOW	4096
 #define KVM_DEFAULT_PLE_WINDOW_GROW	2
-- 
2.29.2.729.g45daf8777d-goog


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups
  2020-12-31  0:26 [PATCH 0/9] x86/virt: KVM: x86: Exception handling fixes/cleanups Sean Christopherson
                   ` (8 preceding siblings ...)
  2020-12-31  0:27 ` [PATCH 9/9] KVM: x86: Move declaration of kvm_spurious_fault() to x86.h Sean Christopherson
@ 2021-01-27 17:26 ` Paolo Bonzini
  9 siblings, 0 replies; 11+ messages in thread
From: Paolo Bonzini @ 2021-01-27 17:26 UTC (permalink / raw)
  To: Sean Christopherson, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	H. Peter Anvin, kvm, linux-kernel, David P . Reed, Randy Dunlap,
	Uros Bizjak

On 31/12/20 01:26, Sean Christopherson wrote:
> This series is a conglomeration of three previous series/patches and a bit
> of new code.  None of the previous series are directly related, but they
> are all needed to achieve the overarching goal of nuking
> __kvm_handle_fault_on_reboot(), which is a rather ugly inline asm macro
> that has the unfortunate side effect of inserting in-line JMP+CALL
> sequences.
> 
> Patches 1-3 are resurrected from a series by David Reed[1] to fix VMXOFF
> bugs in the reboot flows.
> 
> Patch 4 is a patch from Uros Bizjak[2] to get rid of custom inline asm in
> nested VMX.  This already received Paolo's "Queued, thanks." blessing,
> but has not been pushed to kvm.git.  It's included here as there is an
> indirect dependency in patch 8.
> 
> Patches 5-6 are minor tweaks to KVM's VMX{ON/OFF} paths to use the
> kernel's now-fault-tolerant VMXOFF instead of KVM's custom asm.
> 
> Patch 7 replaces SVM's __ex()/__kvm_handle_fault_on_reboot() with more
> tailored asm goto macros, similar to the existing VMX vmx_asm*() macros.
> This is largely an excuse to get rid of __kvm_handle_fault_on_reboot();
> the actual benefits of removing JMP+CALL are likely negligible as SVM only
> has a few uses of the macro (versus VMX's bajillion VMREADs/VMWRITEs).
> 
> Patch 8 removes __ex()/__kvm_handle_fault_on_reboot().
> 
> Patch 9 is a very trimmed down version of a different patch from Uros[3],
> which cleaned up the __ex()/__kvm_handle_fault_on_reboot() code, as
> opposed to zapping them entirely.
> 
> [1] https://lkml.kernel.org/r/20200704203809.76391-1-dpreed@deepplum.com
> [2] https://lkml.kernel.org/r/20201029134145.107560-1-ubizjak@gmail.com
> [3] https://lkml.kernel.org/r/20201221194800.46962-1-ubizjak@gmail.com
> 
> David P. Reed (1):
>    x86/virt: Mark flags and memory as clobbered by VMXOFF
> 
> Sean Christopherson (6):
>    x86/virt: Eat faults on VMXOFF in reboot flows
>    x86/reboot: Force all cpus to exit VMX root if VMX is supported
>    KVM: VMX: Move Intel PT shenanigans out of VMXON/VMXOFF flows
>    KVM: VMX: Use the kernel's version of VMXOFF
>    KVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions
>    KVM: x86: Kill off __ex() and __kvm_handle_fault_on_reboot()
> 
> Uros Bizjak (2):
>    KVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw
>    KVM: x86: Move declaration of kvm_spurious_fault() to x86.h
> 
>   arch/x86/include/asm/kvm_host.h | 25 --------------
>   arch/x86/include/asm/virtext.h  | 25 ++++++++++----
>   arch/x86/kernel/reboot.c        | 30 ++++++-----------
>   arch/x86/kvm/svm/sev.c          |  5 ++-
>   arch/x86/kvm/svm/svm.c          | 18 +---------
>   arch/x86/kvm/svm/svm_ops.h      | 59 +++++++++++++++++++++++++++++++++
>   arch/x86/kvm/vmx/nested.c       | 32 ++----------------
>   arch/x86/kvm/vmx/vmenter.S      |  2 +-
>   arch/x86/kvm/vmx/vmx.c          | 28 ++++++----------
>   arch/x86/kvm/vmx/vmx.h          |  1 +
>   arch/x86/kvm/vmx/vmx_ops.h      |  4 +--
>   arch/x86/kvm/x86.c              |  9 ++++-
>   arch/x86/kvm/x86.h              |  2 ++
>   13 files changed, 117 insertions(+), 123 deletions(-)
>   create mode 100644 arch/x86/kvm/svm/svm_ops.h
> 

Queued, thanks.

Paolo


^ permalink raw reply	[flat|nested] 11+ messages in thread

