kvm.vger.kernel.org archive mirror
* [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs
@ 2019-07-19 17:25 Sean Christopherson
  2019-07-19 17:25 ` [PATCH 1/4] objtool: KVM: x86: Check kvm_rebooting in kvm_spurious_fault() Sean Christopherson
                   ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-07-19 17:25 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Josh Poimboeuf, Peter Zijlstra
  Cc: kvm

An in-flight patch[1] to make __kvm_handle_fault_on_reboot() play nice
with objtool will add a JMP after most VMX instructions so that the reboot
macro can use an actual CALL to kvm_spurious_fault() instead of a funky
PUSH+JMP facsimile.

Rework the low level VMX instruction helpers to handle unexpected faults
manually instead of relying on the "fault on reboot" macro.  By using
asm-goto, most helpers can branch directly to an in-function call to
kvm_spurious_fault(), which can then be optimized by compilers to reside
out-of-line at the end of the function instead of inline as done by
"fault on reboot".

The net impact relative to the current code base is more or less a nop
when building with a compiler that supports __GCC_ASM_FLAG_OUTPUTS__.
A bunch of code that was previously in .fixup gets moved into the slow
paths of functions, but the fast paths are basically unchanged.

Without __GCC_ASM_FLAG_OUTPUTS__, manually coding the Jcc is a net
positive as CC_SET() without compiler support almost always generates a
SETcc+CMP+Jcc sequence, which is now replaced with a single Jcc.

A small bonus is that the Jcc instrs are hinted to predict that the VMX
instr will be successful.

[1] https://lkml.kernel.org/r/64a9b64d127e87b6920a97afde8e96ea76f6524e.1563413318.git.jpoimboe@redhat.com
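As a rough user-space analogue (plain C, no inline asm or real VMX instructions; the helper names are invented for illustration), the restructuring described above replaces a flag-output-plus-test sequence with a direct branch to an out-of-line error label:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a VMX instruction: "fails" on negative input. */
static bool fake_vmx_insn(long arg)
{
	return arg >= 0;
}

/* Old shape: the asm block outputs a flag (CC_OUT), and C code tests it,
 * which without __GCC_ASM_FLAG_OUTPUTS__ materializes a SETcc+TEST+Jcc. */
static int helper_old(long arg)
{
	bool error = !fake_vmx_insn(arg);

	if (error) {
		fprintf(stderr, "op failed: %ld\n", arg);
		return -1;
	}
	return 0;
}

/* New shape: branch straight to an out-of-line label, as asm-goto lets the
 * real helpers do with "jna %l[error]"; the fast path falls through without
 * ever materializing a flag value, and the compiler may place the error
 * block at the end of the function. */
static int helper_new(long arg)
{
	if (!fake_vmx_insn(arg))
		goto error;		/* the single hinted Jcc */
	return 0;
error:
	fprintf(stderr, "op failed: %ld\n", arg);
	return -1;
}
```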

Sean Christopherson (4):
  objtool: KVM: x86: Check kvm_rebooting in kvm_spurious_fault()
  KVM: VMX: Optimize VMX instruction error and fault handling
  KVM: VMX: Add error handling to VMREAD helper
  KVM: x86: Drop ____kvm_handle_fault_on_reboot()

 arch/x86/include/asm/kvm_host.h |  6 +--
 arch/x86/kvm/vmx/ops.h          | 93 ++++++++++++++++++++-------------
 arch/x86/kvm/vmx/vmx.c          | 42 +++++++++++++++
 arch/x86/kvm/x86.c              |  3 +-
 tools/objtool/check.c           |  1 -
 5 files changed, 102 insertions(+), 43 deletions(-)

-- 
2.22.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/4] objtool: KVM: x86: Check kvm_rebooting in kvm_spurious_fault()
  2019-07-19 17:25 [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Sean Christopherson
@ 2019-07-19 17:25 ` Sean Christopherson
  2019-07-19 17:25 ` [PATCH 2/4] KVM: VMX: Optimize VMX instruction error and fault handling Sean Christopherson
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-07-19 17:25 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Josh Poimboeuf, Peter Zijlstra
  Cc: kvm

Explicitly check kvm_rebooting in kvm_spurious_fault() prior to invoking
BUG(), as opposed to assuming the caller has already done so.  Letting
kvm_spurious_fault() be called "directly" will allow VMX to better
optimize its low level assembly flows.

As a happy side effect, kvm_spurious_fault() no longer needs to be
marked as a dead end since it doesn't unconditionally BUG().

Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/x86.c    | 3 ++-
 tools/objtool/check.c | 1 -
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9d7b9e6a0939..7bbfed3877bd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -356,7 +356,8 @@ EXPORT_SYMBOL_GPL(kvm_set_apic_base);
 asmlinkage __visible void kvm_spurious_fault(void)
 {
 	/* Fault while not rebooting.  We want the trace. */
-	BUG();
+	if (!kvm_rebooting)
+		BUG();
 }
 EXPORT_SYMBOL_GPL(kvm_spurious_fault);
 
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 172f99195726..bc3b405acce9 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -127,7 +127,6 @@ static int __dead_end_function(struct objtool_file *file, struct symbol *func,
 		"do_task_dead",
 		"__module_put_and_exit",
 		"complete_and_exit",
-		"kvm_spurious_fault",
 		"__reiserfs_panic",
 		"lbug_with_loc",
 		"fortify_panic",
-- 
2.22.0



* [PATCH 2/4] KVM: VMX: Optimize VMX instruction error and fault handling
  2019-07-19 17:25 [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Sean Christopherson
  2019-07-19 17:25 ` [PATCH 1/4] objtool: KVM: x86: Check kvm_rebooting in kvm_spurious_fault() Sean Christopherson
@ 2019-07-19 17:25 ` Sean Christopherson
  2019-07-19 17:25 ` [PATCH 3/4] KVM: VMX: Add error handling to VMREAD helper Sean Christopherson
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-07-19 17:25 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Josh Poimboeuf, Peter Zijlstra
  Cc: kvm

An in-flight patch[1] to tweak ____kvm_handle_fault_on_reboot() will
result in an extra JMP being inserted after every VMX instruction.
Rework the VMX instruction helpers using asm-goto to branch directly
to error/fault "handlers" in order to eliminate the taken branch in the
happy path.

Opportunistically clean up the helpers so that they all have consistent
error handling and messages.  Somewhat arbitrarily WARN_ONCE and then
fall back to ratelimited warnings, i.e. yell loudly that something has
gone wrong and do a best effort capture of unique failures, but don't
spam the kernel log.
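A minimal user-space sketch of that reporting policy (the first failure warns loudly exactly once, later ones pass through a small token bucket standing in for pr_warn_ratelimited; burst size and interval are arbitrary illustrative choices, not the kernel's):

```c
#include <stdbool.h>
#include <time.h>

#define WARN_BURST	5	/* warnings allowed per window (arbitrary) */
#define WARN_INTERVAL	5	/* window length in seconds (arbitrary) */

static bool should_warn(time_t now)
{
	static bool warned_once;
	static int tokens;
	static time_t window_start;

	if (!warned_once) {
		warned_once = true;
		return true;			/* the single loud warning */
	}
	if (now - window_start >= WARN_INTERVAL) {
		window_start = now;		/* new window: refill bucket */
		tokens = WARN_BURST;
	}
	if (tokens > 0) {
		tokens--;
		return true;			/* rate-limited warning */
	}
	return false;				/* suppressed */
}
```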

Leave the usage of ____kvm_handle_fault_on_reboot()/__ex() in
kvm_cpu_vmxoff() and nested_vmx_check_vmentry_hw() as is.  The VMXOFF
case is not a fast path, i.e. the cleanliness of __ex() is worth the
JMP, and an extra JMP in nested_vmx_check_vmentry_hw() is unavoidable.

Note, VMREAD cannot get the asm-goto treatment as output operands aren't
compatible with GCC's asm-goto due to internal compiler restrictions.

[1] https://lkml.kernel.org/r/64a9b64d127e87b6920a97afde8e96ea76f6524e.1563413318.git.jpoimboe@redhat.com

Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/ops.h | 72 +++++++++++++++++++++++-------------------
 arch/x86/kvm/vmx/vmx.c | 34 ++++++++++++++++++++
 2 files changed, 74 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
index 2200fb698dd0..79e25d49d4d9 100644
--- a/arch/x86/kvm/vmx/ops.h
+++ b/arch/x86/kvm/vmx/ops.h
@@ -14,6 +14,12 @@
 #define __ex_clear(x, reg) \
 	____kvm_handle_fault_on_reboot(x, "xor " reg ", " reg)
 
+void vmwrite_error(unsigned long field, unsigned long value);
+void vmclear_error(struct vmcs *vmcs, u64 phys_addr);
+void vmptrld_error(struct vmcs *vmcs, u64 phys_addr);
+void invvpid_error(unsigned long ext, u16 vpid, gva_t gva);
+void invept_error(unsigned long ext, u64 eptp, gpa_t gpa);
+
 static __always_inline void vmcs_check16(unsigned long field)
 {
 	BUILD_BUG_ON_MSG(__builtin_constant_p(field) && ((field) & 0x6001) == 0x2000,
@@ -103,21 +109,39 @@ static __always_inline unsigned long vmcs_readl(unsigned long field)
 	return __vmcs_readl(field);
 }
 
-static noinline void vmwrite_error(unsigned long field, unsigned long value)
-{
-	printk(KERN_ERR "vmwrite error: reg %lx value %lx (err %d)\n",
-	       field, value, vmcs_read32(VM_INSTRUCTION_ERROR));
-	dump_stack();
-}
+#define vmx_asm1(insn, op1, error_args...)				\
+do {									\
+	asm_volatile_goto("1: " __stringify(insn) " %0\n\t"		\
+			  ".byte 0x2e\n\t" /* branch not taken hint */	\
+			  "jna %l[error]\n\t"				\
+			  _ASM_EXTABLE(1b, %l[fault])			\
+			  : : op1 : "cc" : error, fault);		\
+	return;								\
+error:									\
+	insn##_error(error_args);					\
+	return;								\
+fault:									\
+	kvm_spurious_fault();						\
+} while (0)
+
+#define vmx_asm2(insn, op1, op2, error_args...)				\
+do {									\
+	asm_volatile_goto("1: "  __stringify(insn) " %1, %0\n\t"	\
+			  ".byte 0x2e\n\t" /* branch not taken hint */	\
+			  "jna %l[error]\n\t"				\
+			  _ASM_EXTABLE(1b, %l[fault])			\
+			  : : op1, op2 : "cc" : error, fault);		\
+	return;								\
+error:									\
+	insn##_error(error_args);					\
+	return;								\
+fault:									\
+	kvm_spurious_fault();						\
+} while (0)
 
 static __always_inline void __vmcs_writel(unsigned long field, unsigned long value)
 {
-	bool error;
-
-	asm volatile (__ex("vmwrite %2, %1") CC_SET(na)
-		      : CC_OUT(na) (error) : "r"(field), "rm"(value));
-	if (unlikely(error))
-		vmwrite_error(field, value);
+	vmx_asm2(vmwrite, "r"(field), "rm"(value), field, value);
 }
 
 static __always_inline void vmcs_write16(unsigned long field, u16 value)
@@ -182,28 +206,18 @@ static __always_inline void vmcs_set_bits(unsigned long field, u32 mask)
 static inline void vmcs_clear(struct vmcs *vmcs)
 {
 	u64 phys_addr = __pa(vmcs);
-	bool error;
 
-	asm volatile (__ex("vmclear %1") CC_SET(na)
-		      : CC_OUT(na) (error) : "m"(phys_addr));
-	if (unlikely(error))
-		printk(KERN_ERR "kvm: vmclear fail: %p/%llx\n",
-		       vmcs, phys_addr);
+	vmx_asm1(vmclear, "m"(phys_addr), vmcs, phys_addr);
 }
 
 static inline void vmcs_load(struct vmcs *vmcs)
 {
 	u64 phys_addr = __pa(vmcs);
-	bool error;
 
 	if (static_branch_unlikely(&enable_evmcs))
 		return evmcs_load(phys_addr);
 
-	asm volatile (__ex("vmptrld %1") CC_SET(na)
-		      : CC_OUT(na) (error) : "m"(phys_addr));
-	if (unlikely(error))
-		printk(KERN_ERR "kvm: vmptrld %p/%llx failed\n",
-		       vmcs, phys_addr);
+	vmx_asm1(vmptrld, "m"(phys_addr), vmcs, phys_addr);
 }
 
 static inline void __invvpid(unsigned long ext, u16 vpid, gva_t gva)
@@ -213,11 +227,8 @@ static inline void __invvpid(unsigned long ext, u16 vpid, gva_t gva)
 		u64 rsvd : 48;
 		u64 gva;
 	} operand = { vpid, 0, gva };
-	bool error;
 
-	asm volatile (__ex("invvpid %2, %1") CC_SET(na)
-		      : CC_OUT(na) (error) : "r"(ext), "m"(operand));
-	BUG_ON(error);
+	vmx_asm2(invvpid, "r"(ext), "m"(operand), ext, vpid, gva);
 }
 
 static inline void __invept(unsigned long ext, u64 eptp, gpa_t gpa)
@@ -225,11 +236,8 @@ static inline void __invept(unsigned long ext, u64 eptp, gpa_t gpa)
 	struct {
 		u64 eptp, gpa;
 	} operand = {eptp, gpa};
-	bool error;
 
-	asm volatile (__ex("invept %2, %1") CC_SET(na)
-		      : CC_OUT(na) (error) : "r"(ext), "m"(operand));
-	BUG_ON(error);
+	vmx_asm2(invept, "r"(ext), "m"(operand), ext, eptp, gpa);
 }
 
 static inline bool vpid_sync_vcpu_addr(int vpid, gva_t addr)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 84f8d49a2fd2..1770622e9608 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -343,6 +343,40 @@ static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bit
 
 void vmx_vmexit(void);
 
+#define vmx_insn_failed(fmt...)		\
+do {					\
+	WARN_ONCE(1, fmt);		\
+	pr_warn_ratelimited(fmt);	\
+} while (0)
+
+noinline void vmwrite_error(unsigned long field, unsigned long value)
+{
+	vmx_insn_failed("kvm: vmwrite failed: field=%lx val=%lx err=%d\n",
+			field, value, vmcs_read32(VM_INSTRUCTION_ERROR));
+}
+
+noinline void vmclear_error(struct vmcs *vmcs, u64 phys_addr)
+{
+	vmx_insn_failed("kvm: vmclear failed: %p/%llx\n", vmcs, phys_addr);
+}
+
+noinline void vmptrld_error(struct vmcs *vmcs, u64 phys_addr)
+{
+	vmx_insn_failed("kvm: vmptrld failed: %p/%llx\n", vmcs, phys_addr);
+}
+
+noinline void invvpid_error(unsigned long ext, u16 vpid, gva_t gva)
+{
+	vmx_insn_failed("kvm: invvpid failed: ext=0x%lx vpid=%u gva=0x%lx\n",
+			ext, vpid, gva);
+}
+
+noinline void invept_error(unsigned long ext, u64 eptp, gpa_t gpa)
+{
+	vmx_insn_failed("kvm: invept failed: ext=0x%lx eptp=%llx gpa=0x%llx\n",
+			ext, eptp, gpa);
+}
+
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 DEFINE_PER_CPU(struct vmcs *, current_vmcs);
 /*
-- 
2.22.0



* [PATCH 3/4] KVM: VMX: Add error handling to VMREAD helper
  2019-07-19 17:25 [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Sean Christopherson
  2019-07-19 17:25 ` [PATCH 1/4] objtool: KVM: x86: Check kvm_rebooting in kvm_spurious_fault() Sean Christopherson
  2019-07-19 17:25 ` [PATCH 2/4] KVM: VMX: Optimize VMX instruction error and fault handling Sean Christopherson
@ 2019-07-19 17:25 ` Sean Christopherson
  2019-07-19 17:25 ` [PATCH 4/4] KVM: x86: Drop ____kvm_handle_fault_on_reboot() Sean Christopherson
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-07-19 17:25 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Josh Poimboeuf, Peter Zijlstra
  Cc: kvm

An in-flight patch[1] to tweak ____kvm_handle_fault_on_reboot() will
result in an extra JMP being inserted after every VMX instruction.
Since VMREAD flows will soon have a taken branch, bite the bullet and
add full error handling to VMREAD, i.e. preemptively replace the
to-be-added JMP with a hinted Jcc.

To minimize the code footprint, add a helper function, vmread_error(),
to handle both faults and failures so that the inline flow has a single
CALL.

[1] https://lkml.kernel.org/r/64a9b64d127e87b6920a97afde8e96ea76f6524e.1563413318.git.jpoimboe@redhat.com
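Conceptually (a simplified sketch, not the kernel's actual layout, which stores relative offsets and binary-searches a sorted table), the _ASM_EXTABLE(1b, 4b) annotation in the diff below records a (faulting instruction, fixup) address pair that the trap handler consults:

```c
#include <stddef.h>

/* Simplified exception-table entry: if the instruction at insn_addr
 * faults, the trap handler resumes execution at fixup_addr instead of
 * panicking. */
struct extable_entry {
	unsigned long insn_addr;
	unsigned long fixup_addr;
};

/* Linear-search stand-in for the kernel's sorted binary search. */
static unsigned long find_fixup(const struct extable_entry *table,
				size_t n, unsigned long fault_ip)
{
	for (size_t i = 0; i < n; i++)
		if (table[i].insn_addr == fault_ip)
			return table[i].fixup_addr;
	return 0;	/* no entry: the fault is fatal */
}
```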

Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/ops.h | 21 +++++++++++++++++----
 arch/x86/kvm/vmx/vmx.c |  8 ++++++++
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
index 79e25d49d4d9..45eaedee2ac0 100644
--- a/arch/x86/kvm/vmx/ops.h
+++ b/arch/x86/kvm/vmx/ops.h
@@ -11,9 +11,8 @@
 #include "vmcs.h"
 
 #define __ex(x) __kvm_handle_fault_on_reboot(x)
-#define __ex_clear(x, reg) \
-	____kvm_handle_fault_on_reboot(x, "xor " reg ", " reg)
 
+asmlinkage void vmread_error(unsigned long field, bool fault);
 void vmwrite_error(unsigned long field, unsigned long value);
 void vmclear_error(struct vmcs *vmcs, u64 phys_addr);
 void vmptrld_error(struct vmcs *vmcs, u64 phys_addr);
@@ -68,8 +67,22 @@ static __always_inline unsigned long __vmcs_readl(unsigned long field)
 {
 	unsigned long value;
 
-	asm volatile (__ex_clear("vmread %1, %0", "%k0")
-		      : "=r"(value) : "r"(field));
+	asm volatile("1: vmread %2, %1\n\t"
+		     ".byte 0x3e\n\t" /* branch taken hint */
+		     "ja 3f\n\t"
+		     "mov %2, %%" _ASM_ARG1 "\n\t"
+		     "xor %%" _ASM_ARG2 ", %%" _ASM_ARG2 "\n\t"
+		     "2: call vmread_error\n\t"
+		     "xor %k1, %k1\n\t"
+		     "3:\n\t"
+
+		     ".pushsection .fixup, \"ax\"\n\t"
+		     "4: mov %2, %%" _ASM_ARG1 "\n\t"
+		     "mov $1, %%" _ASM_ARG2 "\n\t"
+		     "jmp 2b\n\t"
+		     ".popsection\n\t"
+		     _ASM_EXTABLE(1b, 4b)
+		     : ASM_CALL_CONSTRAINT, "=r"(value) : "r"(field) : "cc");
 	return value;
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1770622e9608..67028b0ff99a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -349,6 +349,14 @@ do {					\
 	pr_warn_ratelimited(fmt);	\
 } while (0)
 
+asmlinkage void vmread_error(unsigned long field, bool fault)
+{
+	if (fault)
+		kvm_spurious_fault();
+	else
+		vmx_insn_failed("kvm: vmread failed: field=%lx\n", field);
+}
+
 noinline void vmwrite_error(unsigned long field, unsigned long value)
 {
 	vmx_insn_failed("kvm: vmwrite failed: field=%lx val=%lx err=%d\n",
-- 
2.22.0



* [PATCH 4/4] KVM: x86: Drop ____kvm_handle_fault_on_reboot()
  2019-07-19 17:25 [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Sean Christopherson
                   ` (2 preceding siblings ...)
  2019-07-19 17:25 ` [PATCH 3/4] KVM: VMX: Add error handling to VMREAD helper Sean Christopherson
@ 2019-07-19 17:25 ` Sean Christopherson
  2019-07-19 18:01 ` [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Paolo Bonzini
  2019-07-19 18:01 ` Paolo Bonzini
  5 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-07-19 17:25 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář,
	Josh Poimboeuf, Peter Zijlstra
  Cc: kvm

Remove the variation of __kvm_handle_fault_on_reboot() that accepts a
post-fault cleanup instruction now that its previous sole user, VMREAD,
uses a different method for handling faults.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0cc5b611a113..fefc5c4b3cad 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1503,12 +1503,11 @@ enum {
  */
 asmlinkage void kvm_spurious_fault(void);
 
-#define ____kvm_handle_fault_on_reboot(insn, cleanup_insn)	\
+#define __kvm_handle_fault_on_reboot(insn)	\
 	"666: " insn "\n\t" \
 	"668: \n\t"                           \
 	".pushsection .fixup, \"ax\" \n" \
 	"667: \n\t" \
-	cleanup_insn "\n\t"		      \
 	"cmpb $0, kvm_rebooting \n\t"	      \
 	"jne 668b \n\t"      		      \
 	__ASM_SIZE(push) " $666b \n\t"	      \
@@ -1516,9 +1515,6 @@ asmlinkage void kvm_spurious_fault(void);
 	".popsection \n\t" \
 	_ASM_EXTABLE(666b, 667b)
 
-#define __kvm_handle_fault_on_reboot(insn)		\
-	____kvm_handle_fault_on_reboot(insn, "")
-
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end);
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
-- 
2.22.0



* Re: [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs
  2019-07-19 17:25 [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Sean Christopherson
                   ` (3 preceding siblings ...)
  2019-07-19 17:25 ` [PATCH 4/4] KVM: x86: Drop ____kvm_handle_fault_on_reboot() Sean Christopherson
@ 2019-07-19 18:01 ` Paolo Bonzini
  2019-07-19 18:27   ` Josh Poimboeuf
  2019-07-19 18:01 ` Paolo Bonzini
  5 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2019-07-19 18:01 UTC (permalink / raw)
  To: Sean Christopherson, Radim Krčmář,
	Josh Poimboeuf, Peter Zijlstra
  Cc: kvm

On 19/07/19 19:25, Sean Christopherson wrote:
> An in-flight patch[1] to make __kvm_handle_fault_on_reboot() play nice
> with objtool will add a JMP after most VMX instructions so that the reboot
> macro can use an actual CALL to kvm_spurious_fault() instead of a funky
> PUSH+JMP facsimile.
> 
> Rework the low level VMX instruction helpers to handle unexpected faults
> manually instead of relying on the "fault on reboot" macro.  By using
> asm-goto, most helpers can branch directly to an in-function call to
> kvm_spurious_fault(), which can then be optimized by compilers to reside
> out-of-line at the end of the function instead of inline as done by
> "fault on reboot".
> 
> The net impact relative to the current code base is more or less a nop
> when building with a compiler that supports __GCC_ASM_FLAG_OUTPUTS__.
> A bunch of code that was previously in .fixup gets moved into the slow
> paths of functions, but the fast paths are basically unchanged.
> 
> Without __GCC_ASM_FLAG_OUTPUTS__, manually coding the Jcc is a net
> positive as CC_SET() without compiler support almost always generates a
> SETcc+CMP+Jcc sequence, which is now replaced with a single Jcc.
> 
> A small bonus is that the Jcc instrs are hinted to predict that the VMX
> instr will be successful.
> 
> [1] https://lkml.kernel.org/r/64a9b64d127e87b6920a97afde8e96ea76f6524e.1563413318.git.jpoimboe@redhat.com
> 
> Sean Christopherson (4):
>   objtool: KVM: x86: Check kvm_rebooting in kvm_spurious_fault()
>   KVM: VMX: Optimize VMX instruction error and fault handling
>   KVM: VMX: Add error handling to VMREAD helper
>   KVM: x86: Drop ____kvm_handle_fault_on_reboot()
> 
>  arch/x86/include/asm/kvm_host.h |  6 +--
>  arch/x86/kvm/vmx/ops.h          | 93 ++++++++++++++++++++-------------
>  arch/x86/kvm/vmx/vmx.c          | 42 +++++++++++++++
>  arch/x86/kvm/x86.c              |  3 +-
>  tools/objtool/check.c           |  1 -
>  5 files changed, 102 insertions(+), 43 deletions(-)
> 

Sean, would you mind basing these on top of Josh's patches, so that
Peter can add them to his tree?

Paolo


* Re: [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs
  2019-07-19 17:25 [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Sean Christopherson
                   ` (4 preceding siblings ...)
  2019-07-19 18:01 ` [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Paolo Bonzini
@ 2019-07-19 18:01 ` Paolo Bonzini
  5 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2019-07-19 18:01 UTC (permalink / raw)
  To: Sean Christopherson, Radim Krčmář,
	Josh Poimboeuf, Peter Zijlstra
  Cc: kvm

On 19/07/19 19:25, Sean Christopherson wrote:
> An in-flight patch[1] to make __kvm_handle_fault_on_reboot() play nice
> with objtool will add a JMP after most VMX instructions so that the reboot
> macro can use an actual CALL to kvm_spurious_fault() instead of a funky
> PUSH+JMP facsimile.
> 
> Rework the low level VMX instruction helpers to handle unexpected faults
> manually instead of relying on the "fault on reboot" macro.  By using
> asm-goto, most helpers can branch directly to an in-function call to
> kvm_spurious_fault(), which can then be optimized by compilers to reside
> out-of-line at the end of the function instead of inline as done by
> "fault on reboot".
> 
> The net impact relative to the current code base is more or less a nop
> when building with a compiler that supports __GCC_ASM_FLAG_OUTPUTS__.
> A bunch of code that was previously in .fixup gets moved into the slow
> paths of functions, but the fast paths are basically unchanged.
> 
> Without __GCC_ASM_FLAG_OUTPUTS__, manually coding the Jcc is a net
> positive as CC_SET() without compiler support almost always generates a
> SETcc+CMP+Jcc sequence, which is now replaced with a single Jcc.
> 
> A small bonus is that the Jcc instrs are hinted to predict that the VMX
> instr will be successful.
> 
> [1] https://lkml.kernel.org/r/64a9b64d127e87b6920a97afde8e96ea76f6524e.1563413318.git.jpoimboe@redhat.com
> 
> Sean Christopherson (4):
>   objtool: KVM: x86: Check kvm_rebooting in kvm_spurious_fault()
>   KVM: VMX: Optimize VMX instruction error and fault handling
>   KVM: VMX: Add error handling to VMREAD helper
>   KVM: x86: Drop ____kvm_handle_fault_on_reboot()
> 
>  arch/x86/include/asm/kvm_host.h |  6 +--
>  arch/x86/kvm/vmx/ops.h          | 93 ++++++++++++++++++++-------------
>  arch/x86/kvm/vmx/vmx.c          | 42 +++++++++++++++
>  arch/x86/kvm/x86.c              |  3 +-
>  tools/objtool/check.c           |  1 -
>  5 files changed, 102 insertions(+), 43 deletions(-)
> 

Very nice series.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Paolo


* Re: [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs
  2019-07-19 18:01 ` [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Paolo Bonzini
@ 2019-07-19 18:27   ` Josh Poimboeuf
  0 siblings, 0 replies; 8+ messages in thread
From: Josh Poimboeuf @ 2019-07-19 18:27 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Radim Krčmář, Peter Zijlstra, kvm

On Fri, Jul 19, 2019 at 08:01:23PM +0200, Paolo Bonzini wrote:
> On 19/07/19 19:25, Sean Christopherson wrote:
> > An in-flight patch[1] to make __kvm_handle_fault_on_reboot() play nice
> > with objtool will add a JMP after most VMX instructions so that the reboot
> > macro can use an actual CALL to kvm_spurious_fault() instead of a funky
> > PUSH+JMP facsimile.
> > 
> > Rework the low level VMX instruction helpers to handle unexpected faults
> > manually instead of relying on the "fault on reboot" macro.  By using
> > asm-goto, most helpers can branch directly to an in-function call to
> > kvm_spurious_fault(), which can then be optimized by compilers to reside
> > out-of-line at the end of the function instead of inline as done by
> > "fault on reboot".
> > 
> > The net impact relative to the current code base is more or less a nop
> > when building with a compiler that supports __GCC_ASM_FLAG_OUTPUTS__.
> > A bunch of code that was previously in .fixup gets moved into the slow
> > paths of functions, but the fast paths are basically unchanged.
> > 
> > Without __GCC_ASM_FLAG_OUTPUTS__, manually coding the Jcc is a net
> > positive as CC_SET() without compiler support almost always generates a
> > SETcc+CMP+Jcc sequence, which is now replaced with a single Jcc.
> > 
> > A small bonus is that the Jcc instrs are hinted to predict that the VMX
> > instr will be successful.
> > 
> > [1] https://lkml.kernel.org/r/64a9b64d127e87b6920a97afde8e96ea76f6524e.1563413318.git.jpoimboe@redhat.com
> > 
> > Sean Christopherson (4):
> >   objtool: KVM: x86: Check kvm_rebooting in kvm_spurious_fault()
> >   KVM: VMX: Optimize VMX instruction error and fault handling
> >   KVM: VMX: Add error handling to VMREAD helper
> >   KVM: x86: Drop ____kvm_handle_fault_on_reboot()
> > 
> >  arch/x86/include/asm/kvm_host.h |  6 +--
> >  arch/x86/kvm/vmx/ops.h          | 93 ++++++++++++++++++++-------------
> >  arch/x86/kvm/vmx/vmx.c          | 42 +++++++++++++++
> >  arch/x86/kvm/x86.c              |  3 +-
> >  tools/objtool/check.c           |  1 -
> >  5 files changed, 102 insertions(+), 43 deletions(-)
> > 
> 
> Sean, would you mind basing these on top of Josh's patches, so that
> Peter can add them to his tree?

Sean, FYI, my patches are already in tip/master so these can be based on
that.  I'm guessing the commit IDs are presumably stable, so the commits
can be referenced instead of the lkml link.

-- 
Josh


end of thread, other threads:[~2019-07-19 18:27 UTC | newest]

Thread overview: 8+ messages
2019-07-19 17:25 [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Sean Christopherson
2019-07-19 17:25 ` [PATCH 1/4] objtool: KVM: x86: Check kvm_rebooting in kvm_spurious_fault() Sean Christopherson
2019-07-19 17:25 ` [PATCH 2/4] KVM: VMX: Optimize VMX instruction error and fault handling Sean Christopherson
2019-07-19 17:25 ` [PATCH 3/4] KVM: VMX: Add error handling to VMREAD helper Sean Christopherson
2019-07-19 17:25 ` [PATCH 4/4] KVM: x86: Drop ____kvm_handle_fault_on_reboot() Sean Christopherson
2019-07-19 18:01 ` [PATCH 0/4] KVM: VMX: Preemptively optimize VMX instrs Paolo Bonzini
2019-07-19 18:27   ` Josh Poimboeuf
2019-07-19 18:01 ` Paolo Bonzini
