linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code
@ 2020-02-20 20:43 Sean Christopherson
  2020-02-20 20:43 ` [PATCH 01/10] KVM: VMX: Use vpid_sync_context() directly when possible Sean Christopherson
                   ` (10 more replies)
  0 siblings, 11 replies; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

This series is technically x86-wide, but it only superficially affects
SVM; the motivation and primary touchpoints are all about VMX.

The goal of this series is ultimately to clean up __vmx_flush_tlb(),
which, for me, manages to be extremely confusing despite being only ten
lines of code.

The most confusing aspect of __vmx_flush_tlb() is that it is overloaded
for multiple uses:

 1) TLB flushes in response to a change in KVM's MMU

 2) TLB flushes during nested VM-Enter/VM-Exit when VPID is enabled

 3) Guest-scoped TLB flushes for paravirt TLB flushing

Handling (2) and (3) in the same flow as (1) is kludgy, because the rules
for (1) are quite different from the rules for (2) and (3).  They're all
squeezed into __vmx_flush_tlb() via the @invalidate_gpa param, which means
"invalidate gpa mappings", not "invalidate a specific gpa"; it took me
forever and a day to realize that.

To clean things up, handle (2) by directly calling vpid_sync_context()
instead of bouncing through __vmx_flush_tlb(), and handle (3) via a
dedicated kvm_x86_ops hook.  This allows for a less tricky implementation
of vmx_flush_tlb() for (1), and (hopefully) clarifies the rules for what
mappings must be invalidated when.
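For readers following along, the overloaded decision can be modeled as a
standalone predicate.  This is an illustrative sketch with a made-up
function name, not kernel code; only the condition mirrors the expression
quoted later in this thread:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative model of the INVEPT-vs-INVVPID decision buried in
 * __vmx_flush_tlb(): when true, KVM does INVEPT (flushes GPA->HPA
 * mappings); when false, it falls through to INVVPID.
 */
static bool flush_uses_invept(bool enable_ept, bool enable_vpid,
			      bool invalidate_gpa)
{
	return enable_ept && (invalidate_gpa || !enable_vpid);
}
```

Note how, with EPT and VPID both enabled, the outcome flips on
@invalidate_gpa alone; that is exactly the subtlety this series removes.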

Sean Christopherson (10):
  KVM: VMX: Use vpid_sync_context() directly when possible
  KVM: VMX: Move vpid_sync_vcpu_addr() down a few lines
  KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr()
  KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into
    vpid_sync_context()
  KVM: nVMX: Use vpid_sync_vcpu_addr() to emulate INVVPID with address
  KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook
  KVM: VMX: Clean up vmx_flush_tlb_gva()
  KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush()
  KVM: VMX: Drop @invalidate_gpa from __vmx_flush_tlb()
  KVM: VMX: Fold __vmx_flush_tlb() into vmx_flush_tlb()

 arch/x86/include/asm/kvm_host.h |  8 +++++++-
 arch/x86/kvm/mmu/mmu.c          |  2 +-
 arch/x86/kvm/svm.c              | 14 ++++++++++----
 arch/x86/kvm/vmx/nested.c       | 12 ++++--------
 arch/x86/kvm/vmx/ops.h          | 32 +++++++++-----------------------
 arch/x86/kvm/vmx/vmx.c          | 26 +++++++++++++++++---------
 arch/x86/kvm/vmx/vmx.h          | 19 ++++++++++---------
 arch/x86/kvm/x86.c              |  8 ++++----
 8 files changed, 62 insertions(+), 59 deletions(-)

-- 
2.24.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH 01/10] KVM: VMX: Use vpid_sync_context() directly when possible
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
  2020-02-21 13:17   ` Vitaly Kuznetsov
  2020-02-20 20:43 ` [PATCH 02/10] KVM: VMX: Move vpid_sync_vcpu_addr() down a few lines Sean Christopherson
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Use vpid_sync_context() directly for flows that run if and only if
enable_vpid=1, or more specifically, nested VMX flows that are gated by
vmx->nested.msrs.secondary_ctls_high.SECONDARY_EXEC_ENABLE_VPID being
set, which is allowed if and only if enable_vpid=1.  Because these flows
call __vmx_flush_tlb() with @invalidate_gpa=false, the if-statement that
decides between INVEPT and INVVPID will always go down the INVVPID path,
i.e. call vpid_sync_context() because
"enable_ept && (invalidate_gpa || !enable_vpid)" always evaluates false.

This helps pave the way toward removing @invalidate_gpa and @vpid from
__vmx_flush_tlb() and its callers.

No functional change intended.
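The claim can be sanity-checked with a trivial standalone model
(illustrative name, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Mirrors "enable_ept && (invalidate_gpa || !enable_vpid)".  With
 * @invalidate_gpa=false and enable_vpid=1, the condition is false
 * regardless of enable_ept, so these flows always reach the INVVPID
 * path, i.e. vpid_sync_context().
 */
static bool takes_invept_path(bool enable_ept, bool enable_vpid,
			      bool invalidate_gpa)
{
	return enable_ept && (invalidate_gpa || !enable_vpid);
}
```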

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 657c2eda357c..19ac4083667f 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2466,7 +2466,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 		if (nested_cpu_has_vpid(vmcs12) && nested_has_guest_tlb_tag(vcpu)) {
 			if (vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
 				vmx->nested.last_vpid = vmcs12->virtual_processor_id;
-				__vmx_flush_tlb(vcpu, nested_get_vpid02(vcpu), false);
+				vpid_sync_context(nested_get_vpid02(vcpu));
 			}
 		} else {
 			/*
@@ -5154,17 +5154,17 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
 			__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR,
 				vpid02, operand.gla);
 		} else
-			__vmx_flush_tlb(vcpu, vpid02, false);
+			vpid_sync_context(vpid02);
 		break;
 	case VMX_VPID_EXTENT_SINGLE_CONTEXT:
 	case VMX_VPID_EXTENT_SINGLE_NON_GLOBAL:
 		if (!operand.vpid)
 			return nested_vmx_failValid(vcpu,
 				VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
-		__vmx_flush_tlb(vcpu, vpid02, false);
+		vpid_sync_context(vpid02);
 		break;
 	case VMX_VPID_EXTENT_ALL_CONTEXT:
-		__vmx_flush_tlb(vcpu, vpid02, false);
+		vpid_sync_context(vpid02);
 		break;
 	default:
 		WARN_ON_ONCE(1);
-- 
2.24.1



* [PATCH 02/10] KVM: VMX: Move vpid_sync_vcpu_addr() down a few lines
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
  2020-02-20 20:43 ` [PATCH 01/10] KVM: VMX: Use vpid_sync_context() directly when possible Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
  2020-02-21 13:19   ` Vitaly Kuznetsov
  2020-02-20 20:43 ` [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr() Sean Christopherson
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Move vpid_sync_vcpu_addr() below vpid_sync_context() so that it can be
refactored in a future patch to call vpid_sync_context() directly when
the "individual address" INVVPID variant isn't supported.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/ops.h | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
index 45eaedee2ac0..a2b0689e65e3 100644
--- a/arch/x86/kvm/vmx/ops.h
+++ b/arch/x86/kvm/vmx/ops.h
@@ -253,19 +253,6 @@ static inline void __invept(unsigned long ext, u64 eptp, gpa_t gpa)
 	vmx_asm2(invept, "r"(ext), "m"(operand), ext, eptp, gpa);
 }
 
-static inline bool vpid_sync_vcpu_addr(int vpid, gva_t addr)
-{
-	if (vpid == 0)
-		return true;
-
-	if (cpu_has_vmx_invvpid_individual_addr()) {
-		__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR, vpid, addr);
-		return true;
-	}
-
-	return false;
-}
-
 static inline void vpid_sync_vcpu_single(int vpid)
 {
 	if (vpid == 0)
@@ -289,6 +276,19 @@ static inline void vpid_sync_context(int vpid)
 		vpid_sync_vcpu_global();
 }
 
+static inline bool vpid_sync_vcpu_addr(int vpid, gva_t addr)
+{
+	if (vpid == 0)
+		return true;
+
+	if (cpu_has_vmx_invvpid_individual_addr()) {
+		__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR, vpid, addr);
+		return true;
+	}
+
+	return false;
+}
+
 static inline void ept_sync_global(void)
 {
 	__invept(VMX_EPT_EXTENT_GLOBAL, 0, 0);
-- 
2.24.1



* [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr()
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
  2020-02-20 20:43 ` [PATCH 01/10] KVM: VMX: Use vpid_sync_context() directly when possible Sean Christopherson
  2020-02-20 20:43 ` [PATCH 02/10] KVM: VMX: Move vpid_sync_vcpu_addr() down a few lines Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
  2020-02-21 13:26   ` Vitaly Kuznetsov
  2020-02-20 20:43 ` [PATCH 04/10] KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into vpid_sync_context() Sean Christopherson
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Directly invoke vpid_sync_context() to do a global INVVPID when the
individual address variant is not supported instead of deferring such
behavior to the caller.  This allows for additional consolidation of
code as the logic is basically identical to the emulation of the
individual address variant in handle_invvpid().

No functional change intended.
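The resulting behavior can be sketched as a small decision function
(illustrative model, not the kernel implementation):

```c
#include <assert.h>

enum sync_action { SYNC_NONE, SYNC_INDIVIDUAL_ADDR, SYNC_CONTEXT };

/*
 * Models vpid_sync_vcpu_addr() after this patch: nop for VPID 0, use the
 * individual-address INVVPID when the CPU supports it, else fall back to
 * a context-wide sync.  has_individual_addr stands in for
 * cpu_has_vmx_invvpid_individual_addr().
 */
static enum sync_action vpid_sync_vcpu_addr_model(int vpid,
						  int has_individual_addr)
{
	if (vpid == 0)
		return SYNC_NONE;
	return has_individual_addr ? SYNC_INDIVIDUAL_ADDR : SYNC_CONTEXT;
}
```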

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/ops.h | 12 +++++-------
 arch/x86/kvm/vmx/vmx.c |  3 +--
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
index a2b0689e65e3..612df1bdb26b 100644
--- a/arch/x86/kvm/vmx/ops.h
+++ b/arch/x86/kvm/vmx/ops.h
@@ -276,17 +276,15 @@ static inline void vpid_sync_context(int vpid)
 		vpid_sync_vcpu_global();
 }
 
-static inline bool vpid_sync_vcpu_addr(int vpid, gva_t addr)
+static inline void vpid_sync_vcpu_addr(int vpid, gva_t addr)
 {
 	if (vpid == 0)
-		return true;
+		return;
 
-	if (cpu_has_vmx_invvpid_individual_addr()) {
+	if (cpu_has_vmx_invvpid_individual_addr())
 		__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR, vpid, addr);
-		return true;
-	}
-
-	return false;
+	else
+		vpid_sync_context(vpid);
 }
 
 static inline void ept_sync_global(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9a6664886f2e..349a6e054e0e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2826,8 +2826,7 @@ static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 {
 	int vpid = to_vmx(vcpu)->vpid;
 
-	if (!vpid_sync_vcpu_addr(vpid, addr))
-		vpid_sync_context(vpid);
+	vpid_sync_vcpu_addr(vpid, addr);
 
 	/*
 	 * If VPIDs are not supported or enabled, then the above is a no-op.
-- 
2.24.1



* [PATCH 04/10] KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into vpid_sync_context()
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
                   ` (2 preceding siblings ...)
  2020-02-20 20:43 ` [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr() Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
  2020-02-21 13:39   ` Vitaly Kuznetsov
  2020-02-20 20:43 ` [PATCH 05/10] KVM: nVMX: Use vpid_sync_vcpu_addr() to emulate INVVPID with address Sean Christopherson
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Fold vpid_sync_vcpu_global() and vpid_sync_vcpu_single() into their sole
caller.  KVM should always prefer the single variant, i.e. the only
reason to use the global variant is if the CPU doesn't support
invalidating a single VPID, which is the entire purpose of wrapping the
calls with vpid_sync_context().

No functional change intended.
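The folded helper reduces to a three-way choice; a standalone model of the
post-fold logic (illustrative, not the kernel code itself):

```c
#include <assert.h>

enum invvpid_extent {
	INVVPID_NONE,
	INVVPID_SINGLE_CONTEXT,
	INVVPID_ALL_CONTEXT,
};

/*
 * Models vpid_sync_context() after the fold: nop for VPID 0, prefer the
 * single-context extent, and fall back to all-context when the CPU lacks
 * single-context INVVPID.  has_single stands in for
 * cpu_has_vmx_invvpid_single().
 */
static enum invvpid_extent vpid_sync_context_model(int vpid, int has_single)
{
	if (vpid == 0)
		return INVVPID_NONE;
	return has_single ? INVVPID_SINGLE_CONTEXT : INVVPID_ALL_CONTEXT;
}
```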

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/ops.h | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
index 612df1bdb26b..eb6adc77a55d 100644
--- a/arch/x86/kvm/vmx/ops.h
+++ b/arch/x86/kvm/vmx/ops.h
@@ -253,29 +253,17 @@ static inline void __invept(unsigned long ext, u64 eptp, gpa_t gpa)
 	vmx_asm2(invept, "r"(ext), "m"(operand), ext, eptp, gpa);
 }
 
-static inline void vpid_sync_vcpu_single(int vpid)
+static inline void vpid_sync_context(int vpid)
 {
 	if (vpid == 0)
 		return;
 
 	if (cpu_has_vmx_invvpid_single())
 		__invvpid(VMX_VPID_EXTENT_SINGLE_CONTEXT, vpid, 0);
-}
-
-static inline void vpid_sync_vcpu_global(void)
-{
-	if (cpu_has_vmx_invvpid_global())
+	else
 		__invvpid(VMX_VPID_EXTENT_ALL_CONTEXT, 0, 0);
 }
 
-static inline void vpid_sync_context(int vpid)
-{
-	if (cpu_has_vmx_invvpid_single())
-		vpid_sync_vcpu_single(vpid);
-	else
-		vpid_sync_vcpu_global();
-}
-
 static inline void vpid_sync_vcpu_addr(int vpid, gva_t addr)
 {
 	if (vpid == 0)
-- 
2.24.1



* [PATCH 05/10] KVM: nVMX: Use vpid_sync_vcpu_addr() to emulate INVVPID with address
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
                   ` (3 preceding siblings ...)
  2020-02-20 20:43 ` [PATCH 04/10] KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into vpid_sync_context() Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
  2020-02-21 13:43   ` Vitaly Kuznetsov
  2020-02-20 20:43 ` [PATCH 06/10] KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook Sean Christopherson
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Use vpid_sync_vcpu_addr() to emulate the "individual address" variant of
INVVPID now that said function handles the fallback case of the (host)
CPU not supporting "individual address".

Note, the "vpid == 0" checks in the vpid_sync_*() helpers aren't
actually redundant with the "!operand.vpid" check in handle_invvpid(),
as the vpid passed to vpid_sync_vcpu_addr() is a KVM (host) controlled
value, i.e. vpid02 can be zero even if operand.vpid is non-zero.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 19ac4083667f..5a174be314e5 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -5150,11 +5150,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
 		    is_noncanonical_address(operand.gla, vcpu))
 			return nested_vmx_failValid(vcpu,
 				VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
-		if (cpu_has_vmx_invvpid_individual_addr()) {
-			__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR,
-				vpid02, operand.gla);
-		} else
-			vpid_sync_context(vpid02);
+		vpid_sync_vcpu_addr(vpid02, operand.gla);
 		break;
 	case VMX_VPID_EXTENT_SINGLE_CONTEXT:
 	case VMX_VPID_EXTENT_SINGLE_NON_GLOBAL:
-- 
2.24.1



* [PATCH 06/10] KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
                   ` (4 preceding siblings ...)
  2020-02-20 20:43 ` [PATCH 05/10] KVM: nVMX: Use vpid_sync_vcpu_addr() to emulate INVVPID with address Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
       [not found]   ` <87tv3krqta.fsf@vitty.brq.redhat.com>
  2020-02-21 17:32   ` Paolo Bonzini
  2020-02-20 20:43 ` [PATCH 07/10] KVM: VMX: Clean up vmx_flush_tlb_gva() Sean Christopherson
                   ` (4 subsequent siblings)
  10 siblings, 2 replies; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Add a dedicated hook to handle flushing TLB entries on behalf of the
guest, i.e. for a paravirtualized TLB flush, and use it directly instead
of bouncing through kvm_vcpu_flush_tlb().  Change the effective VMX
implementation to never do INVEPT, i.e. to always flush via INVVPID.
The INVEPT performed by __vmx_flush_tlb() when @invalidate_gpa=false and
enable_vpid=0 is unnecessary, as it will only flush GPA->HPA mappings;
GVA->GPA and GVA->HPA translations are flushed by VM-Enter when VPID is
disabled, and changes in the guest's page tables only affect GVA->*PA
mappings.

When EPT and VPID are enabled, doing INVVPID is not required (by Intel's
architecture) to invalidate GPA mappings, i.e. TLB entries that cache
GPA->HPA translations can live across INVVPID as GPA->HPA mappings are
associated with an EPTP, not a VPID.  The intent of @invalidate_gpa is
to inform vmx_flush_tlb() that it needs to "invalidate gpa mappings",
i.e. do INVEPT and not simply INVVPID.  Other than nested VPID handling,
which now calls vpid_sync_context() directly, the only scenario where
KVM can safely do INVVPID instead of INVEPT (when EPT is enabled) is if
KVM is flushing TLB entries from the guest's perspective, i.e. is
invalidating GLA->GPA mappings.

Adding a dedicated ->tlb_flush_guest() paves the way toward removing
@invalidate_gpa, which is a potentially dangerous control flag as its
meaning is not exactly crystal clear, even for those who are familiar
with the subtleties of what mappings Intel CPUs are/aren't allowed to
keep across various invalidation scenarios.
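The invalidation rules described above can be restated as a small model
(illustrative, not kernel code): INVVPID invalidates linear-address
translations (GVA->GPA and combined GVA->HPA) tagged with a VPID, while
GPA->HPA translations are associated with an EPTP and survive INVVPID.  A
flush done on the guest's behalf only needs the linear mappings gone, which
is why INVVPID suffices and INVEPT is unnecessary:

```c
#include <assert.h>
#include <stdbool.h>

enum mapping { GVA_TO_GPA, GVA_TO_HPA, GPA_TO_HPA };

/* INVVPID targets translations derived from linear addresses. */
static bool invvpid_flushes(enum mapping m)
{
	return m == GVA_TO_GPA || m == GVA_TO_HPA;
}

/* A guest-scoped (paravirt) flush only cares about linear mappings;
 * GPA->HPA is KVM's (EPT's) problem, invisible to the guest. */
static bool guest_flush_requires(enum mapping m)
{
	return m == GVA_TO_GPA || m == GVA_TO_HPA;
}
```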

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  6 ++++++
 arch/x86/kvm/svm.c              |  6 ++++++
 arch/x86/kvm/vmx/vmx.c          | 13 +++++++++++++
 arch/x86/kvm/x86.c              |  2 +-
 4 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4dffbc10d3f8..86aed64b9a88 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1108,6 +1108,12 @@ struct kvm_x86_ops {
 	 */
 	void (*tlb_flush_gva)(struct kvm_vcpu *vcpu, gva_t addr);
 
+	/*
+	 * Flush any TLB entries created by the guest.  Like tlb_flush_gva(),
+	 * does not need to flush GPA->HPA mappings.
+	 */
+	void (*tlb_flush_guest)(struct kvm_vcpu *vcpu);
+
 	void (*run)(struct kvm_vcpu *vcpu);
 	int (*handle_exit)(struct kvm_vcpu *vcpu,
 		enum exit_fastpath_completion exit_fastpath);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index a3e32d61d60c..e549811f51c6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5608,6 +5608,11 @@ static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)
 	invlpga(gva, svm->vmcb->control.asid);
 }
 
+static void svm_flush_tlb_guest(struct kvm_vcpu *vcpu)
+{
+	svm_flush_tlb(vcpu, true);
+}
+
 static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
 {
 }
@@ -7429,6 +7434,7 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
 
 	.tlb_flush = svm_flush_tlb,
 	.tlb_flush_gva = svm_flush_tlb_gva,
+	.tlb_flush_guest = svm_flush_tlb_guest,
 
 	.run = svm_vcpu_run,
 	.handle_exit = handle_exit,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 349a6e054e0e..5372a93e1727 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2835,6 +2835,18 @@ static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 	 */
 }
 
+static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * vpid_sync_context() is a nop if vmx->vpid==0, e.g. if enable_vpid==0
+	 * or a vpid couldn't be allocated for this vCPU.  VM-Enter and VM-Exit
+	 * are required to flush GVA->{G,H}PA mappings from the TLB if vpid is
+	 * disabled (VM-Enter with vpid enabled and vpid==0 is disallowed),
+	 * i.e. no explicit INVVPID is necessary.
+	 */
+	vpid_sync_context(to_vmx(vcpu)->vpid);
+}
+
 static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu)
 {
 	ulong cr0_guest_owned_bits = vcpu->arch.cr0_guest_owned_bits;
@@ -7779,6 +7791,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 
 	.tlb_flush = vmx_flush_tlb,
 	.tlb_flush_gva = vmx_flush_tlb_gva,
+	.tlb_flush_guest = vmx_flush_tlb_guest,
 
 	.run = vmx_vcpu_run,
 	.handle_exit = vmx_handle_exit,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fbabb2f06273..72f7ca4baa6d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2675,7 +2675,7 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
 	trace_kvm_pv_tlb_flush(vcpu->vcpu_id,
 		st->preempted & KVM_VCPU_FLUSH_TLB);
 	if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB)
-		kvm_vcpu_flush_tlb(vcpu, false);
+		kvm_x86_ops->tlb_flush_guest(vcpu);
 
 	vcpu->arch.st.preempted = 0;
 
-- 
2.24.1



* [PATCH 07/10] KVM: VMX: Clean up vmx_flush_tlb_gva()
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
                   ` (5 preceding siblings ...)
  2020-02-20 20:43 ` [PATCH 06/10] KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
  2020-02-21 13:54   ` Vitaly Kuznetsov
  2020-02-20 20:43 ` [PATCH 08/10] KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush() Sean Christopherson
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Refactor vmx_flush_tlb_gva() to remove a superfluous local variable and
clean up its comment, which is oddly located below the code it is
commenting.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5372a93e1727..906e9d9aa09e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2824,15 +2824,11 @@ static void exit_lmode(struct kvm_vcpu *vcpu)
 
 static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 {
-	int vpid = to_vmx(vcpu)->vpid;
-
-	vpid_sync_vcpu_addr(vpid, addr);
-
 	/*
-	 * If VPIDs are not supported or enabled, then the above is a no-op.
-	 * But we don't really need a TLB flush in that case anyway, because
-	 * each VM entry/exit includes an implicit flush when VPID is 0.
+	 * vpid_sync_vcpu_addr() is a nop if vmx->vpid==0, see the comment in
+	 * vmx_flush_tlb_guest() for an explanation of why this is ok.
 	 */
+	vpid_sync_vcpu_addr(to_vmx(vcpu)->vpid, addr);
 }
 
 static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
-- 
2.24.1



* [PATCH 08/10] KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush()
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
                   ` (6 preceding siblings ...)
  2020-02-20 20:43 ` [PATCH 07/10] KVM: VMX: Clean up vmx_flush_tlb_gva() Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
  2020-02-21 13:56   ` Vitaly Kuznetsov
  2020-02-20 20:43 ` [PATCH 09/10] KVM: VMX: Drop @invalidate_gpa from __vmx_flush_tlb() Sean Christopherson
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Drop @invalidate_gpa from ->tlb_flush() and kvm_vcpu_flush_tlb() now
that all callers pass %true for said param.

Note, vmx_flush_tlb() now unconditionally passes %true to
__vmx_flush_tlb(), the less straightforward VMX change will be handled
in a future patch.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          |  2 +-
 arch/x86/kvm/svm.c              | 10 +++++-----
 arch/x86/kvm/vmx/vmx.c          |  4 ++--
 arch/x86/kvm/vmx/vmx.h          |  4 ++--
 arch/x86/kvm/x86.c              |  6 +++---
 6 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 86aed64b9a88..2d5ef0081d50 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1095,7 +1095,7 @@ struct kvm_x86_ops {
 	unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
 	void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
 
-	void (*tlb_flush)(struct kvm_vcpu *vcpu, bool invalidate_gpa);
+	void (*tlb_flush)(struct kvm_vcpu *vcpu);
 	int  (*tlb_remote_flush)(struct kvm *kvm);
 	int  (*tlb_remote_flush_with_range)(struct kvm *kvm,
 			struct kvm_tlb_range *range);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7011a4e54866..7fefe58dd7ab 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5186,7 +5186,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 	if (r)
 		goto out;
 	kvm_mmu_load_cr3(vcpu);
-	kvm_x86_ops->tlb_flush(vcpu, true);
+	kvm_x86_ops->tlb_flush(vcpu);
 out:
 	return r;
 }
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index e549811f51c6..16d58ffc7aff 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -385,7 +385,7 @@ module_param(dump_invalid_vmcb, bool, 0644);
 static u8 rsm_ins_bytes[] = "\x0f\xaa";
 
 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
-static void svm_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa);
+static void svm_flush_tlb(struct kvm_vcpu *vcpu);
 static void svm_complete_interrupts(struct vcpu_svm *svm);
 static void svm_toggle_avic_for_irq_window(struct kvm_vcpu *vcpu, bool activate);
 static inline void avic_post_state_restore(struct kvm_vcpu *vcpu);
@@ -2634,7 +2634,7 @@ static int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 		return 1;
 
 	if (npt_enabled && ((old_cr4 ^ cr4) & X86_CR4_PGE))
-		svm_flush_tlb(vcpu, true);
+		svm_flush_tlb(vcpu);
 
 	vcpu->arch.cr4 = cr4;
 	if (!npt_enabled)
@@ -3588,7 +3588,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 	svm->nested.intercept_exceptions = nested_vmcb->control.intercept_exceptions;
 	svm->nested.intercept            = nested_vmcb->control.intercept;
 
-	svm_flush_tlb(&svm->vcpu, true);
+	svm_flush_tlb(&svm->vcpu);
 	svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK;
 	if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK)
 		svm->vcpu.arch.hflags |= HF_VINTR_MASK;
@@ -5591,7 +5591,7 @@ static int svm_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
 	return 0;
 }
 
-static void svm_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
+static void svm_flush_tlb(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
@@ -5610,7 +5610,7 @@ static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)
 
 static void svm_flush_tlb_guest(struct kvm_vcpu *vcpu)
 {
-	svm_flush_tlb(vcpu, true);
+	svm_flush_tlb(vcpu);
 }
 
 static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 906e9d9aa09e..8bb380d22dc2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6043,7 +6043,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
 		if (flexpriority_enabled) {
 			sec_exec_control |=
 				SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
-			vmx_flush_tlb(vcpu, true);
+			vmx_flush_tlb(vcpu);
 		}
 		break;
 	case LAPIC_MODE_X2APIC:
@@ -6061,7 +6061,7 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu, hpa_t hpa)
 {
 	if (!is_guest_mode(vcpu)) {
 		vmcs_write64(APIC_ACCESS_ADDR, hpa);
-		vmx_flush_tlb(vcpu, true);
+		vmx_flush_tlb(vcpu);
 	}
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 7f42cf3dcd70..6e588d238318 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -514,9 +514,9 @@ static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
 	}
 }
 
-static inline void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
+static inline void vmx_flush_tlb(struct kvm_vcpu *vcpu)
 {
-	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
+	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, true);
 }
 
 static inline void decache_tsc_multiplier(struct vcpu_vmx *vmx)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 72f7ca4baa6d..e26ffebe6f6e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2646,10 +2646,10 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu)
 	vcpu->arch.time = 0;
 }
 
-static void kvm_vcpu_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
+static void kvm_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
-	kvm_x86_ops->tlb_flush(vcpu, invalidate_gpa);
+	kvm_x86_ops->tlb_flush(vcpu);
 }
 
 static void record_steal_time(struct kvm_vcpu *vcpu)
@@ -8166,7 +8166,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_LOAD_CR3, vcpu))
 			kvm_mmu_load_cr3(vcpu);
 		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
-			kvm_vcpu_flush_tlb(vcpu, true);
+			kvm_vcpu_flush_tlb(vcpu);
 		if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
 			vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
 			r = 0;
-- 
2.24.1



* [PATCH 09/10] KVM: VMX: Drop @invalidate_gpa from __vmx_flush_tlb()
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
                   ` (7 preceding siblings ...)
  2020-02-20 20:43 ` [PATCH 08/10] KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush() Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
  2020-02-20 20:43 ` [PATCH 10/10] KVM: VMX: Fold __vmx_flush_tlb() into vmx_flush_tlb() Sean Christopherson
  2020-02-21 13:20 ` [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Paolo Bonzini
  10 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Drop @invalidate_gpa from __vmx_flush_tlb() now that its sole caller
unconditionally passes %true, i.e. "(invalidate_gpa || !enable_vpid)"
will always evaluate true.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/vmx.h | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 6e588d238318..6e0ca57cc41c 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -501,10 +501,9 @@ static inline struct vmcs *alloc_vmcs(bool shadow)
 
 u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa);
 
-static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
-				bool invalidate_gpa)
+static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid)
 {
-	if (enable_ept && (invalidate_gpa || !enable_vpid)) {
+	if (enable_ept) {
 		if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 			return;
 		ept_sync_context(construct_eptp(vcpu,
@@ -516,7 +515,7 @@ static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
 
 static inline void vmx_flush_tlb(struct kvm_vcpu *vcpu)
 {
-	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, true);
+	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid);
 }
 
 static inline void decache_tsc_multiplier(struct vcpu_vmx *vmx)
-- 
2.24.1



* [PATCH 10/10] KVM: VMX: Fold __vmx_flush_tlb() into vmx_flush_tlb()
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
                   ` (8 preceding siblings ...)
  2020-02-20 20:43 ` [PATCH 09/10] KVM: VMX: Drop @invalidate_gpa from __vmx_flush_tlb() Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
  2020-02-21 13:20 ` [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Paolo Bonzini
  10 siblings, 0 replies; 25+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

Fold __vmx_flush_tlb() into its sole caller, vmx_flush_tlb(), now that
all call sites that previously bounced through __vmx_flush_tlb() to
force the INVVPID path instead call vpid_sync_context() directly.

Opportunistically add a comment to explain why INVEPT is necessary when
EPT is enabled, even if VPID is disabled.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/vmx.h | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 6e0ca57cc41c..6204fa5897bb 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -501,23 +501,25 @@ static inline struct vmcs *alloc_vmcs(bool shadow)
 
 u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa);
 
-static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid)
+static inline void vmx_flush_tlb(struct kvm_vcpu *vcpu)
 {
+	/*
+	 * INVEPT must be issued when EPT is enabled, irrespective of VPID, as
+	 * the CPU is not required to invalidate GPA->HPA mappings on VM-Entry,
+	 * even if VPID is disabled.  GPA->HPA mappings are associated with the
+	 * root EPT structure and not any particular VPID (INVVPID is also not
+	 * required to invalidate GPA->HPA mappings).
+	 */
 	if (enable_ept) {
 		if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 			return;
 		ept_sync_context(construct_eptp(vcpu,
 						vcpu->arch.mmu->root_hpa));
 	} else {
-		vpid_sync_context(vpid);
+		vpid_sync_context(to_vmx(vcpu)->vpid);
 	}
 }
 
-static inline void vmx_flush_tlb(struct kvm_vcpu *vcpu)
-{
-	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid);
-}
-
 static inline void decache_tsc_multiplier(struct vcpu_vmx *vmx)
 {
 	vmx->current_tsc_ratio = vmx->vcpu.arch.tsc_scaling_ratio;
-- 
2.24.1



* Re: [PATCH 01/10] KVM: VMX: Use vpid_sync_context() directly when possible
  2020-02-20 20:43 ` [PATCH 01/10] KVM: VMX: Use vpid_sync_context() directly when possible Sean Christopherson
@ 2020-02-21 13:17   ` Vitaly Kuznetsov
  2020-02-21 15:36     ` Sean Christopherson
  0 siblings, 1 reply; 25+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 13:17 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Use vpid_sync_context() directly for flows that run if and only if
> enable_vpid=1, or more specifically, nested VMX flows that are gated by
> vmx->nested.msrs.secondary_ctls_high.SECONDARY_EXEC_ENABLE_VPID being
> set, which is allowed if and only if enable_vpid=1.  Because these flows
> call __vmx_flush_tlb() with @invalidate_gpa=false, the if-statement that
> decides between INVEPT and INVVPID will always go down the INVVPID path,
> i.e. call vpid_sync_context() because
> "enable_ept && (invalidate_gpa || !enable_vpid)" always evaluates false.
>
> This helps pave the way toward removing @invalidate_gpa and @vpid from
> __vmx_flush_tlb() and its callers.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx/nested.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 657c2eda357c..19ac4083667f 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -2466,7 +2466,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
>  		if (nested_cpu_has_vpid(vmcs12) && nested_has_guest_tlb_tag(vcpu)) {
>  			if (vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
>  				vmx->nested.last_vpid = vmcs12->virtual_processor_id;
> -				__vmx_flush_tlb(vcpu, nested_get_vpid02(vcpu), false);
> +				vpid_sync_context(nested_get_vpid02(vcpu));
>  			}
>  		} else {
>  			/*
> @@ -5154,17 +5154,17 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
>  			__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR,
>  				vpid02, operand.gla);
>  		} else
> -			__vmx_flush_tlb(vcpu, vpid02, false);
> +			vpid_sync_context(vpid02);

This is a pre-existing condition but coding style requires braces even
for single statements when they were used in another branch.

>  		break;
>  	case VMX_VPID_EXTENT_SINGLE_CONTEXT:
>  	case VMX_VPID_EXTENT_SINGLE_NON_GLOBAL:
>  		if (!operand.vpid)
>  			return nested_vmx_failValid(vcpu,
>  				VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
> -		__vmx_flush_tlb(vcpu, vpid02, false);
> +		vpid_sync_context(vpid02);
>  		break;
>  	case VMX_VPID_EXTENT_ALL_CONTEXT:
> -		__vmx_flush_tlb(vcpu, vpid02, false);
> +		vpid_sync_context(vpid02);
>  		break;
>  	default:
>  		WARN_ON_ONCE(1);

Seems to be no change indeed,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly



* Re: [PATCH 02/10] KVM: VMX: Move vpid_sync_vcpu_addr() down a few lines
  2020-02-20 20:43 ` [PATCH 02/10] KVM: VMX: Move vpid_sync_vcpu_addr() down a few lines Sean Christopherson
@ 2020-02-21 13:19   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 25+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 13:19 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Move vpid_sync_vcpu_addr() below vpid_sync_context() so that it can be
> refactored in a future patch to call vpid_sync_context() directly when
> the "individual address" INVVPID variant isn't supported.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx/ops.h | 26 +++++++++++++-------------
>  1 file changed, 13 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
> index 45eaedee2ac0..a2b0689e65e3 100644
> --- a/arch/x86/kvm/vmx/ops.h
> +++ b/arch/x86/kvm/vmx/ops.h
> @@ -253,19 +253,6 @@ static inline void __invept(unsigned long ext, u64 eptp, gpa_t gpa)
>  	vmx_asm2(invept, "r"(ext), "m"(operand), ext, eptp, gpa);
>  }
>  
> -static inline bool vpid_sync_vcpu_addr(int vpid, gva_t addr)
> -{
> -	if (vpid == 0)
> -		return true;
> -
> -	if (cpu_has_vmx_invvpid_individual_addr()) {
> -		__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR, vpid, addr);
> -		return true;
> -	}
> -
> -	return false;
> -}
> -
>  static inline void vpid_sync_vcpu_single(int vpid)
>  {
>  	if (vpid == 0)
> @@ -289,6 +276,19 @@ static inline void vpid_sync_context(int vpid)
>  		vpid_sync_vcpu_global();
>  }
>  
> +static inline bool vpid_sync_vcpu_addr(int vpid, gva_t addr)
> +{
> +	if (vpid == 0)
> +		return true;
> +
> +	if (cpu_has_vmx_invvpid_individual_addr()) {
> +		__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR, vpid, addr);
> +		return true;
> +	}
> +
> +	return false;
> +}
> +
>  static inline void ept_sync_global(void)
>  {
>  	__invept(VMX_EPT_EXTENT_GLOBAL, 0, 0);

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly



* Re: [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code
  2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
                   ` (9 preceding siblings ...)
  2020-02-20 20:43 ` [PATCH 10/10] KVM: VMX: Fold __vmx_flush_tlb() into vmx_flush_tlb() Sean Christopherson
@ 2020-02-21 13:20 ` Paolo Bonzini
  10 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-02-21 13:20 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 20/02/20 21:43, Sean Christopherson wrote:
> This series is technically x86 wide, but it only superficially affects
> SVM, the motivation and primary touchpoints are all about VMX.
> 
> The goal of this series is to ultimately clean up __vmx_flush_tlb(), which,
> for me, manages to be extremely confusing despite being only ten lines of
> code.
> 
> The most confusing aspect of __vmx_flush_tlb() is that it is overloaded
> for multiple uses:
> 
>  1) TLB flushes in response to a change in KVM's MMU
> 
>  2) TLB flushes during nested VM-Enter/VM-Exit when VPID is enabled
> 
>  3) Guest-scoped TLB flushes for paravirt TLB flushing
> 
> Handling (2) and (3) in the same flow as (1) is kludgy, because the rules
> for (1) are quite different than the rules for (2) and (3).  They're all
> squeezed into __vmx_flush_tlb() via the @invalidate_gpa param, which means
> "invalidate gpa mappings", not "invalidate a specific gpa"; it took me
> forever and a day to realize that.
> 
> To clean things up, handle (2) by directly calling vpid_sync_context()
> instead of bouncing through __vmx_flush_tlb(), and handle (3) via a
> dedicated kvm_x86_ops hook.  This allows for a less tricky implementation
> of vmx_flush_tlb() for (1), and (hopefully) clarifies the rules for what
> mappings must be invalidated when.
> 
> Sean Christopherson (10):
>   KVM: VMX: Use vpid_sync_context() directly when possible
>   KVM: VMX: Move vpid_sync_vcpu_addr() down a few lines
>   KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr()
>   KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into
>     vpid_sync_context()
>   KVM: nVMX: Use vpid_sync_vcpu_addr() to emulate INVVPID with address
>   KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook
>   KVM: VMX: Clean up vmx_flush_tlb_gva()
>   KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush()
>   KVM: VMX: Drop @invalidate_gpa from __vmx_flush_tlb()
>   KVM: VMX: Fold __vmx_flush_tlb() into vmx_flush_tlb()
> 
>  arch/x86/include/asm/kvm_host.h |  8 +++++++-
>  arch/x86/kvm/mmu/mmu.c          |  2 +-
>  arch/x86/kvm/svm.c              | 14 ++++++++++----
>  arch/x86/kvm/vmx/nested.c       | 12 ++++--------
>  arch/x86/kvm/vmx/ops.h          | 32 +++++++++-----------------------
>  arch/x86/kvm/vmx/vmx.c          | 26 +++++++++++++++++---------
>  arch/x86/kvm/vmx/vmx.h          | 19 ++++++++++---------
>  arch/x86/kvm/x86.c              |  8 ++++----
>  8 files changed, 62 insertions(+), 59 deletions(-)
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>



* Re: [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr()
  2020-02-20 20:43 ` [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr() Sean Christopherson
@ 2020-02-21 13:26   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 25+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 13:26 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Directly invoke vpid_sync_context() to do a global INVVPID when the
> individual address variant is not supported instead of deferring such
> behavior to the caller.  This allows for additional consolidation of
> code as the logic is basically identical to the emulation of the
> individual address variant in handle_invvpid().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx/ops.h | 12 +++++-------
>  arch/x86/kvm/vmx/vmx.c |  3 +--
>  2 files changed, 6 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
> index a2b0689e65e3..612df1bdb26b 100644
> --- a/arch/x86/kvm/vmx/ops.h
> +++ b/arch/x86/kvm/vmx/ops.h
> @@ -276,17 +276,15 @@ static inline void vpid_sync_context(int vpid)
>  		vpid_sync_vcpu_global();
>  }
>  
> -static inline bool vpid_sync_vcpu_addr(int vpid, gva_t addr)
> +static inline void vpid_sync_vcpu_addr(int vpid, gva_t addr)
>  {
>  	if (vpid == 0)
> -		return true;
> +		return;
>  
> -	if (cpu_has_vmx_invvpid_individual_addr()) {
> +	if (cpu_has_vmx_invvpid_individual_addr())
>  		__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR, vpid, addr);
> -		return true;
> -	}
> -
> -	return false;
> +	else
> +		vpid_sync_context(vpid);
>  }
>  
>  static inline void ept_sync_global(void)
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 9a6664886f2e..349a6e054e0e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2826,8 +2826,7 @@ static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
>  {
>  	int vpid = to_vmx(vcpu)->vpid;
>  
> -	if (!vpid_sync_vcpu_addr(vpid, addr))
> -		vpid_sync_context(vpid);
> +	vpid_sync_vcpu_addr(vpid, addr);
>  
>  	/*
>  	 * If VPIDs are not supported or enabled, then the above is a no-op.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly



* Re: [PATCH 04/10] KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into vpid_sync_context()
  2020-02-20 20:43 ` [PATCH 04/10] KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into vpid_sync_context() Sean Christopherson
@ 2020-02-21 13:39   ` Vitaly Kuznetsov
  2020-02-21 15:32     ` Sean Christopherson
  0 siblings, 1 reply; 25+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 13:39 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Fold vpid_sync_vcpu_global() and vpid_sync_vcpu_single() into their sole
> caller.  KVM should always prefer the single variant, i.e. the only
> reason to use the global variant is if the CPU doesn't support
> invalidating a single VPID, which is the entire purpose of wrapping the
> calls with vpid_sync_context().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx/ops.h | 16 ++--------------
>  1 file changed, 2 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
> index 612df1bdb26b..eb6adc77a55d 100644
> --- a/arch/x86/kvm/vmx/ops.h
> +++ b/arch/x86/kvm/vmx/ops.h
> @@ -253,29 +253,17 @@ static inline void __invept(unsigned long ext, u64 eptp, gpa_t gpa)
>  	vmx_asm2(invept, "r"(ext), "m"(operand), ext, eptp, gpa);
>  }
>  
> -static inline void vpid_sync_vcpu_single(int vpid)
> +static inline void vpid_sync_context(int vpid)
>  {
>  	if (vpid == 0)
>  		return;
>  
>  	if (cpu_has_vmx_invvpid_single())
>  		__invvpid(VMX_VPID_EXTENT_SINGLE_CONTEXT, vpid, 0);
> -}
> -
> -static inline void vpid_sync_vcpu_global(void)
> -{
> -	if (cpu_has_vmx_invvpid_global())
> +	else
>  		__invvpid(VMX_VPID_EXTENT_ALL_CONTEXT, 0, 0);
>  }
>  
> -static inline void vpid_sync_context(int vpid)
> -{
> -	if (cpu_has_vmx_invvpid_single())
> -		vpid_sync_vcpu_single(vpid);
> -	else
> -		vpid_sync_vcpu_global();
> -}
> -
>  static inline void vpid_sync_vcpu_addr(int vpid, gva_t addr)
>  {
>  	if (vpid == 0)

In the original code, only vpid_sync_vcpu_single() has the 'vpid == 0'
check; vpid_sync_vcpu_global() doesn't have it. So in the
hypothetical situation when cpu_has_vmx_invvpid_single() is false AND
we've e.g. exhausted our VPID space and allocate_vpid() returned zero,
the new code just won't do anything while the old one would've done
__invvpid(VMX_VPID_EXTENT_ALL_CONTEXT, 0, 0), right?

-- 
Vitaly



* Re: [PATCH 05/10] KVM: nVMX: Use vpid_sync_vcpu_addr() to emulate INVVPID with address
  2020-02-20 20:43 ` [PATCH 05/10] KVM: nVMX: Use vpid_sync_vcpu_addr() to emulate INVVPID with address Sean Christopherson
@ 2020-02-21 13:43   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 25+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 13:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Use vpid_sync_vcpu_addr() to emulate the "individual address" variant of
> INVVPID now that said function handles the fallback case of the (host)
> CPU not supporting "individual address".
>
> Note, the "vpid == 0" checks in the vpid_sync_*() helpers aren't
> actually redundant with the "!operand.vpid" check in handle_invvpid(),
> as the vpid passed to vpid_sync_vcpu_addr() is a KVM (host) controlled
> value, i.e. vpid02 can be zero even if operand.vpid is non-zero.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx/nested.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 19ac4083667f..5a174be314e5 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -5150,11 +5150,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
>  		    is_noncanonical_address(operand.gla, vcpu))
>  			return nested_vmx_failValid(vcpu,
>  				VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
> -		if (cpu_has_vmx_invvpid_individual_addr()) {
> -			__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR,
> -				vpid02, operand.gla);
> -		} else
> -			vpid_sync_context(vpid02);
> +		vpid_sync_vcpu_addr(vpid02, operand.gla);
>  		break;
>  	case VMX_VPID_EXTENT_SINGLE_CONTEXT:
>  	case VMX_VPID_EXTENT_SINGLE_NON_GLOBAL:

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly



* Re: [PATCH 07/10] KVM: VMX: Clean up vmx_flush_tlb_gva()
  2020-02-20 20:43 ` [PATCH 07/10] KVM: VMX: Clean up vmx_flush_tlb_gva() Sean Christopherson
@ 2020-02-21 13:54   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 25+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 13:54 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Refactor vmx_flush_tlb_gva() to remove a superfluous local variable and
> clean up its comment, which is oddly located below the code it is
> commenting.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/vmx/vmx.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 5372a93e1727..906e9d9aa09e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2824,15 +2824,11 @@ static void exit_lmode(struct kvm_vcpu *vcpu)
>  
>  static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
>  {
> -	int vpid = to_vmx(vcpu)->vpid;
> -
> -	vpid_sync_vcpu_addr(vpid, addr);
> -
>  	/*
> -	 * If VPIDs are not supported or enabled, then the above is a no-op.
> -	 * But we don't really need a TLB flush in that case anyway, because
> -	 * each VM entry/exit includes an implicit flush when VPID is 0.
> +	 * vpid_sync_vcpu_addr() is a nop if vmx->vpid==0, see the comment in
> +	 * vmx_flush_tlb_guest() for an explanation of why this is ok.

"OK" :-)

>  	 */
> +	vpid_sync_vcpu_addr(to_vmx(vcpu)->vpid, addr);
>  }
>  
>  static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly



* Re: [PATCH 08/10] KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush()
  2020-02-20 20:43 ` [PATCH 08/10] KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush() Sean Christopherson
@ 2020-02-21 13:56   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 25+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 13:56 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> Drop @invalidate_gpa from ->tlb_flush() and kvm_vcpu_flush_tlb() now
> that all callers pass %true for said param.
>
> Note, vmx_flush_tlb() now unconditionally passes %true to
> __vmx_flush_tlb(), the less straightforward VMX change will be handled
> in a future patch.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  2 +-
>  arch/x86/kvm/mmu/mmu.c          |  2 +-
>  arch/x86/kvm/svm.c              | 10 +++++-----
>  arch/x86/kvm/vmx/vmx.c          |  4 ++--
>  arch/x86/kvm/vmx/vmx.h          |  4 ++--
>  arch/x86/kvm/x86.c              |  6 +++---
>  6 files changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 86aed64b9a88..2d5ef0081d50 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1095,7 +1095,7 @@ struct kvm_x86_ops {
>  	unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
>  	void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
>  
> -	void (*tlb_flush)(struct kvm_vcpu *vcpu, bool invalidate_gpa);
> +	void (*tlb_flush)(struct kvm_vcpu *vcpu);
>  	int  (*tlb_remote_flush)(struct kvm *kvm);
>  	int  (*tlb_remote_flush_with_range)(struct kvm *kvm,
>  			struct kvm_tlb_range *range);
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 7011a4e54866..7fefe58dd7ab 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5186,7 +5186,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
>  	if (r)
>  		goto out;
>  	kvm_mmu_load_cr3(vcpu);
> -	kvm_x86_ops->tlb_flush(vcpu, true);
> +	kvm_x86_ops->tlb_flush(vcpu);
>  out:
>  	return r;
>  }
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index e549811f51c6..16d58ffc7aff 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -385,7 +385,7 @@ module_param(dump_invalid_vmcb, bool, 0644);
>  static u8 rsm_ins_bytes[] = "\x0f\xaa";
>  
>  static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
> -static void svm_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa);
> +static void svm_flush_tlb(struct kvm_vcpu *vcpu);
>  static void svm_complete_interrupts(struct vcpu_svm *svm);
>  static void svm_toggle_avic_for_irq_window(struct kvm_vcpu *vcpu, bool activate);
>  static inline void avic_post_state_restore(struct kvm_vcpu *vcpu);
> @@ -2634,7 +2634,7 @@ static int svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>  		return 1;
>  
>  	if (npt_enabled && ((old_cr4 ^ cr4) & X86_CR4_PGE))
> -		svm_flush_tlb(vcpu, true);
> +		svm_flush_tlb(vcpu);
>  
>  	vcpu->arch.cr4 = cr4;
>  	if (!npt_enabled)
> @@ -3588,7 +3588,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
>  	svm->nested.intercept_exceptions = nested_vmcb->control.intercept_exceptions;
>  	svm->nested.intercept            = nested_vmcb->control.intercept;
>  
> -	svm_flush_tlb(&svm->vcpu, true);
> +	svm_flush_tlb(&svm->vcpu);
>  	svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK;
>  	if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK)
>  		svm->vcpu.arch.hflags |= HF_VINTR_MASK;
> @@ -5591,7 +5591,7 @@ static int svm_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
>  	return 0;
>  }
>  
> -static void svm_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
> +static void svm_flush_tlb(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
>  
> @@ -5610,7 +5610,7 @@ static void svm_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t gva)
>  
>  static void svm_flush_tlb_guest(struct kvm_vcpu *vcpu)
>  {
> -	svm_flush_tlb(vcpu, true);
> +	svm_flush_tlb(vcpu);
>  }
>  
>  static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 906e9d9aa09e..8bb380d22dc2 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6043,7 +6043,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
>  		if (flexpriority_enabled) {
>  			sec_exec_control |=
>  				SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
> -			vmx_flush_tlb(vcpu, true);
> +			vmx_flush_tlb(vcpu);
>  		}
>  		break;
>  	case LAPIC_MODE_X2APIC:
> @@ -6061,7 +6061,7 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu, hpa_t hpa)
>  {
>  	if (!is_guest_mode(vcpu)) {
>  		vmcs_write64(APIC_ACCESS_ADDR, hpa);
> -		vmx_flush_tlb(vcpu, true);
> +		vmx_flush_tlb(vcpu);
>  	}
>  }
>  
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 7f42cf3dcd70..6e588d238318 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -514,9 +514,9 @@ static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
>  	}
>  }
>  
> -static inline void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
> +static inline void vmx_flush_tlb(struct kvm_vcpu *vcpu)
>  {
> -	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
> +	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, true);
>  }
>  
>  static inline void decache_tsc_multiplier(struct vcpu_vmx *vmx)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 72f7ca4baa6d..e26ffebe6f6e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -2646,10 +2646,10 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu)
>  	vcpu->arch.time = 0;
>  }
>  
> -static void kvm_vcpu_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
> +static void kvm_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
>  {
>  	++vcpu->stat.tlb_flush;
> -	kvm_x86_ops->tlb_flush(vcpu, invalidate_gpa);
> +	kvm_x86_ops->tlb_flush(vcpu);
>  }
>  
>  static void record_steal_time(struct kvm_vcpu *vcpu)
> @@ -8166,7 +8166,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>  		if (kvm_check_request(KVM_REQ_LOAD_CR3, vcpu))
>  			kvm_mmu_load_cr3(vcpu);
>  		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
> -			kvm_vcpu_flush_tlb(vcpu, true);
> +			kvm_vcpu_flush_tlb(vcpu);
>  		if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
>  			vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
>  			r = 0;

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly



* Re: [PATCH 04/10] KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into vpid_sync_context()
  2020-02-21 13:39   ` Vitaly Kuznetsov
@ 2020-02-21 15:32     ` Sean Christopherson
  2020-02-21 17:28       ` Paolo Bonzini
  0 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2020-02-21 15:32 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Fri, Feb 21, 2020 at 02:39:51PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Fold vpid_sync_vcpu_global() and vpid_sync_vcpu_single() into their sole
> > caller.  KVM should always prefer the single variant, i.e. the only
> > reason to use the global variant is if the CPU doesn't support
> > invalidating a single VPID, which is the entire purpose of wrapping the
> > calls with vpid_sync_context().
> >
> > No functional change intended.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/vmx/ops.h | 16 ++--------------
> >  1 file changed, 2 insertions(+), 14 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
> > index 612df1bdb26b..eb6adc77a55d 100644
> > --- a/arch/x86/kvm/vmx/ops.h
> > +++ b/arch/x86/kvm/vmx/ops.h
> > @@ -253,29 +253,17 @@ static inline void __invept(unsigned long ext, u64 eptp, gpa_t gpa)
> >  	vmx_asm2(invept, "r"(ext), "m"(operand), ext, eptp, gpa);
> >  }
> >  
> > -static inline void vpid_sync_vcpu_single(int vpid)
> > +static inline void vpid_sync_context(int vpid)
> >  {
> >  	if (vpid == 0)
> >  		return;
> >  
> >  	if (cpu_has_vmx_invvpid_single())
> >  		__invvpid(VMX_VPID_EXTENT_SINGLE_CONTEXT, vpid, 0);
> > -}
> > -
> > -static inline void vpid_sync_vcpu_global(void)
> > -{
> > -	if (cpu_has_vmx_invvpid_global())
> > +	else
> >  		__invvpid(VMX_VPID_EXTENT_ALL_CONTEXT, 0, 0);
> >  }
> >  
> > -static inline void vpid_sync_context(int vpid)
> > -{
> > -	if (cpu_has_vmx_invvpid_single())
> > -		vpid_sync_vcpu_single(vpid);
> > -	else
> > -		vpid_sync_vcpu_global();
> > -}
> > -
> >  static inline void vpid_sync_vcpu_addr(int vpid, gva_t addr)
> >  {
> >  	if (vpid == 0)
> 
> In the original code it's only vpid_sync_vcpu_single() which has 'vpid
> == 0' check, vpid_sync_vcpu_global() doesn't have it. So in the
> hypothetical situation when cpu_has_vmx_invvpid_single() is false AND
> we've e.g. exhausted our VPID space and allocate_vpid() returned zero,
> the new code just won't do anything while the old one would've done
> __invvpid(VMX_VPID_EXTENT_ALL_CONTEXT, 0, 0), right?

Ah rats.  I lost track of that functional change between making the commit
and writing the changelog.

I'll spin a v2 to rewrite the changelog, and maybe add the "vpid == 0"
check in a separate patch.


* Re: [PATCH 01/10] KVM: VMX: Use vpid_sync_context() directly when possible
  2020-02-21 13:17   ` Vitaly Kuznetsov
@ 2020-02-21 15:36     ` Sean Christopherson
  2020-02-21 17:26       ` Paolo Bonzini
  0 siblings, 1 reply; 25+ messages in thread
From: Sean Christopherson @ 2020-02-21 15:36 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On Fri, Feb 21, 2020 at 02:17:46PM +0100, Vitaly Kuznetsov wrote:
> Sean Christopherson <sean.j.christopherson@intel.com> writes:
> 
> > Use vpid_sync_context() directly for flows that run if and only if
> > enable_vpid=1, or more specifically, nested VMX flows that are gated by
> > vmx->nested.msrs.secondary_ctls_high.SECONDARY_EXEC_ENABLE_VPID being
> > set, which is allowed if and only if enable_vpid=1.  Because these flows
> > call __vmx_flush_tlb() with @invalidate_gpa=false, the if-statement that
> > decides between INVEPT and INVVPID will always go down the INVVPID path,
> > i.e. call vpid_sync_context() because
> > "enable_ept && (invalidate_gpa || !enable_vpid)" always evaluates false.
> >
> > This helps pave the way toward removing @invalidate_gpa and @vpid from
> > __vmx_flush_tlb() and its callers.
> >
> > No functional change intended.
> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kvm/vmx/nested.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> > index 657c2eda357c..19ac4083667f 100644
> > --- a/arch/x86/kvm/vmx/nested.c
> > +++ b/arch/x86/kvm/vmx/nested.c
> > @@ -2466,7 +2466,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
> >  		if (nested_cpu_has_vpid(vmcs12) && nested_has_guest_tlb_tag(vcpu)) {
> >  			if (vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
> >  				vmx->nested.last_vpid = vmcs12->virtual_processor_id;
> > -				__vmx_flush_tlb(vcpu, nested_get_vpid02(vcpu), false);
> > +				vpid_sync_context(nested_get_vpid02(vcpu));
> >  			}
> >  		} else {
> >  			/*
> > @@ -5154,17 +5154,17 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
> >  			__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR,
> >  				vpid02, operand.gla);
> >  		} else
> > -			__vmx_flush_tlb(vcpu, vpid02, false);
> > +			vpid_sync_context(vpid02);
> 
> This is a pre-existing issue, but the coding style requires braces even
> for single statements when braces are used in another branch.

I'll fix this in v2.

> >  		break;
> >  	case VMX_VPID_EXTENT_SINGLE_CONTEXT:
> >  	case VMX_VPID_EXTENT_SINGLE_NON_GLOBAL:
> >  		if (!operand.vpid)
> >  			return nested_vmx_failValid(vcpu,
> >  				VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
> > -		__vmx_flush_tlb(vcpu, vpid02, false);
> > +		vpid_sync_context(vpid02);
> >  		break;
> >  	case VMX_VPID_EXTENT_ALL_CONTEXT:
> > -		__vmx_flush_tlb(vcpu, vpid02, false);
> > +		vpid_sync_context(vpid02);
> >  		break;
> >  	default:
> >  		WARN_ON_ONCE(1);
> 
> Seems to be no change indeed,

Heh, that's about the same level of confidence I had :-)
 
> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> 
> -- 
> Vitaly
> 

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH 01/10] KVM: VMX: Use vpid_sync_context() directly when possible
  2020-02-21 15:36     ` Sean Christopherson
@ 2020-02-21 17:26       ` Paolo Bonzini
  0 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-02-21 17:26 UTC (permalink / raw)
  To: Sean Christopherson, Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 21/02/20 16:36, Sean Christopherson wrote:
>>>  				vmx->nested.last_vpid = vmcs12->virtual_processor_id;
>>> -				__vmx_flush_tlb(vcpu, nested_get_vpid02(vcpu), false);
>>> +				vpid_sync_context(nested_get_vpid02(vcpu));
>>>  			}
>>>  		} else {
>>>  			/*
>>> @@ -5154,17 +5154,17 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
>>>  			__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR,
>>>  				vpid02, operand.gla);
>>>  		} else
>>> -			__vmx_flush_tlb(vcpu, vpid02, false);
>>> +			vpid_sync_context(vpid02);
>> This is a pre-existing issue, but the coding style requires braces even
>> for single statements when braces are used in another branch.
> I'll fix this in v2.
> 

Can also remove the braces from the "then" branch.

Paolo



* Re: [PATCH 04/10] KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into vpid_sync_context()
  2020-02-21 15:32     ` Sean Christopherson
@ 2020-02-21 17:28       ` Paolo Bonzini
  0 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-02-21 17:28 UTC (permalink / raw)
  To: Sean Christopherson, Vitaly Kuznetsov
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 21/02/20 16:32, Sean Christopherson wrote:
>> In the original code it's only vpid_sync_vcpu_single() which has the
>> 'vpid == 0' check; vpid_sync_vcpu_global() doesn't have it. So in the
>> hypothetical situation where cpu_has_vmx_invvpid_single() is false AND
>> we've e.g. exhausted our VPID space and allocate_vpid() returned zero,
>> the new code just won't do anything, while the old one would've done
>> __invvpid(VMX_VPID_EXTENT_ALL_CONTEXT, 0, 0), right?
> Ah rats.  I lost track of that functional change between making the commit
> and writing the changelog.
> 
> I'll spin a v2 to rewrite the changelog, and maybe add the "vpid == 0"
> check in a separate patch.
> 

What about this:

diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
index eb6adc77a55d..2ab88984b22f 100644
--- a/arch/x86/kvm/vmx/ops.h
+++ b/arch/x86/kvm/vmx/ops.h
@@ -255,13 +255,10 @@ static inline void __invept(unsigned long ext, u64 eptp, gpa_t gpa)
 
 static inline void vpid_sync_context(int vpid)
 {
-	if (vpid == 0)
-		return;
-
-	if (cpu_has_vmx_invvpid_single())
-		__invvpid(VMX_VPID_EXTENT_SINGLE_CONTEXT, vpid, 0);
-	else
+	if (!cpu_has_vmx_invvpid_single())
 		__invvpid(VMX_VPID_EXTENT_ALL_CONTEXT, 0, 0);
+	else if (vpid != 0)
+		__invvpid(VMX_VPID_EXTENT_SINGLE_CONTEXT, vpid, 0);
 }
 
 static inline void vpid_sync_vcpu_addr(int vpid, gva_t addr)



* Re: [PATCH 06/10] KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook
       [not found]   ` <87tv3krqta.fsf@vitty.brq.redhat.com>
@ 2020-02-21 17:31     ` Paolo Bonzini
  0 siblings, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-02-21 17:31 UTC (permalink / raw)
  To: Vitaly Kuznetsov, Sean Christopherson
  Cc: Wanpeng Li, Jim Mattson, Joerg Roedel, kvm, linux-kernel

On 21/02/20 14:52, Vitaly Kuznetsov wrote:
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index fbabb2f06273..72f7ca4baa6d 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -2675,7 +2675,7 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
>>  	trace_kvm_pv_tlb_flush(vcpu->vcpu_id,
>>  		st->preempted & KVM_VCPU_FLUSH_TLB);
>>  	if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB)
>> -		kvm_vcpu_flush_tlb(vcpu, false);
>> +		kvm_x86_ops->tlb_flush_guest(vcpu);
>>  
>>  	vcpu->arch.st.preempted = 0;
> There is one additional place in hyperv.c where we do TLB flush on
> behalf of the guest, kvm_hv_flush_tlb(). Currently, it does
> KVM_REQ_TLB_FLUSH (resulting in kvm_x86_ops->tlb_flush()), do we need
> something like KVM_REQ_TLB_FLUSH_GUEST instead?

Yes, that would be better, since INVEPT does not flush linear mappings.
So when EPT and VPID are enabled, KVM_REQ_TLB_FLUSH would not flush the
guest's translations.

Paolo



* Re: [PATCH 06/10] KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook
  2020-02-20 20:43 ` [PATCH 06/10] KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook Sean Christopherson
       [not found]   ` <87tv3krqta.fsf@vitty.brq.redhat.com>
@ 2020-02-21 17:32   ` Paolo Bonzini
  1 sibling, 0 replies; 25+ messages in thread
From: Paolo Bonzini @ 2020-02-21 17:32 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel

On 20/02/20 21:43, Sean Christopherson wrote:
> Add a dedicated hook to handle flushing TLB entries on behalf of the
> guest, i.e. for a paravirtualized TLB flush, and use it directly instead
> of bouncing through kvm_vcpu_flush_tlb().  Change the effective VMX
> implementation to never do INVEPT, i.e. to always flush via INVVPID.
> The INVEPT performed by __vmx_flush_tlb() when @invalidate_gpa=false and
> enable_vpid=0 is unnecessary, as it will only flush GPA->HPA mappings;
> GVA->GPA and GVA->HPA translations are flushed by VM-Enter when VPID is
> disabled, and changes in the guest pages tables only affect GVA->*PA
> mappings.
> 
> When EPT and VPID are enabled, doing INVVPID is not required (by Intel's
> architecture) to invalidate GPA mappings, i.e. TLB entries that cache
> GPA->HPA translations can live across INVVPID as GPA->HPA mappings are
> associated with an EPTP, not a VPID.  The intent of @invalidate_gpa is
> to inform vmx_flush_tlb() that it needs to "invalidate gpa mappings",
> i.e. do INVEPT and not simply INVVPID.  Other than nested VPID handling,
> which now calls vpid_sync_context() directly, the only scenario where
> KVM can safely do INVVPID instead of INVEPT (when EPT is enabled) is if
> KVM is flushing TLB entries from the guest's perspective, i.e. is
> invalidating GLA->GPA mappings.

Since you need a v2, can you replace the mapping names with "linear",
"guest-physical" and "combined" as in the SDM?  It takes a little getting
used to, but it avoids three-letter-acronym soup.

Paolo



end of thread, other threads:[~2020-02-21 17:33 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
2020-02-20 20:43 ` [PATCH 01/10] KVM: VMX: Use vpid_sync_context() directly when possible Sean Christopherson
2020-02-21 13:17   ` Vitaly Kuznetsov
2020-02-21 15:36     ` Sean Christopherson
2020-02-21 17:26       ` Paolo Bonzini
2020-02-20 20:43 ` [PATCH 02/10] KVM: VMX: Move vpid_sync_vcpu_addr() down a few lines Sean Christopherson
2020-02-21 13:19   ` Vitaly Kuznetsov
2020-02-20 20:43 ` [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr() Sean Christopherson
2020-02-21 13:26   ` Vitaly Kuznetsov
2020-02-20 20:43 ` [PATCH 04/10] KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into vpid_sync_context() Sean Christopherson
2020-02-21 13:39   ` Vitaly Kuznetsov
2020-02-21 15:32     ` Sean Christopherson
2020-02-21 17:28       ` Paolo Bonzini
2020-02-20 20:43 ` [PATCH 05/10] KVM: nVMX: Use vpid_sync_vcpu_addr() to emulate INVVPID with address Sean Christopherson
2020-02-21 13:43   ` Vitaly Kuznetsov
2020-02-20 20:43 ` [PATCH 06/10] KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook Sean Christopherson
     [not found]   ` <87tv3krqta.fsf@vitty.brq.redhat.com>
2020-02-21 17:31     ` Paolo Bonzini
2020-02-21 17:32   ` Paolo Bonzini
2020-02-20 20:43 ` [PATCH 07/10] KVM: VMX: Clean up vmx_flush_tlb_gva() Sean Christopherson
2020-02-21 13:54   ` Vitaly Kuznetsov
2020-02-20 20:43 ` [PATCH 08/10] KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush() Sean Christopherson
2020-02-21 13:56   ` Vitaly Kuznetsov
2020-02-20 20:43 ` [PATCH 09/10] KVM: VMX: Drop @invalidate_gpa from __vmx_flush_tlb() Sean Christopherson
2020-02-20 20:43 ` [PATCH 10/10] KVM: VMX: Fold __vmx_flush_tlb() into vmx_flush_tlb() Sean Christopherson
2020-02-21 13:20 ` [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Paolo Bonzini
