* [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2
@ 2018-09-25 17:58 Vitaly Kuznetsov
  2018-09-25 17:58 ` [PATCH v2 1/9] x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU Vitaly Kuznetsov
                   ` (8 more replies)
  0 siblings, 9 replies; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

Changes since v1 [Sean Christopherson]:
- drop now unneeded local 'vmx' variable from vmx_free_vcpu_nested()
- Rename:
  kvm_mmu_scache -> kvm_mmu_extended_role
  mmu_role.scache -> mmu_role.ext
  mmu_role.base_role -> mmu_role.base
- Add BUILD_BUG_ONs checking MMU role union sizes.

Original description:

Currently, when we switch from L1 to L2 (VMX) we do the following:
- Re-initialize L1 MMU as shadow EPT MMU (nested_ept_init_mmu_context())
- Re-initialize 'nested' MMU (nested_vmx_load_cr3() -> init_kvm_nested_mmu())

When we switch back we do:
- Re-initialize L1 MMU (nested_vmx_load_cr3() -> init_kvm_tdp_mmu())

This seems to be sub-optimal: initializing the MMU is expensive (mainly
because of update_permission_bitmask(), update_pkru_bitmask(), ...). Try
solving the issue by splitting the L1-normal and L1-nested MMUs and checking
whether an MMU reset is really needed. This spares us about 1000 CPU cycles
on a nested vmexit.
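
To make the idea concrete before diving into the patches: the last two
patches compute the would-be MMU role from the current source data and skip
the expensive re-initialization when it matches what is already cached.
Below is a minimal, self-contained sketch of that pattern; mmu_role,
compute_role() and init_mmu() are illustrative stand-ins, not the actual KVM
structures or helpers.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the 64-bit kvm_mmu_role union introduced later in the series. */
struct mmu_role {
	uint64_t as_u64;
};

struct mmu_ctx {
	struct mmu_role cached_role;	/* role the MMU was last configured for */
	unsigned long reinit_count;	/* for demonstration only */
};

/* Pack the relevant source data (CR0/CR4 bits, EPT flags, ...) into a role. */
static struct mmu_role compute_role(bool cr0_wp, bool cr4_smep, bool execonly)
{
	struct mmu_role role = { 0 };

	role.as_u64 |= (uint64_t)cr0_wp << 0;
	role.as_u64 |= (uint64_t)cr4_smep << 1;
	role.as_u64 |= (uint64_t)execonly << 2;
	role.as_u64 |= 1ULL << 63;	/* 'valid' bit: an all-zero role never matches */
	return role;
}

static void reinit_mmu(struct mmu_ctx *mmu, struct mmu_role role)
{
	/* The expensive part: update_permission_bitmask(), update_pkru_bitmask(), ... */
	mmu->cached_role = role;
	mmu->reinit_count++;
}

/* Re-initialize only when the source data actually changed. */
static void init_mmu(struct mmu_ctx *mmu, bool cr0_wp, bool cr4_smep, bool execonly)
{
	struct mmu_role new_role = compute_role(cr0_wp, cr4_smep, execonly);

	if (new_role.as_u64 == mmu->cached_role.as_u64)
		return;
	reinit_mmu(mmu, new_role);
}

int main(void)
{
	struct mmu_ctx mmu = { { 0 }, 0 };

	init_mmu(&mmu, true, true, false);	/* first call: full init */
	init_mmu(&mmu, true, true, false);	/* unchanged: skipped */
	init_mmu(&mmu, true, false, false);	/* SMEP toggled: re-init */
	printf("re-inits: %lu\n", mmu.reinit_count);	/* prints 2 */
	return 0;
}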

A brief look at SVM makes me think it can be optimized in exactly the same
way. I'll do this in a separate series if nobody objects.

Paolo Bonzini (1):
  x86/kvm/mmu: get rid of redundant kvm_mmu_setup()

Vitaly Kuznetsov (8):
  x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU
  x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu()
  x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots()
  x86/kvm/mmu: introduce guest_mmu
  x86/kvm/mmu: make space for source data caching in struct kvm_mmu
  x86/kvm/nVMX: introduce source data cache for
    kvm_init_shadow_ept_mmu()
  x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed
  x86/kvm/mmu: check if MMU reconfiguration is needed in
    init_kvm_nested_mmu()

 arch/x86/include/asm/kvm_host.h |  44 +++-
 arch/x86/kvm/mmu.c              | 345 +++++++++++++++++++-------------
 arch/x86/kvm/mmu.h              |   8 +-
 arch/x86/kvm/mmu_audit.c        |  12 +-
 arch/x86/kvm/paging_tmpl.h      |  15 +-
 arch/x86/kvm/svm.c              |  14 +-
 arch/x86/kvm/vmx.c              |  46 +++--
 arch/x86/kvm/x86.c              |  22 +-
 8 files changed, 305 insertions(+), 201 deletions(-)

-- 
2.17.1


* [PATCH v2 1/9] x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU
  2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
@ 2018-09-25 17:58 ` Vitaly Kuznetsov
  2018-09-26 14:17   ` Sean Christopherson
  2018-09-25 17:58 ` [PATCH v2 2/9] x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu() Vitaly Kuznetsov
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

As a preparation for the full MMU split between L1 and L2, make
vcpu->arch.mmu a pointer to the currently used MMU. For now, this is always
vcpu->arch.root_mmu. No functional change.
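
To show what the pointer indirection is preparing for, here is a hedged
sketch with stand-in names (not the real kvm_vcpu_arch layout): once
vcpu->arch.mmu is a pointer, a later patch can switch between the L1 and
nested configurations with a single assignment instead of re-initializing an
embedded struct in place.

/* Illustrative sketch only -- struct layout and names are stand-ins. */
struct mmu {
	unsigned long root_hpa;
	/* ... cached bitmasks, paging callbacks, ... */
};

struct vcpu_arch {
	struct mmu *mmu;	/* currently active MMU (used to be an embedded struct) */
	struct mmu root_mmu;	/* L1, non-nested */
	struct mmu guest_mmu;	/* L1 while running L2, introduced in patch 4 */
};

static inline void switch_to_nested(struct vcpu_arch *arch)
{
	arch->mmu = &arch->guest_mmu;	/* no re-initialization of root_mmu needed */
}

static inline void switch_to_root(struct vcpu_arch *arch)
{
	arch->mmu = &arch->root_mmu;
}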

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |   5 +-
 arch/x86/kvm/mmu.c              | 165 ++++++++++++++++----------------
 arch/x86/kvm/mmu.h              |   8 +-
 arch/x86/kvm/mmu_audit.c        |  12 +--
 arch/x86/kvm/paging_tmpl.h      |  15 +--
 arch/x86/kvm/svm.c              |  14 +--
 arch/x86/kvm/vmx.c              |  15 +--
 arch/x86/kvm/x86.c              |  20 ++--
 8 files changed, 130 insertions(+), 124 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 09b2e3e2cf1b..95be30cfb33a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -534,7 +534,10 @@ struct kvm_vcpu_arch {
 	 * the paging mode of the l1 guest. This context is always used to
 	 * handle faults.
 	 */
-	struct kvm_mmu mmu;
+	struct kvm_mmu *mmu;
+
+	/* Non-nested MMU for L1 */
+	struct kvm_mmu root_mmu;
 
 	/*
 	 * Paging state of an L2 guest (used for nested npt)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d7e9bce6ff61..ca79ec0d8060 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2165,7 +2165,7 @@ static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			    struct list_head *invalid_list)
 {
 	if (sp->role.cr4_pae != !!is_pae(vcpu)
-	    || vcpu->arch.mmu.sync_page(vcpu, sp) == 0) {
+	    || vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
 		return false;
 	}
@@ -2359,14 +2359,14 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	int collisions = 0;
 	LIST_HEAD(invalid_list);
 
-	role = vcpu->arch.mmu.base_role;
+	role = vcpu->arch.mmu->base_role;
 	role.level = level;
 	role.direct = direct;
 	if (role.direct)
 		role.cr4_pae = 0;
 	role.access = access;
-	if (!vcpu->arch.mmu.direct_map
-	    && vcpu->arch.mmu.root_level <= PT32_ROOT_LEVEL) {
+	if (!vcpu->arch.mmu->direct_map
+	    && vcpu->arch.mmu->root_level <= PT32_ROOT_LEVEL) {
 		quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
 		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
 		role.quadrant = quadrant;
@@ -2441,11 +2441,11 @@ static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterato
 {
 	iterator->addr = addr;
 	iterator->shadow_addr = root;
-	iterator->level = vcpu->arch.mmu.shadow_root_level;
+	iterator->level = vcpu->arch.mmu->shadow_root_level;
 
 	if (iterator->level == PT64_ROOT_4LEVEL &&
-	    vcpu->arch.mmu.root_level < PT64_ROOT_4LEVEL &&
-	    !vcpu->arch.mmu.direct_map)
+	    vcpu->arch.mmu->root_level < PT64_ROOT_4LEVEL &&
+	    !vcpu->arch.mmu->direct_map)
 		--iterator->level;
 
 	if (iterator->level == PT32E_ROOT_LEVEL) {
@@ -2453,10 +2453,10 @@ static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterato
 		 * prev_root is currently only used for 64-bit hosts. So only
 		 * the active root_hpa is valid here.
 		 */
-		BUG_ON(root != vcpu->arch.mmu.root_hpa);
+		BUG_ON(root != vcpu->arch.mmu->root_hpa);
 
 		iterator->shadow_addr
-			= vcpu->arch.mmu.pae_root[(addr >> 30) & 3];
+			= vcpu->arch.mmu->pae_root[(addr >> 30) & 3];
 		iterator->shadow_addr &= PT64_BASE_ADDR_MASK;
 		--iterator->level;
 		if (!iterator->shadow_addr)
@@ -2467,7 +2467,7 @@ static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterato
 static void shadow_walk_init(struct kvm_shadow_walk_iterator *iterator,
 			     struct kvm_vcpu *vcpu, u64 addr)
 {
-	shadow_walk_init_using_root(iterator, vcpu, vcpu->arch.mmu.root_hpa,
+	shadow_walk_init_using_root(iterator, vcpu, vcpu->arch.mmu->root_hpa,
 				    addr);
 }
 
@@ -3079,7 +3079,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
 	int emulate = 0;
 	gfn_t pseudo_gfn;
 
-	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 		return 0;
 
 	for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
@@ -3294,7 +3294,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 	u64 spte = 0ull;
 	uint retry_count = 0;
 
-	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 		return false;
 
 	if (!page_fault_can_be_fast(error_code))
@@ -3468,7 +3468,7 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, ulong roots_to_free)
 {
 	int i;
 	LIST_HEAD(invalid_list);
-	struct kvm_mmu *mmu = &vcpu->arch.mmu;
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	bool free_active_root = roots_to_free & KVM_MMU_ROOT_CURRENT;
 
 	BUILD_BUG_ON(KVM_MMU_NUM_PREV_ROOTS >= BITS_PER_LONG);
@@ -3528,20 +3528,20 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 	struct kvm_mmu_page *sp;
 	unsigned i;
 
-	if (vcpu->arch.mmu.shadow_root_level >= PT64_ROOT_4LEVEL) {
+	if (vcpu->arch.mmu->shadow_root_level >= PT64_ROOT_4LEVEL) {
 		spin_lock(&vcpu->kvm->mmu_lock);
 		if(make_mmu_pages_available(vcpu) < 0) {
 			spin_unlock(&vcpu->kvm->mmu_lock);
 			return -ENOSPC;
 		}
 		sp = kvm_mmu_get_page(vcpu, 0, 0,
-				vcpu->arch.mmu.shadow_root_level, 1, ACC_ALL);
+				vcpu->arch.mmu->shadow_root_level, 1, ACC_ALL);
 		++sp->root_count;
 		spin_unlock(&vcpu->kvm->mmu_lock);
-		vcpu->arch.mmu.root_hpa = __pa(sp->spt);
-	} else if (vcpu->arch.mmu.shadow_root_level == PT32E_ROOT_LEVEL) {
+		vcpu->arch.mmu->root_hpa = __pa(sp->spt);
+	} else if (vcpu->arch.mmu->shadow_root_level == PT32E_ROOT_LEVEL) {
 		for (i = 0; i < 4; ++i) {
-			hpa_t root = vcpu->arch.mmu.pae_root[i];
+			hpa_t root = vcpu->arch.mmu->pae_root[i];
 
 			MMU_WARN_ON(VALID_PAGE(root));
 			spin_lock(&vcpu->kvm->mmu_lock);
@@ -3554,9 +3554,9 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 			root = __pa(sp->spt);
 			++sp->root_count;
 			spin_unlock(&vcpu->kvm->mmu_lock);
-			vcpu->arch.mmu.pae_root[i] = root | PT_PRESENT_MASK;
+			vcpu->arch.mmu->pae_root[i] = root | PT_PRESENT_MASK;
 		}
-		vcpu->arch.mmu.root_hpa = __pa(vcpu->arch.mmu.pae_root);
+		vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
 	} else
 		BUG();
 
@@ -3570,7 +3570,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	gfn_t root_gfn;
 	int i;
 
-	root_gfn = vcpu->arch.mmu.get_cr3(vcpu) >> PAGE_SHIFT;
+	root_gfn = vcpu->arch.mmu->get_cr3(vcpu) >> PAGE_SHIFT;
 
 	if (mmu_check_root(vcpu, root_gfn))
 		return 1;
@@ -3579,8 +3579,8 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	 * Do we shadow a long mode page table? If so we need to
 	 * write-protect the guests page table root.
 	 */
-	if (vcpu->arch.mmu.root_level >= PT64_ROOT_4LEVEL) {
-		hpa_t root = vcpu->arch.mmu.root_hpa;
+	if (vcpu->arch.mmu->root_level >= PT64_ROOT_4LEVEL) {
+		hpa_t root = vcpu->arch.mmu->root_hpa;
 
 		MMU_WARN_ON(VALID_PAGE(root));
 
@@ -3590,11 +3590,11 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 			return -ENOSPC;
 		}
 		sp = kvm_mmu_get_page(vcpu, root_gfn, 0,
-				vcpu->arch.mmu.shadow_root_level, 0, ACC_ALL);
+				vcpu->arch.mmu->shadow_root_level, 0, ACC_ALL);
 		root = __pa(sp->spt);
 		++sp->root_count;
 		spin_unlock(&vcpu->kvm->mmu_lock);
-		vcpu->arch.mmu.root_hpa = root;
+		vcpu->arch.mmu->root_hpa = root;
 		return 0;
 	}
 
@@ -3604,17 +3604,17 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	 * the shadow page table may be a PAE or a long mode page table.
 	 */
 	pm_mask = PT_PRESENT_MASK;
-	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_4LEVEL)
+	if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL)
 		pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
 
 	for (i = 0; i < 4; ++i) {
-		hpa_t root = vcpu->arch.mmu.pae_root[i];
+		hpa_t root = vcpu->arch.mmu->pae_root[i];
 
 		MMU_WARN_ON(VALID_PAGE(root));
-		if (vcpu->arch.mmu.root_level == PT32E_ROOT_LEVEL) {
-			pdptr = vcpu->arch.mmu.get_pdptr(vcpu, i);
+		if (vcpu->arch.mmu->root_level == PT32E_ROOT_LEVEL) {
+			pdptr = vcpu->arch.mmu->get_pdptr(vcpu, i);
 			if (!(pdptr & PT_PRESENT_MASK)) {
-				vcpu->arch.mmu.pae_root[i] = 0;
+				vcpu->arch.mmu->pae_root[i] = 0;
 				continue;
 			}
 			root_gfn = pdptr >> PAGE_SHIFT;
@@ -3632,16 +3632,16 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		++sp->root_count;
 		spin_unlock(&vcpu->kvm->mmu_lock);
 
-		vcpu->arch.mmu.pae_root[i] = root | pm_mask;
+		vcpu->arch.mmu->pae_root[i] = root | pm_mask;
 	}
-	vcpu->arch.mmu.root_hpa = __pa(vcpu->arch.mmu.pae_root);
+	vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
 
 	/*
 	 * If we shadow a 32 bit page table with a long mode page
 	 * table we enter this path.
 	 */
-	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_4LEVEL) {
-		if (vcpu->arch.mmu.lm_root == NULL) {
+	if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
+		if (vcpu->arch.mmu->lm_root == NULL) {
 			/*
 			 * The additional page necessary for this is only
 			 * allocated on demand.
@@ -3653,12 +3653,12 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 			if (lm_root == NULL)
 				return 1;
 
-			lm_root[0] = __pa(vcpu->arch.mmu.pae_root) | pm_mask;
+			lm_root[0] = __pa(vcpu->arch.mmu->pae_root) | pm_mask;
 
-			vcpu->arch.mmu.lm_root = lm_root;
+			vcpu->arch.mmu->lm_root = lm_root;
 		}
 
-		vcpu->arch.mmu.root_hpa = __pa(vcpu->arch.mmu.lm_root);
+		vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->lm_root);
 	}
 
 	return 0;
@@ -3666,7 +3666,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 
 static int mmu_alloc_roots(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.mmu.direct_map)
+	if (vcpu->arch.mmu->direct_map)
 		return mmu_alloc_direct_roots(vcpu);
 	else
 		return mmu_alloc_shadow_roots(vcpu);
@@ -3677,17 +3677,16 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 	int i;
 	struct kvm_mmu_page *sp;
 
-	if (vcpu->arch.mmu.direct_map)
+	if (vcpu->arch.mmu->direct_map)
 		return;
 
-	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 		return;
 
 	vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY);
 
-	if (vcpu->arch.mmu.root_level >= PT64_ROOT_4LEVEL) {
-		hpa_t root = vcpu->arch.mmu.root_hpa;
-
+	if (vcpu->arch.mmu->root_level >= PT64_ROOT_4LEVEL) {
+		hpa_t root = vcpu->arch.mmu->root_hpa;
 		sp = page_header(root);
 
 		/*
@@ -3718,7 +3717,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 	kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC);
 
 	for (i = 0; i < 4; ++i) {
-		hpa_t root = vcpu->arch.mmu.pae_root[i];
+		hpa_t root = vcpu->arch.mmu->pae_root[i];
 
 		if (root && VALID_PAGE(root)) {
 			root &= PT64_BASE_ADDR_MASK;
@@ -3792,7 +3791,7 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 	int root, leaf;
 	bool reserved = false;
 
-	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 		goto exit;
 
 	walk_shadow_page_lockless_begin(vcpu);
@@ -3809,7 +3808,7 @@ walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
 		if (!is_shadow_present_pte(spte))
 			break;
 
-		reserved |= is_shadow_zero_bits_set(&vcpu->arch.mmu, spte,
+		reserved |= is_shadow_zero_bits_set(vcpu->arch.mmu, spte,
 						    iterator.level);
 	}
 
@@ -3888,7 +3887,7 @@ static void shadow_page_table_clear_flood(struct kvm_vcpu *vcpu, gva_t addr)
 	struct kvm_shadow_walk_iterator iterator;
 	u64 spte;
 
-	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 		return;
 
 	walk_shadow_page_lockless_begin(vcpu);
@@ -3915,7 +3914,7 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, gva_t gva,
 	if (r)
 		return r;
 
-	MMU_WARN_ON(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
+	MMU_WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa));
 
 
 	return nonpaging_map(vcpu, gva & PAGE_MASK,
@@ -3928,8 +3927,8 @@ static int kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn)
 
 	arch.token = (vcpu->arch.apf.id++ << 12) | vcpu->vcpu_id;
 	arch.gfn = gfn;
-	arch.direct_map = vcpu->arch.mmu.direct_map;
-	arch.cr3 = vcpu->arch.mmu.get_cr3(vcpu);
+	arch.direct_map = vcpu->arch.mmu->direct_map;
+	arch.cr3 = vcpu->arch.mmu->get_cr3(vcpu);
 
 	return kvm_setup_async_pf(vcpu, gva, kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
 }
@@ -4035,7 +4034,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 	int write = error_code & PFERR_WRITE_MASK;
 	bool map_writable;
 
-	MMU_WARN_ON(!VALID_PAGE(vcpu->arch.mmu.root_hpa));
+	MMU_WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa));
 
 	if (page_fault_handle_page_track(vcpu, error_code, gfn))
 		return RET_PF_EMULATE;
@@ -4111,7 +4110,7 @@ static bool cached_root_available(struct kvm_vcpu *vcpu, gpa_t new_cr3,
 {
 	uint i;
 	struct kvm_mmu_root_info root;
-	struct kvm_mmu *mmu = &vcpu->arch.mmu;
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
 
 	root.cr3 = mmu->get_cr3(vcpu);
 	root.hpa = mmu->root_hpa;
@@ -4134,7 +4133,7 @@ static bool fast_cr3_switch(struct kvm_vcpu *vcpu, gpa_t new_cr3,
 			    union kvm_mmu_page_role new_role,
 			    bool skip_tlb_flush)
 {
-	struct kvm_mmu *mmu = &vcpu->arch.mmu;
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
 
 	/*
 	 * For now, limit the fast switch to 64-bit hosts+VMs in order to avoid
@@ -4203,7 +4202,7 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
 static void inject_page_fault(struct kvm_vcpu *vcpu,
 			      struct x86_exception *fault)
 {
-	vcpu->arch.mmu.inject_page_fault(vcpu, fault);
+	vcpu->arch.mmu->inject_page_fault(vcpu, fault);
 }
 
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
@@ -4724,7 +4723,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu)
 
 static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 {
-	struct kvm_mmu *context = &vcpu->arch.mmu;
+	struct kvm_mmu *context = vcpu->arch.mmu;
 
 	context->base_role.word = mmu_base_role_mask.word &
 				  kvm_calc_tdp_mmu_root_page_role(vcpu).word;
@@ -4796,7 +4795,7 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu)
 
 void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 {
-	struct kvm_mmu *context = &vcpu->arch.mmu;
+	struct kvm_mmu *context = vcpu->arch.mmu;
 
 	if (!is_paging(vcpu))
 		nonpaging_init_context(vcpu, context);
@@ -4816,7 +4815,7 @@ EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
 static union kvm_mmu_page_role
 kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
 {
-	union kvm_mmu_page_role role = vcpu->arch.mmu.base_role;
+	union kvm_mmu_page_role role = vcpu->arch.mmu->base_role;
 
 	role.level = PT64_ROOT_4LEVEL;
 	role.direct = false;
@@ -4830,7 +4829,7 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
 void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 			     bool accessed_dirty, gpa_t new_eptp)
 {
-	struct kvm_mmu *context = &vcpu->arch.mmu;
+	struct kvm_mmu *context = vcpu->arch.mmu;
 	union kvm_mmu_page_role root_page_role =
 		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty);
 
@@ -4857,7 +4856,7 @@ EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
 
 static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
 {
-	struct kvm_mmu *context = &vcpu->arch.mmu;
+	struct kvm_mmu *context = vcpu->arch.mmu;
 
 	kvm_init_shadow_mmu(vcpu);
 	context->set_cr3           = kvm_x86_ops->set_cr3;
@@ -4875,7 +4874,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 	g_context->inject_page_fault = kvm_inject_page_fault;
 
 	/*
-	 * Note that arch.mmu.gva_to_gpa translates l2_gpa to l1_gpa using
+	 * Note that arch.mmu->gva_to_gpa translates l2_gpa to l1_gpa using
 	 * L1's nested page tables (e.g. EPT12). The nested translation
 	 * of l2_gva to l1_gpa is done by arch.nested_mmu.gva_to_gpa using
 	 * L2's page tables as the first level of translation and L1's
@@ -4914,10 +4913,10 @@ void kvm_init_mmu(struct kvm_vcpu *vcpu, bool reset_roots)
 	if (reset_roots) {
 		uint i;
 
-		vcpu->arch.mmu.root_hpa = INVALID_PAGE;
+		vcpu->arch.mmu->root_hpa = INVALID_PAGE;
 
 		for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
-			vcpu->arch.mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
+			vcpu->arch.mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
 	}
 
 	if (mmu_is_nested(vcpu))
@@ -4966,7 +4965,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_load);
 void kvm_mmu_unload(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_free_roots(vcpu, KVM_MMU_ROOTS_ALL);
-	WARN_ON(VALID_PAGE(vcpu->arch.mmu.root_hpa));
+	WARN_ON(VALID_PAGE(vcpu->arch.mmu->root_hpa));
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_unload);
 
@@ -4980,7 +4979,7 @@ static void mmu_pte_write_new_pte(struct kvm_vcpu *vcpu,
         }
 
 	++vcpu->kvm->stat.mmu_pte_updated;
-	vcpu->arch.mmu.update_pte(vcpu, sp, spte, new);
+	vcpu->arch.mmu->update_pte(vcpu, sp, spte, new);
 }
 
 static bool need_remote_flush(u64 old, u64 new)
@@ -5160,7 +5159,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 			entry = *spte;
 			mmu_page_zap_pte(vcpu->kvm, sp, spte);
 			if (gentry &&
-			      !((sp->role.word ^ vcpu->arch.mmu.base_role.word)
+			      !((sp->role.word ^ vcpu->arch.mmu->base_role.word)
 			      & mmu_base_role_mask.word) && rmap_can_add(vcpu))
 				mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
 			if (need_remote_flush(entry, *spte))
@@ -5178,7 +5177,7 @@ int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
 	gpa_t gpa;
 	int r;
 
-	if (vcpu->arch.mmu.direct_map)
+	if (vcpu->arch.mmu->direct_map)
 		return 0;
 
 	gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, NULL);
@@ -5214,10 +5213,10 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 {
 	int r, emulation_type = 0;
 	enum emulation_result er;
-	bool direct = vcpu->arch.mmu.direct_map;
+	bool direct = vcpu->arch.mmu->direct_map;
 
 	/* With shadow page tables, fault_address contains a GVA or nGPA.  */
-	if (vcpu->arch.mmu.direct_map) {
+	if (vcpu->arch.mmu->direct_map) {
 		vcpu->arch.gpa_available = true;
 		vcpu->arch.gpa_val = cr2;
 	}
@@ -5230,8 +5229,9 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 	}
 
 	if (r == RET_PF_INVALID) {
-		r = vcpu->arch.mmu.page_fault(vcpu, cr2, lower_32_bits(error_code),
-					      false);
+		r = vcpu->arch.mmu->page_fault(vcpu, cr2,
+					       lower_32_bits(error_code),
+					       false);
 		WARN_ON(r == RET_PF_INVALID);
 	}
 
@@ -5247,7 +5247,7 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 	 * paging in both guests. If true, we simply unprotect the page
 	 * and resume the guest.
 	 */
-	if (vcpu->arch.mmu.direct_map &&
+	if (vcpu->arch.mmu->direct_map &&
 	    (error_code & PFERR_NESTED_GUEST_PAGE) == PFERR_NESTED_GUEST_PAGE) {
 		kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2));
 		return 1;
@@ -5295,7 +5295,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_page_fault);
 
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
-	struct kvm_mmu *mmu = &vcpu->arch.mmu;
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	int i;
 
 	/* INVLPG on a * non-canonical address is a NOP according to the SDM.  */
@@ -5326,7 +5326,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
 
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
 {
-	struct kvm_mmu *mmu = &vcpu->arch.mmu;
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	bool tlb_flush = false;
 	uint i;
 
@@ -5370,8 +5370,8 @@ EXPORT_SYMBOL_GPL(kvm_disable_tdp);
 
 static void free_mmu_pages(struct kvm_vcpu *vcpu)
 {
-	free_page((unsigned long)vcpu->arch.mmu.pae_root);
-	free_page((unsigned long)vcpu->arch.mmu.lm_root);
+	free_page((unsigned long)vcpu->arch.mmu->pae_root);
+	free_page((unsigned long)vcpu->arch.mmu->lm_root);
 }
 
 static int alloc_mmu_pages(struct kvm_vcpu *vcpu)
@@ -5391,9 +5391,9 @@ static int alloc_mmu_pages(struct kvm_vcpu *vcpu)
 	if (!page)
 		return -ENOMEM;
 
-	vcpu->arch.mmu.pae_root = page_address(page);
+	vcpu->arch.mmu->pae_root = page_address(page);
 	for (i = 0; i < 4; ++i)
-		vcpu->arch.mmu.pae_root[i] = INVALID_PAGE;
+		vcpu->arch.mmu->pae_root[i] = INVALID_PAGE;
 
 	return 0;
 }
@@ -5402,20 +5402,21 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 {
 	uint i;
 
-	vcpu->arch.walk_mmu = &vcpu->arch.mmu;
-	vcpu->arch.mmu.root_hpa = INVALID_PAGE;
-	vcpu->arch.mmu.translate_gpa = translate_gpa;
+	vcpu->arch.mmu = &vcpu->arch.root_mmu;
+	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
+	vcpu->arch.root_mmu.root_hpa = INVALID_PAGE;
+	vcpu->arch.root_mmu.translate_gpa = translate_gpa;
 	vcpu->arch.nested_mmu.translate_gpa = translate_nested_gpa;
 
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
-		vcpu->arch.mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
+		vcpu->arch.root_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
 
 	return alloc_mmu_pages(vcpu);
 }
 
 void kvm_mmu_setup(struct kvm_vcpu *vcpu)
 {
-	MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu.root_hpa));
+	MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->root_hpa));
 
 	/*
 	 * kvm_mmu_setup() is called only on vCPU initialization.  
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 1fab69c0b2f3..f602b26140a3 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -80,7 +80,7 @@ static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
 
 static inline int kvm_mmu_reload(struct kvm_vcpu *vcpu)
 {
-	if (likely(vcpu->arch.mmu.root_hpa != INVALID_PAGE))
+	if (likely(vcpu->arch.mmu->root_hpa != INVALID_PAGE))
 		return 0;
 
 	return kvm_mmu_load(vcpu);
@@ -102,9 +102,9 @@ static inline unsigned long kvm_get_active_pcid(struct kvm_vcpu *vcpu)
 
 static inline void kvm_mmu_load_cr3(struct kvm_vcpu *vcpu)
 {
-	if (VALID_PAGE(vcpu->arch.mmu.root_hpa))
-		vcpu->arch.mmu.set_cr3(vcpu, vcpu->arch.mmu.root_hpa |
-					     kvm_get_active_pcid(vcpu));
+	if (VALID_PAGE(vcpu->arch.mmu->root_hpa))
+		vcpu->arch.mmu->set_cr3(vcpu, vcpu->arch.mmu->root_hpa |
+					      kvm_get_active_pcid(vcpu));
 }
 
 /*
diff --git a/arch/x86/kvm/mmu_audit.c b/arch/x86/kvm/mmu_audit.c
index 1272861e77b9..abac7e208853 100644
--- a/arch/x86/kvm/mmu_audit.c
+++ b/arch/x86/kvm/mmu_audit.c
@@ -59,19 +59,19 @@ static void mmu_spte_walk(struct kvm_vcpu *vcpu, inspect_spte_fn fn)
 	int i;
 	struct kvm_mmu_page *sp;
 
-	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 		return;
 
-	if (vcpu->arch.mmu.root_level >= PT64_ROOT_4LEVEL) {
-		hpa_t root = vcpu->arch.mmu.root_hpa;
+	if (vcpu->arch.mmu->root_level >= PT64_ROOT_4LEVEL) {
+		hpa_t root = vcpu->arch.mmu->root_hpa;
 
 		sp = page_header(root);
-		__mmu_spte_walk(vcpu, sp, fn, vcpu->arch.mmu.root_level);
+		__mmu_spte_walk(vcpu, sp, fn, vcpu->arch.mmu->root_level);
 		return;
 	}
 
 	for (i = 0; i < 4; ++i) {
-		hpa_t root = vcpu->arch.mmu.pae_root[i];
+		hpa_t root = vcpu->arch.mmu->pae_root[i];
 
 		if (root && VALID_PAGE(root)) {
 			root &= PT64_BASE_ADDR_MASK;
@@ -122,7 +122,7 @@ static void audit_mappings(struct kvm_vcpu *vcpu, u64 *sptep, int level)
 	hpa =  pfn << PAGE_SHIFT;
 	if ((*sptep & PT64_BASE_ADDR_MASK) != hpa)
 		audit_printk(vcpu->kvm, "levels %d pfn %llx hpa %llx "
-			     "ent %llxn", vcpu->arch.mmu.root_level, pfn,
+			     "ent %llxn", vcpu->arch.mmu->root_level, pfn,
 			     hpa, *sptep);
 }
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 14ffd973df54..7cf2185b7eb5 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -158,14 +158,15 @@ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
 				  struct kvm_mmu_page *sp, u64 *spte,
 				  u64 gpte)
 {
-	if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
+	if (is_rsvd_bits_set(vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
 		goto no_present;
 
 	if (!FNAME(is_present_gpte)(gpte))
 		goto no_present;
 
 	/* if accessed bit is not supported prefetch non accessed gpte */
-	if (PT_HAVE_ACCESSED_DIRTY(&vcpu->arch.mmu) && !(gpte & PT_GUEST_ACCESSED_MASK))
+	if (PT_HAVE_ACCESSED_DIRTY(vcpu->arch.mmu) &&
+	    !(gpte & PT_GUEST_ACCESSED_MASK))
 		goto no_present;
 
 	return false;
@@ -480,7 +481,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 static int FNAME(walk_addr)(struct guest_walker *walker,
 			    struct kvm_vcpu *vcpu, gva_t addr, u32 access)
 {
-	return FNAME(walk_addr_generic)(walker, vcpu, &vcpu->arch.mmu, addr,
+	return FNAME(walk_addr_generic)(walker, vcpu, vcpu->arch.mmu, addr,
 					access);
 }
 
@@ -509,7 +510,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 
 	gfn = gpte_to_gfn(gpte);
 	pte_access = sp->role.access & FNAME(gpte_access)(gpte);
-	FNAME(protect_clean_gpte)(&vcpu->arch.mmu, &pte_access, gpte);
+	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
 	pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
 			no_dirty_log && (pte_access & ACC_WRITE_MASK));
 	if (is_error_pfn(pfn))
@@ -604,7 +605,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 
 	direct_access = gw->pte_access;
 
-	top_level = vcpu->arch.mmu.root_level;
+	top_level = vcpu->arch.mmu->root_level;
 	if (top_level == PT32E_ROOT_LEVEL)
 		top_level = PT32_ROOT_LEVEL;
 	/*
@@ -616,7 +617,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 	if (FNAME(gpte_changed)(vcpu, gw, top_level))
 		goto out_gpte_changed;
 
-	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 		goto out_gpte_changed;
 
 	for (shadow_walk_init(&it, vcpu, addr);
@@ -1004,7 +1005,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		gfn = gpte_to_gfn(gpte);
 		pte_access = sp->role.access;
 		pte_access &= FNAME(gpte_access)(gpte);
-		FNAME(protect_clean_gpte)(&vcpu->arch.mmu, &pte_access, gpte);
+		FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
 
 		if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access,
 		      &nr_present))
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index d96092b35936..2936c63bcc2f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2918,18 +2918,18 @@ static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu)
 {
 	WARN_ON(mmu_is_nested(vcpu));
 	kvm_init_shadow_mmu(vcpu);
-	vcpu->arch.mmu.set_cr3           = nested_svm_set_tdp_cr3;
-	vcpu->arch.mmu.get_cr3           = nested_svm_get_tdp_cr3;
-	vcpu->arch.mmu.get_pdptr         = nested_svm_get_tdp_pdptr;
-	vcpu->arch.mmu.inject_page_fault = nested_svm_inject_npf_exit;
-	vcpu->arch.mmu.shadow_root_level = get_npt_level(vcpu);
-	reset_shadow_zero_bits_mask(vcpu, &vcpu->arch.mmu);
+	vcpu->arch.mmu->set_cr3           = nested_svm_set_tdp_cr3;
+	vcpu->arch.mmu->get_cr3           = nested_svm_get_tdp_cr3;
+	vcpu->arch.mmu->get_pdptr         = nested_svm_get_tdp_pdptr;
+	vcpu->arch.mmu->inject_page_fault = nested_svm_inject_npf_exit;
+	vcpu->arch.mmu->shadow_root_level = get_npt_level(vcpu);
+	reset_shadow_zero_bits_mask(vcpu, vcpu->arch.mmu);
 	vcpu->arch.walk_mmu              = &vcpu->arch.nested_mmu;
 }
 
 static void nested_svm_uninit_mmu_context(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.walk_mmu = &vcpu->arch.mmu;
+	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
 }
 
 static int nested_svm_check_permissions(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 06412ba46aa3..4b8dc85afe34 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5129,9 +5129,10 @@ static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
 				bool invalidate_gpa)
 {
 	if (enable_ept && (invalidate_gpa || !enable_vpid)) {
-		if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		if (!VALID_PAGE(vcpu->arch.mmu->root_hpa))
 			return;
-		ept_sync_context(construct_eptp(vcpu, vcpu->arch.mmu.root_hpa));
+		ept_sync_context(construct_eptp(vcpu,
+						vcpu->arch.mmu->root_hpa));
 	} else {
 		vpid_sync_context(vpid);
 	}
@@ -9180,7 +9181,7 @@ static int handle_invpcid(struct kvm_vcpu *vcpu)
 		}
 
 		for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
-			if (kvm_get_pcid(vcpu, vcpu->arch.mmu.prev_roots[i].cr3)
+			if (kvm_get_pcid(vcpu, vcpu->arch.mmu->prev_roots[i].cr3)
 			    == operand.pcid)
 				roots_to_free |= KVM_MMU_ROOT_PREVIOUS(i);
 
@@ -11335,9 +11336,9 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
 			VMX_EPT_EXECUTE_ONLY_BIT,
 			nested_ept_ad_enabled(vcpu),
 			nested_ept_get_cr3(vcpu));
-	vcpu->arch.mmu.set_cr3           = vmx_set_cr3;
-	vcpu->arch.mmu.get_cr3           = nested_ept_get_cr3;
-	vcpu->arch.mmu.inject_page_fault = nested_ept_inject_page_fault;
+	vcpu->arch.mmu->set_cr3           = vmx_set_cr3;
+	vcpu->arch.mmu->get_cr3           = nested_ept_get_cr3;
+	vcpu->arch.mmu->inject_page_fault = nested_ept_inject_page_fault;
 
 	vcpu->arch.walk_mmu              = &vcpu->arch.nested_mmu;
 	return 0;
@@ -11345,7 +11346,7 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
 
 static void nested_ept_uninit_mmu_context(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.walk_mmu = &vcpu->arch.mmu;
+	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
 }
 
 static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index edbf00ec56b3..d58dc530ed84 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -503,7 +503,7 @@ static bool kvm_propagate_fault(struct kvm_vcpu *vcpu, struct x86_exception *fau
 	if (mmu_is_nested(vcpu) && !fault->nested_page_fault)
 		vcpu->arch.nested_mmu.inject_page_fault(vcpu, fault);
 	else
-		vcpu->arch.mmu.inject_page_fault(vcpu, fault);
+		vcpu->arch.mmu->inject_page_fault(vcpu, fault);
 
 	return fault->nested_page_fault;
 }
@@ -602,7 +602,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3)
 	for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
 		if ((pdpte[i] & PT_PRESENT_MASK) &&
 		    (pdpte[i] &
-		     vcpu->arch.mmu.guest_rsvd_check.rsvd_bits_mask[0][2])) {
+		     vcpu->arch.mmu->guest_rsvd_check.rsvd_bits_mask[0][2])) {
 			ret = 0;
 			goto out;
 		}
@@ -4803,7 +4803,7 @@ gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 
 	/* NPT walks are always user-walks */
 	access |= PFERR_USER_MASK;
-	t_gpa  = vcpu->arch.mmu.gva_to_gpa(vcpu, gpa, access, exception);
+	t_gpa  = vcpu->arch.mmu->gva_to_gpa(vcpu, gpa, access, exception);
 
 	return t_gpa;
 }
@@ -5889,7 +5889,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gva_t cr2,
 	if (WARN_ON_ONCE(is_guest_mode(vcpu)))
 		return false;
 
-	if (!vcpu->arch.mmu.direct_map) {
+	if (!vcpu->arch.mmu->direct_map) {
 		/*
 		 * Write permission should be allowed since only
 		 * write access need to be emulated.
@@ -5922,7 +5922,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gva_t cr2,
 	kvm_release_pfn_clean(pfn);
 
 	/* The instructions are well-emulated on direct mmu. */
-	if (vcpu->arch.mmu.direct_map) {
+	if (vcpu->arch.mmu->direct_map) {
 		unsigned int indirect_shadow_pages;
 
 		spin_lock(&vcpu->kvm->mmu_lock);
@@ -5989,7 +5989,7 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
 	vcpu->arch.last_retry_eip = ctxt->eip;
 	vcpu->arch.last_retry_addr = cr2;
 
-	if (!vcpu->arch.mmu.direct_map)
+	if (!vcpu->arch.mmu->direct_map)
 		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2, NULL);
 
 	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
@@ -9327,7 +9327,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 {
 	int r;
 
-	if ((vcpu->arch.mmu.direct_map != work->arch.direct_map) ||
+	if ((vcpu->arch.mmu->direct_map != work->arch.direct_map) ||
 	      work->wakeup_all)
 		return;
 
@@ -9335,11 +9335,11 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	if (unlikely(r))
 		return;
 
-	if (!vcpu->arch.mmu.direct_map &&
-	      work->arch.cr3 != vcpu->arch.mmu.get_cr3(vcpu))
+	if (!vcpu->arch.mmu->direct_map &&
+	      work->arch.cr3 != vcpu->arch.mmu->get_cr3(vcpu))
 		return;
 
-	vcpu->arch.mmu.page_fault(vcpu, work->gva, 0, true);
+	vcpu->arch.mmu->page_fault(vcpu, work->gva, 0, true);
 }
 
 static inline u32 kvm_async_pf_hash_fn(gfn_t gfn)
-- 
2.17.1


* [PATCH v2 2/9] x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu()
  2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
  2018-09-25 17:58 ` [PATCH v2 1/9] x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU Vitaly Kuznetsov
@ 2018-09-25 17:58 ` Vitaly Kuznetsov
  2018-09-26 14:11   ` Sean Christopherson
  2018-09-25 17:58 ` [PATCH v2 3/9] x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots() Vitaly Kuznetsov
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

kvm_init_shadow_ept_mmu() doesn't set the get_pdptr() hook, and this is not
a problem only because the MMU context is already initialized and the hook
already points to kvm_pdptr_read(). As we intend to use a dedicated MMU for
shadow EPT, set this hook explicitly.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ca79ec0d8060..2bdc63f67886 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4846,6 +4846,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 	context->root_level = PT64_ROOT_4LEVEL;
 	context->direct_map = false;
 	context->base_role.word = root_page_role.word & mmu_base_role_mask.word;
+	context->get_pdptr = kvm_pdptr_read;
+
 	update_permission_bitmask(vcpu, context, true);
 	update_pkru_bitmask(vcpu, context, true);
 	update_last_nonleaf_level(vcpu, context);
-- 
2.17.1


* [PATCH v2 3/9] x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots()
  2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
  2018-09-25 17:58 ` [PATCH v2 1/9] x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU Vitaly Kuznetsov
  2018-09-25 17:58 ` [PATCH v2 2/9] x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu() Vitaly Kuznetsov
@ 2018-09-25 17:58 ` Vitaly Kuznetsov
  2018-09-26 14:18   ` Sean Christopherson
  2018-09-25 17:58 ` [PATCH v2 4/9] x86/kvm/mmu: introduce guest_mmu Vitaly Kuznetsov
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

Add an option to specify which MMU's roots we want to free. This will be
used when the nested and non-nested MMUs for L1 are split.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 3 ++-
 arch/x86/kvm/mmu.c              | 9 +++++----
 arch/x86/kvm/vmx.c              | 2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 95be30cfb33a..404c3438827b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1327,7 +1327,8 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu);
 int kvm_mmu_load(struct kvm_vcpu *vcpu);
 void kvm_mmu_unload(struct kvm_vcpu *vcpu);
 void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu);
-void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, ulong roots_to_free);
+void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+			ulong roots_to_free);
 gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 			   struct x86_exception *exception);
 gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2bdc63f67886..4491b8894337 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3464,11 +3464,11 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 }
 
 /* roots_to_free must be some combination of the KVM_MMU_ROOT_* flags */
-void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, ulong roots_to_free)
+void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+			ulong roots_to_free)
 {
 	int i;
 	LIST_HEAD(invalid_list);
-	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	bool free_active_root = roots_to_free & KVM_MMU_ROOT_CURRENT;
 
 	BUILD_BUG_ON(KVM_MMU_NUM_PREV_ROOTS >= BITS_PER_LONG);
@@ -4184,7 +4184,8 @@ static void __kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3,
 			      bool skip_tlb_flush)
 {
 	if (!fast_cr3_switch(vcpu, new_cr3, new_role, skip_tlb_flush))
-		kvm_mmu_free_roots(vcpu, KVM_MMU_ROOT_CURRENT);
+		kvm_mmu_free_roots(vcpu, vcpu->arch.mmu,
+				   KVM_MMU_ROOT_CURRENT);
 }
 
 void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush)
@@ -4966,7 +4967,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_load);
 
 void kvm_mmu_unload(struct kvm_vcpu *vcpu)
 {
-	kvm_mmu_free_roots(vcpu, KVM_MMU_ROOTS_ALL);
+	kvm_mmu_free_roots(vcpu, vcpu->arch.mmu, KVM_MMU_ROOTS_ALL);
 	WARN_ON(VALID_PAGE(vcpu->arch.mmu->root_hpa));
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_unload);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 4b8dc85afe34..2d55adab52de 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9185,7 +9185,7 @@ static int handle_invpcid(struct kvm_vcpu *vcpu)
 			    == operand.pcid)
 				roots_to_free |= KVM_MMU_ROOT_PREVIOUS(i);
 
-		kvm_mmu_free_roots(vcpu, roots_to_free);
+		kvm_mmu_free_roots(vcpu, vcpu->arch.mmu, roots_to_free);
 		/*
 		 * If neither the current cr3 nor any of the prev_roots use the
 		 * given PCID, then nothing needs to be done here because a
-- 
2.17.1


* [PATCH v2 4/9] x86/kvm/mmu: introduce guest_mmu
  2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
                   ` (2 preceding siblings ...)
  2018-09-25 17:58 ` [PATCH v2 3/9] x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots() Vitaly Kuznetsov
@ 2018-09-25 17:58 ` Vitaly Kuznetsov
  2018-09-26 14:02   ` Sean Christopherson
  2018-09-25 17:58 ` [PATCH v2 5/9] x86/kvm/mmu: get rid of redundant kvm_mmu_setup() Vitaly Kuznetsov
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

When EPT is used for a nested guest we need to re-init the MMU as a shadow
EPT MMU (nested_ept_init_mmu_context() does that). When we return from L2 to
L1, kvm_mmu_reset_context() in nested_vmx_load_cr3() resets the MMU back to
normal TDP mode. Add a special 'guest_mmu' so we can use separate root
caches; the improved hit rate is not very important for single-vCPU
performance, but it avoids contention on the mmu_lock with many vCPUs.

On the nested CPUID benchmark, with 16 vCPUs, an L2->L1->L2 vmexit
goes from 42k to 26k cycles.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Changes since v1:
- drop now unneeded local vmx variable in vmx_free_vcpu_nested
  [Sean Christopherson]
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu.c              | 15 +++++++++++----
 arch/x86/kvm/vmx.c              | 27 ++++++++++++++++++---------
 3 files changed, 32 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 404c3438827b..a3829869353b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -539,6 +539,9 @@ struct kvm_vcpu_arch {
 	/* Non-nested MMU for L1 */
 	struct kvm_mmu root_mmu;
 
+	/* L1 MMU when running nested */
+	struct kvm_mmu guest_mmu;
+
 	/*
 	 * Paging state of an L2 guest (used for nested npt)
 	 *
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4491b8894337..96c2a0b3eb53 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4967,8 +4967,10 @@ EXPORT_SYMBOL_GPL(kvm_mmu_load);
 
 void kvm_mmu_unload(struct kvm_vcpu *vcpu)
 {
-	kvm_mmu_free_roots(vcpu, vcpu->arch.mmu, KVM_MMU_ROOTS_ALL);
-	WARN_ON(VALID_PAGE(vcpu->arch.mmu->root_hpa));
+	kvm_mmu_free_roots(vcpu, &vcpu->arch.root_mmu, KVM_MMU_ROOTS_ALL);
+	WARN_ON(VALID_PAGE(vcpu->arch.root_mmu.root_hpa));
+	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
+	WARN_ON(VALID_PAGE(vcpu->arch.guest_mmu.root_hpa));
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_unload);
 
@@ -5407,13 +5409,18 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
+
 	vcpu->arch.root_mmu.root_hpa = INVALID_PAGE;
 	vcpu->arch.root_mmu.translate_gpa = translate_gpa;
-	vcpu->arch.nested_mmu.translate_gpa = translate_nested_gpa;
-
 	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
 		vcpu->arch.root_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
 
+	vcpu->arch.guest_mmu.root_hpa = INVALID_PAGE;
+	vcpu->arch.guest_mmu.translate_gpa = translate_gpa;
+	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
+		vcpu->arch.guest_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
+
+	vcpu->arch.nested_mmu.translate_gpa = translate_nested_gpa;
 	return alloc_mmu_pages(vcpu);
 }
 
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 2d55adab52de..93ff08136fc1 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8468,8 +8468,10 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
  * Free whatever needs to be freed from vmx->nested when L1 goes down, or
  * just stops using VMX.
  */
-static void free_nested(struct vcpu_vmx *vmx)
+static void free_nested(struct kvm_vcpu *vcpu)
 {
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
 	if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon)
 		return;
 
@@ -8502,6 +8504,8 @@ static void free_nested(struct vcpu_vmx *vmx)
 		vmx->nested.pi_desc = NULL;
 	}
 
+	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
+
 	free_loaded_vmcs(&vmx->nested.vmcs02);
 }
 
@@ -8510,7 +8514,7 @@ static int handle_vmoff(struct kvm_vcpu *vcpu)
 {
 	if (!nested_vmx_check_permission(vcpu))
 		return 1;
-	free_nested(to_vmx(vcpu));
+	free_nested(vcpu);
 	nested_vmx_succeed(vcpu);
 	return kvm_skip_emulated_instruction(vcpu);
 }
@@ -8541,6 +8545,8 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
 	if (vmptr == vmx->nested.current_vmptr)
 		nested_release_vmcs12(vmx);
 
+	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
+
 	kvm_vcpu_write_guest(vcpu,
 			vmptr + offsetof(struct vmcs12, launch_state),
 			&zero, sizeof(zero));
@@ -8924,6 +8930,9 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
 		}
 
 		nested_release_vmcs12(vmx);
+
+		kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu,
+				   KVM_MMU_ROOTS_ALL);
 		/*
 		 * Load VMCS12 from guest memory since it is not already
 		 * cached.
@@ -10976,12 +10985,10 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
  */
 static void vmx_free_vcpu_nested(struct kvm_vcpu *vcpu)
 {
-       struct vcpu_vmx *vmx = to_vmx(vcpu);
-
-       vcpu_load(vcpu);
-       vmx_switch_vmcs(vcpu, &vmx->vmcs01);
-       free_nested(vmx);
-       vcpu_put(vcpu);
+	vcpu_load(vcpu);
+	vmx_switch_vmcs(vcpu, &to_vmx(vcpu)->vmcs01);
+	free_nested(vcpu);
+	vcpu_put(vcpu);
 }
 
 static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
@@ -11331,6 +11338,7 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
 	if (!valid_ept_address(vcpu, nested_ept_get_cr3(vcpu)))
 		return 1;
 
+	vcpu->arch.mmu = &vcpu->arch.guest_mmu;
 	kvm_init_shadow_ept_mmu(vcpu,
 			to_vmx(vcpu)->nested.msrs.ept_caps &
 			VMX_EPT_EXECUTE_ONLY_BIT,
@@ -11346,6 +11354,7 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
 
 static void nested_ept_uninit_mmu_context(struct kvm_vcpu *vcpu)
 {
+	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
 }
 
@@ -13421,7 +13430,7 @@ static void vmx_leave_nested(struct kvm_vcpu *vcpu)
 		to_vmx(vcpu)->nested.nested_run_pending = 0;
 		nested_vmx_vmexit(vcpu, -1, 0, 0);
 	}
-	free_nested(to_vmx(vcpu));
+	free_nested(vcpu);
 }
 
 /*
-- 
2.17.1


* [PATCH v2 5/9] x86/kvm/mmu: get rid of redundant kvm_mmu_setup()
  2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
                   ` (3 preceding siblings ...)
  2018-09-25 17:58 ` [PATCH v2 4/9] x86/kvm/mmu: introduce guest_mmu Vitaly Kuznetsov
@ 2018-09-25 17:58 ` Vitaly Kuznetsov
  2018-09-26 14:15   ` Sean Christopherson
  2018-09-25 17:58 ` [PATCH v2 6/9] x86/kvm/mmu: make space for source data caching in struct kvm_mmu Vitaly Kuznetsov
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

From: Paolo Bonzini <pbonzini@redhat.com>

Just inline the contents into the sole caller; kvm_init_mmu() is now
public.

Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  1 -
 arch/x86/kvm/mmu.c              | 12 ------------
 arch/x86/kvm/x86.c              |  2 +-
 3 files changed, 1 insertion(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a3829869353b..a64da200aed1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1176,7 +1176,6 @@ void kvm_mmu_module_exit(void);
 
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_mmu_create(struct kvm_vcpu *vcpu);
-void kvm_mmu_setup(struct kvm_vcpu *vcpu);
 void kvm_mmu_init_vm(struct kvm *kvm);
 void kvm_mmu_uninit_vm(struct kvm *kvm);
 void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 96c2a0b3eb53..e59e5f49c8c2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5424,18 +5424,6 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	return alloc_mmu_pages(vcpu);
 }
 
-void kvm_mmu_setup(struct kvm_vcpu *vcpu)
-{
-	MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->root_hpa));
-
-	/*
-	 * kvm_mmu_setup() is called only on vCPU initialization.  
-	 * Therefore, no need to reset mmu roots as they are not yet
-	 * initialized.
-	 */
-	kvm_init_mmu(vcpu, false);
-}
-
 static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
 			struct kvm_memory_slot *slot,
 			struct kvm_page_track_notifier_node *node)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d58dc530ed84..b93b1ef8854b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8478,7 +8478,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 	kvm_vcpu_mtrr_init(vcpu);
 	vcpu_load(vcpu);
 	kvm_vcpu_reset(vcpu, false);
-	kvm_mmu_setup(vcpu);
+	kvm_init_mmu(vcpu, false);
 	vcpu_put(vcpu);
 	return 0;
 }
-- 
2.17.1


* [PATCH v2 6/9] x86/kvm/mmu: make space for source data caching in struct kvm_mmu
  2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
                   ` (4 preceding siblings ...)
  2018-09-25 17:58 ` [PATCH v2 5/9] x86/kvm/mmu: get rid of redundant kvm_mmu_setup() Vitaly Kuznetsov
@ 2018-09-25 17:58 ` Vitaly Kuznetsov
  2018-09-26 14:40   ` Sean Christopherson
  2018-09-25 17:58 ` [PATCH v2 7/9] x86/kvm/nVMX: introduce source data cache for kvm_init_shadow_ept_mmu() Vitaly Kuznetsov
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

In preparation for MMU reconfiguration avoidance we need space to cache
source data. As this partially intersects with kvm_mmu_page_role, create a
64-bit sized union kvm_mmu_role holding both the base and the extended data.
No functional change.
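
A brief illustration of why the two 32-bit roles are overlaid in one 64-bit
union: a later check for "nothing changed" becomes a single integer
comparison covering both halves. The sketch below uses made-up field names,
not the full kvm_mmu_page_role layout, and relies on the same union-aliasing
assumption the patch documents with BUILD_BUG_ONs.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

union page_role {		/* sketch of kvm_mmu_page_role */
	uint32_t word;
	struct {
		unsigned int level:4;
		unsigned int direct:1;
	};
};

union extended_role {		/* sketch of kvm_mmu_extended_role */
	uint32_t word;
	struct {
		unsigned int valid:1;
		unsigned int cr4_smep:1;
	};
};

union mmu_role {
	uint64_t as_u64;
	struct {
		union page_role base;
		union extended_role ext;
	};
};

int main(void)
{
	union mmu_role old = { .as_u64 = 0 }, new = { .as_u64 = 0 };

	/* The BUILD_BUG_ONs added by the patch guard the same size assumptions. */
	static_assert(sizeof(union page_role) == sizeof(uint32_t), "base must stay 32 bit");
	static_assert(sizeof(union mmu_role) == sizeof(uint64_t), "role must stay 64 bit");

	old.base.level = 4;
	old.ext.valid = 1;
	new = old;
	new.ext.cr4_smep = 1;		/* a change in either half flips as_u64 */

	printf("changed: %d\n", old.as_u64 != new.as_u64);	/* prints 1 */
	return 0;
}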

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
Changes since v1:
- Rename:
  kvm_mmu_scache -> kvm_mmu_extended_role
  mmu_role.scache -> mmu_role.ext
  mmu_role.base_role -> mmu_role.base
  [Sean Christopherson]
- Add BUILD_BUG_ONs checking union sizes.
  [Sean Christopherson]
- Use explicit u32 for kvm_mmu_page_role/kvm_mmu_extended_role.word and
  u64 for kvm_mmu_role.as_u64
---
 arch/x86/include/asm/kvm_host.h | 16 ++++++++++++++--
 arch/x86/kvm/mmu.c              | 29 ++++++++++++++++++++++-------
 arch/x86/kvm/vmx.c              |  2 +-
 3 files changed, 37 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a64da200aed1..1821b0215230 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -247,7 +247,7 @@ struct kvm_mmu_memory_cache {
  * @nxe, @cr0_wp, @smep_andnot_wp and @smap_andnot_wp.
  */
 union kvm_mmu_page_role {
-	unsigned word;
+	u32 word;
 	struct {
 		unsigned level:4;
 		unsigned cr4_pae:1;
@@ -273,6 +273,18 @@ union kvm_mmu_page_role {
 	};
 };
 
+union kvm_mmu_extended_role {
+	u32 word;
+};
+
+union kvm_mmu_role {
+	u64 as_u64;
+	struct {
+		union kvm_mmu_page_role base;
+		union kvm_mmu_extended_role ext;
+	};
+};
+
 struct kvm_rmap_head {
 	unsigned long val;
 };
@@ -360,7 +372,7 @@ struct kvm_mmu {
 	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			   u64 *spte, const void *pte);
 	hpa_t root_hpa;
-	union kvm_mmu_page_role base_role;
+	union kvm_mmu_role mmu_role;
 	u8 root_level;
 	u8 shadow_root_level;
 	u8 ept_ad;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e59e5f49c8c2..bb1ef0f68f8e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2359,7 +2359,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	int collisions = 0;
 	LIST_HEAD(invalid_list);
 
-	role = vcpu->arch.mmu->base_role;
+	role = vcpu->arch.mmu->mmu_role.base;
 	role.level = level;
 	role.direct = direct;
 	if (role.direct)
@@ -4407,7 +4407,8 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
 void
 reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 {
-	bool uses_nx = context->nx || context->base_role.smep_andnot_wp;
+	bool uses_nx = context->nx ||
+		context->mmu_role.base.smep_andnot_wp;
 	struct rsvd_bits_validate *shadow_zero_check;
 	int i;
 
@@ -4726,7 +4727,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *context = vcpu->arch.mmu;
 
-	context->base_role.word = mmu_base_role_mask.word &
+	context->mmu_role.base.word = mmu_base_role_mask.word &
 				  kvm_calc_tdp_mmu_root_page_role(vcpu).word;
 	context->page_fault = tdp_page_fault;
 	context->sync_page = nonpaging_sync_page;
@@ -4807,7 +4808,7 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 	else
 		paging32_init_context(vcpu, context);
 
-	context->base_role.word = mmu_base_role_mask.word &
+	context->mmu_role.base.word = mmu_base_role_mask.word &
 				  kvm_calc_shadow_mmu_root_page_role(vcpu).word;
 	reset_shadow_zero_bits_mask(vcpu, context);
 }
@@ -4816,7 +4817,7 @@ EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
 static union kvm_mmu_page_role
 kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
 {
-	union kvm_mmu_page_role role = vcpu->arch.mmu->base_role;
+	union kvm_mmu_page_role role = vcpu->arch.mmu->mmu_role.base;
 
 	role.level = PT64_ROOT_4LEVEL;
 	role.direct = false;
@@ -4846,7 +4847,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 	context->update_pte = ept_update_pte;
 	context->root_level = PT64_ROOT_4LEVEL;
 	context->direct_map = false;
-	context->base_role.word = root_page_role.word & mmu_base_role_mask.word;
+	context->mmu_role.base.word =
+		root_page_role.word & mmu_base_role_mask.word;
 	context->get_pdptr = kvm_pdptr_read;
 
 	update_permission_bitmask(vcpu, context, true);
@@ -5161,10 +5163,13 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 
 		local_flush = true;
 		while (npte--) {
+			unsigned int base_role =
+				vcpu->arch.mmu->mmu_role.base.word;
+
 			entry = *spte;
 			mmu_page_zap_pte(vcpu->kvm, sp, spte);
 			if (gentry &&
-			      !((sp->role.word ^ vcpu->arch.mmu->base_role.word)
+			      !((sp->role.word ^ base_role)
 			      & mmu_base_role_mask.word) && rmap_can_add(vcpu))
 				mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
 			if (need_remote_flush(entry, *spte))
@@ -5861,6 +5866,16 @@ int kvm_mmu_module_init(void)
 {
 	int ret = -ENOMEM;
 
+	/*
+	 * MMU roles use union aliasing which is, generally speaking, an
+	 * undefined behavior. However, we supposedly know how compilers behave
+	 * and the current status quo is unlikely to change. Guardians below are
+	 * supposed to let us know if the assumption becomes false.
+	 */
+	BUILD_BUG_ON(sizeof(union kvm_mmu_page_role) != sizeof(u32));
+	BUILD_BUG_ON(sizeof(union kvm_mmu_extended_role) != sizeof(u32));
+	BUILD_BUG_ON(sizeof(union kvm_mmu_role) != sizeof(u64));
+
 	kvm_mmu_reset_all_pte_masks();
 
 	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 93ff08136fc1..c56a80c15c4f 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9321,7 +9321,7 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
 
 		kvm_mmu_unload(vcpu);
 		mmu->ept_ad = accessed_dirty;
-		mmu->base_role.ad_disabled = !accessed_dirty;
+		mmu->mmu_role.base.ad_disabled = !accessed_dirty;
 		vmcs12->ept_pointer = address;
 		/*
 		 * TODO: Check what's the correct approach in case
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 7/9] x86/kvm/nVMX: introduce source data cache for kvm_init_shadow_ept_mmu()
  2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
                   ` (5 preceding siblings ...)
  2018-09-25 17:58 ` [PATCH v2 6/9] x86/kvm/mmu: make space for source data caching in struct kvm_mmu Vitaly Kuznetsov
@ 2018-09-25 17:58 ` Vitaly Kuznetsov
  2018-09-26 15:06   ` Sean Christopherson
  2018-09-25 17:58 ` [PATCH v2 8/9] x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed Vitaly Kuznetsov
  2018-09-25 17:58 ` [PATCH v2 9/9] x86/kvm/mmu: check if MMU reconfiguration is needed in init_kvm_nested_mmu() Vitaly Kuznetsov
  8 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

MMU re-initialization is expensive, in particular,
update_permission_bitmask() and update_pkru_bitmask() are.

Cache the data used to setup shadow EPT MMU and avoid full re-init when
it is unchanged.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 14 +++++++++
 arch/x86/kvm/mmu.c              | 51 ++++++++++++++++++++++++---------
 2 files changed, 52 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1821b0215230..87ddaa1579e7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -274,7 +274,21 @@ union kvm_mmu_page_role {
 };
 
 union kvm_mmu_extended_role {
+/*
+ * This structure complements kvm_mmu_page_role caching everything needed for
+ * MMU configuration. If nothing in both these structures changed, MMU
+ * re-configuration can be skipped. @valid bit is set on first usage so we don't
+ * treat all-zero structure as valid data.
+ */
 	u32 word;
+	struct {
+		unsigned int valid:1;
+		unsigned int execonly:1;
+		unsigned int cr4_pse:1;
+		unsigned int cr4_pke:1;
+		unsigned int cr4_smap:1;
+		unsigned int cr4_smep:1;
+	};
 };
 
 union kvm_mmu_role {
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index bb1ef0f68f8e..d8611914544a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4708,6 +4708,24 @@ static void paging32E_init_context(struct kvm_vcpu *vcpu,
 	paging64_init_context_common(vcpu, context, PT32E_ROOT_LEVEL);
 }
 
+static union kvm_mmu_role
+kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu)
+{
+	union kvm_mmu_role role = {0};
+
+	role.base.access = ACC_ALL;
+	role.base.cr0_wp = is_write_protection(vcpu);
+
+	role.ext.cr4_smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP) != 0;
+	role.ext.cr4_smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP) != 0;
+	role.ext.cr4_pse = !!is_pse(vcpu);
+	role.ext.cr4_pke = kvm_read_cr4_bits(vcpu, X86_CR4_PKE) != 0;
+
+	role.ext.valid = 1;
+
+	return role;
+}
+
 static union kvm_mmu_page_role
 kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu)
 {
@@ -4814,16 +4832,18 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
 
-static union kvm_mmu_page_role
-kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
+static union kvm_mmu_role
+kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
+				   bool execonly)
 {
-	union kvm_mmu_page_role role = vcpu->arch.mmu->mmu_role.base;
+	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu);
 
-	role.level = PT64_ROOT_4LEVEL;
-	role.direct = false;
-	role.ad_disabled = !accessed_dirty;
-	role.guest_mode = true;
-	role.access = ACC_ALL;
+	role.base.level = PT64_ROOT_4LEVEL;
+	role.base.direct = false;
+	role.base.ad_disabled = !accessed_dirty;
+	role.base.guest_mode = true;
+
+	role.ext.execonly = execonly;
 
 	return role;
 }
@@ -4832,10 +4852,16 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 			     bool accessed_dirty, gpa_t new_eptp)
 {
 	struct kvm_mmu *context = vcpu->arch.mmu;
-	union kvm_mmu_page_role root_page_role =
-		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty);
+	union kvm_mmu_role new_role =
+		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
+						   execonly);
+
+	__kvm_mmu_new_cr3(vcpu, new_eptp, new_role.base, false);
+
+	new_role.base.word &= mmu_base_role_mask.word;
+	if (new_role.as_u64 == context->mmu_role.as_u64)
+		return;
 
-	__kvm_mmu_new_cr3(vcpu, new_eptp, root_page_role, false);
 	context->shadow_root_level = PT64_ROOT_4LEVEL;
 
 	context->nx = true;
@@ -4847,8 +4873,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 	context->update_pte = ept_update_pte;
 	context->root_level = PT64_ROOT_4LEVEL;
 	context->direct_map = false;
-	context->mmu_role.base.word =
-		root_page_role.word & mmu_base_role_mask.word;
+	context->mmu_role.as_u64 = new_role.as_u64;
 	context->get_pdptr = kvm_pdptr_read;
 
 	update_permission_bitmask(vcpu, context, true);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 8/9] x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed
  2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
                   ` (6 preceding siblings ...)
  2018-09-25 17:58 ` [PATCH v2 7/9] x86/kvm/nVMX: introduce source data cache for kvm_init_shadow_ept_mmu() Vitaly Kuznetsov
@ 2018-09-25 17:58 ` Vitaly Kuznetsov
  2018-09-26 15:15   ` Sean Christopherson
  2018-09-25 17:58 ` [PATCH v2 9/9] x86/kvm/mmu: check if MMU reconfiguration is needed in init_kvm_nested_mmu() Vitaly Kuznetsov
  8 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

MMU reconfiguration in init_kvm_tdp_mmu()/kvm_init_shadow_mmu() can be
avoided if the source data used to configure it didn't change; enhance
kvm_mmu_scache with the required fields and consolidate common code in
kvm_calc_mmu_role_common().

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  2 +
 arch/x86/kvm/mmu.c              | 86 +++++++++++++++++++--------------
 2 files changed, 52 insertions(+), 36 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 87ddaa1579e7..609811066580 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -284,10 +284,12 @@ union kvm_mmu_extended_role {
 	struct {
 		unsigned int valid:1;
 		unsigned int execonly:1;
+		unsigned int cr0_pg:1;
 		unsigned int cr4_pse:1;
 		unsigned int cr4_pke:1;
 		unsigned int cr4_smap:1;
 		unsigned int cr4_smep:1;
+		unsigned int cr4_la57:1;
 	};
 };
 
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d8611914544a..f676c14d5c62 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4709,34 +4709,40 @@ static void paging32E_init_context(struct kvm_vcpu *vcpu,
 }
 
 static union kvm_mmu_role
-kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu)
+kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu, bool mmu_init)
 {
 	union kvm_mmu_role role = {0};
 
 	role.base.access = ACC_ALL;
+	role.base.nxe = !!is_nx(vcpu);
+	role.base.cr4_pae = !!is_pae(vcpu);
 	role.base.cr0_wp = is_write_protection(vcpu);
+	role.base.smm = is_smm(vcpu);
+	role.base.guest_mode = is_guest_mode(vcpu);
 
+	if (!mmu_init)
+		return role;
+
+	role.ext.cr0_pg = !!is_paging(vcpu);
 	role.ext.cr4_smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP) != 0;
 	role.ext.cr4_smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP) != 0;
 	role.ext.cr4_pse = !!is_pse(vcpu);
 	role.ext.cr4_pke = kvm_read_cr4_bits(vcpu, X86_CR4_PKE) != 0;
+	role.ext.cr4_la57 = kvm_read_cr4_bits(vcpu, X86_CR4_LA57) != 0;
 
 	role.ext.valid = 1;
 
 	return role;
 }
 
-static union kvm_mmu_page_role
-kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu)
+static union kvm_mmu_role
+kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, bool mmu_init)
 {
-	union kvm_mmu_page_role role = {0};
+	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, mmu_init);
 
-	role.guest_mode = is_guest_mode(vcpu);
-	role.smm = is_smm(vcpu);
-	role.ad_disabled = (shadow_accessed_mask == 0);
-	role.level = kvm_x86_ops->get_tdp_level(vcpu);
-	role.direct = true;
-	role.access = ACC_ALL;
+	role.base.ad_disabled = (shadow_accessed_mask == 0);
+	role.base.level = kvm_x86_ops->get_tdp_level(vcpu);
+	role.base.direct = true;
 
 	return role;
 }
@@ -4744,9 +4750,14 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu)
 static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *context = vcpu->arch.mmu;
+	union kvm_mmu_role new_role =
+		kvm_calc_tdp_mmu_root_page_role(vcpu, true);
 
-	context->mmu_role.base.word = mmu_base_role_mask.word &
-				  kvm_calc_tdp_mmu_root_page_role(vcpu).word;
+	new_role.base.word &= mmu_base_role_mask.word;
+	if (new_role.as_u64 == context->mmu_role.as_u64)
+		return;
+
+	context->mmu_role.as_u64 = new_role.as_u64;
 	context->page_fault = tdp_page_fault;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = nonpaging_invlpg;
@@ -4786,29 +4797,23 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	reset_tdp_shadow_zero_bits_mask(vcpu, context);
 }
 
-static union kvm_mmu_page_role
-kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu)
-{
-	union kvm_mmu_page_role role = {0};
-	bool smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
-	bool smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
-
-	role.nxe = is_nx(vcpu);
-	role.cr4_pae = !!is_pae(vcpu);
-	role.cr0_wp  = is_write_protection(vcpu);
-	role.smep_andnot_wp = smep && !is_write_protection(vcpu);
-	role.smap_andnot_wp = smap && !is_write_protection(vcpu);
-	role.guest_mode = is_guest_mode(vcpu);
-	role.smm = is_smm(vcpu);
-	role.direct = !is_paging(vcpu);
-	role.access = ACC_ALL;
+static union kvm_mmu_role
+kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool mmu_init)
+{
+	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, mmu_init);
+
+	role.base.smep_andnot_wp = role.ext.cr4_smep &&
+		!is_write_protection(vcpu);
+	role.base.smap_andnot_wp = role.ext.cr4_smap &&
+		!is_write_protection(vcpu);
+	role.base.direct = !is_paging(vcpu);
 
 	if (!is_long_mode(vcpu))
-		role.level = PT32E_ROOT_LEVEL;
+		role.base.level = PT32E_ROOT_LEVEL;
 	else if (is_la57_mode(vcpu))
-		role.level = PT64_ROOT_5LEVEL;
+		role.base.level = PT64_ROOT_5LEVEL;
 	else
-		role.level = PT64_ROOT_4LEVEL;
+		role.base.level = PT64_ROOT_4LEVEL;
 
 	return role;
 }
@@ -4816,6 +4821,12 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu)
 void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *context = vcpu->arch.mmu;
+	union kvm_mmu_role new_role =
+		kvm_calc_shadow_mmu_root_page_role(vcpu, true);
+
+	new_role.base.word &= mmu_base_role_mask.word;
+	if (new_role.as_u64 == context->mmu_role.as_u64)
+		return;
 
 	if (!is_paging(vcpu))
 		nonpaging_init_context(vcpu, context);
@@ -4826,8 +4837,7 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 	else
 		paging32_init_context(vcpu, context);
 
-	context->mmu_role.base.word = mmu_base_role_mask.word &
-				  kvm_calc_shadow_mmu_root_page_role(vcpu).word;
+	context->mmu_role.as_u64 = new_role.as_u64;
 	reset_shadow_zero_bits_mask(vcpu, context);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
@@ -4836,7 +4846,7 @@ static union kvm_mmu_role
 kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
 				   bool execonly)
 {
-	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu);
+	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, true);
 
 	role.base.level = PT64_ROOT_4LEVEL;
 	role.base.direct = false;
@@ -4961,10 +4971,14 @@ EXPORT_SYMBOL_GPL(kvm_init_mmu);
 static union kvm_mmu_page_role
 kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu)
 {
+	union kvm_mmu_role role;
+
 	if (tdp_enabled)
-		return kvm_calc_tdp_mmu_root_page_role(vcpu);
+		role = kvm_calc_tdp_mmu_root_page_role(vcpu, false);
 	else
-		return kvm_calc_shadow_mmu_root_page_role(vcpu);
+		role = kvm_calc_shadow_mmu_root_page_role(vcpu, false);
+
+	return role.base;
 }
 
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 9/9] x86/kvm/mmu: check if MMU reconfiguration is needed in init_kvm_nested_mmu()
  2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
                   ` (7 preceding siblings ...)
  2018-09-25 17:58 ` [PATCH v2 8/9] x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed Vitaly Kuznetsov
@ 2018-09-25 17:58 ` Vitaly Kuznetsov
  2018-09-26 15:17   ` Sean Christopherson
  8 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-25 17:58 UTC (permalink / raw)
  To: kvm
  Cc: Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, Sean Christopherson, linux-kernel

We don't use the root page role for nested_mmu; however, optimizing out
re-initialization when nothing has changed is still valuable as this
is done on every nested vmentry.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/x86/kvm/mmu.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f676c14d5c62..c6d536f12560 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4907,8 +4907,14 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu)
 
 static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
 {
+	union kvm_mmu_role new_role = kvm_calc_mmu_role_common(vcpu, true);
 	struct kvm_mmu *g_context = &vcpu->arch.nested_mmu;
 
+	new_role.base.word &= mmu_base_role_mask.word;
+	if (new_role.as_u64 == g_context->mmu_role.as_u64)
+		return;
+
+	g_context->mmu_role.as_u64 = new_role.as_u64;
 	g_context->get_cr3           = get_cr3;
 	g_context->get_pdptr         = kvm_pdptr_read;
 	g_context->inject_page_fault = kvm_inject_page_fault;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 4/9] x86/kvm/mmu: introduce guest_mmu
  2018-09-25 17:58 ` [PATCH v2 4/9] x86/kvm/mmu: introduce guest_mmu Vitaly Kuznetsov
@ 2018-09-26 14:02   ` Sean Christopherson
  2018-09-26 17:18     ` Vitaly Kuznetsov
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2018-09-26 14:02 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

On Tue, Sep 25, 2018 at 07:58:39PM +0200, Vitaly Kuznetsov wrote:
> When EPT is used for nested guest we need to re-init MMU as shadow
> EPT MMU (nested_ept_init_mmu_context() does that). When we return back
> from L2 to L1 kvm_mmu_reset_context() in nested_vmx_load_cr3() resets
> MMU back to normal TDP mode. Add a special 'guest_mmu' so we can use
> separate root caches; the improved hit rate is not very important for
> single vCPU performance, but it avoids contention on the mmu_lock for
> many vCPUs.
> 
> On the nested CPUID benchmark, with 16 vCPUs, an L2->L1->L2 vmexit
> goes from 42k to 26k cycles.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> Changes since v1:
> - drop now unneded local vmx variable in vmx_free_vcpu_nested
>   [Sean Christopherson]
> ---
>  arch/x86/include/asm/kvm_host.h |  3 +++
>  arch/x86/kvm/mmu.c              | 15 +++++++++++----
>  arch/x86/kvm/vmx.c              | 27 ++++++++++++++++++---------
>  3 files changed, 32 insertions(+), 13 deletions(-)

...

> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 2d55adab52de..93ff08136fc1 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -8468,8 +8468,10 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
>   * Free whatever needs to be freed from vmx->nested when L1 goes down, or
>   * just stops using VMX.
>   */
> -static void free_nested(struct vcpu_vmx *vmx)
> +static void free_nested(struct kvm_vcpu *vcpu)
>  {
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +
>  	if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon)
>  		return;
>  
> @@ -8502,6 +8504,8 @@ static void free_nested(struct vcpu_vmx *vmx)
>  		vmx->nested.pi_desc = NULL;
>  	}
>  
> +	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
> +
>  	free_loaded_vmcs(&vmx->nested.vmcs02);
>  }
>  
> @@ -8510,7 +8514,7 @@ static int handle_vmoff(struct kvm_vcpu *vcpu)
>  {
>  	if (!nested_vmx_check_permission(vcpu))
>  		return 1;
> -	free_nested(to_vmx(vcpu));
> +	free_nested(vcpu);
>  	nested_vmx_succeed(vcpu);
>  	return kvm_skip_emulated_instruction(vcpu);
>  }
> @@ -8541,6 +8545,8 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
>  	if (vmptr == vmx->nested.current_vmptr)
>  		nested_release_vmcs12(vmx);
>  
> +	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);

Shouldn't we only free guest_mmu if VMCLEAR is targeting current_vmptr?
Assuming that's the case, we could put the call to kvm_mmu_free_roots()
in nested_release_vmcs12() instead of calling it from handle_vmclear()
and handle_vmptrld().
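
Roughly something like this (just a sketch; it assumes nested_release_vmcs12()
either grows a kvm_vcpu parameter or can get at the vcpu some other way, and
that everything it tears down today stays as-is):

	static void nested_release_vmcs12(struct kvm_vcpu *vcpu)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		/* ... existing teardown of the cached vmcs12 state ... */

		/* The guest_mmu roots belong to the vmcs12 being released. */
		kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu,
				   KVM_MMU_ROOTS_ALL);
	}

handle_vmclear() already calls nested_release_vmcs12() only when
vmptr == vmx->nested.current_vmptr, so the roots would get freed exactly
when the current vmcs12 goes away.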

> +
>  	kvm_vcpu_write_guest(vcpu,
>  			vmptr + offsetof(struct vmcs12, launch_state),
>  			&zero, sizeof(zero));
> @@ -8924,6 +8930,9 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
>  		}
>  
>  		nested_release_vmcs12(vmx);
> +
> +		kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu,
> +				   KVM_MMU_ROOTS_ALL);
>  		/*
>  		 * Load VMCS12 from guest memory since it is not already
>  		 * cached.
> @@ -10976,12 +10985,10 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
>   */
>  static void vmx_free_vcpu_nested(struct kvm_vcpu *vcpu)
>  {
> -       struct vcpu_vmx *vmx = to_vmx(vcpu);
> -
> -       vcpu_load(vcpu);
> -       vmx_switch_vmcs(vcpu, &vmx->vmcs01);
> -       free_nested(vmx);
> -       vcpu_put(vcpu);
> +	vcpu_load(vcpu);
> +	vmx_switch_vmcs(vcpu, &to_vmx(vcpu)->vmcs01);
> +	free_nested(vcpu);
> +	vcpu_put(vcpu);
>  }
>  
>  static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
> @@ -11331,6 +11338,7 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
>  	if (!valid_ept_address(vcpu, nested_ept_get_cr3(vcpu)))
>  		return 1;
>  
> +	vcpu->arch.mmu = &vcpu->arch.guest_mmu;
>  	kvm_init_shadow_ept_mmu(vcpu,
>  			to_vmx(vcpu)->nested.msrs.ept_caps &
>  			VMX_EPT_EXECUTE_ONLY_BIT,
> @@ -11346,6 +11354,7 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
>  
>  static void nested_ept_uninit_mmu_context(struct kvm_vcpu *vcpu)
>  {
> +	vcpu->arch.mmu = &vcpu->arch.root_mmu;
>  	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
>  }
>  
> @@ -13421,7 +13430,7 @@ static void vmx_leave_nested(struct kvm_vcpu *vcpu)
>  		to_vmx(vcpu)->nested.nested_run_pending = 0;
>  		nested_vmx_vmexit(vcpu, -1, 0, 0);
>  	}
> -	free_nested(to_vmx(vcpu));
> +	free_nested(vcpu);
>  }
>  
>  /*
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 2/9] x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu()
  2018-09-25 17:58 ` [PATCH v2 2/9] x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu() Vitaly Kuznetsov
@ 2018-09-26 14:11   ` Sean Christopherson
  2018-09-26 17:16     ` Vitaly Kuznetsov
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2018-09-26 14:11 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

On Tue, Sep 25, 2018 at 07:58:37PM +0200, Vitaly Kuznetsov wrote:
> kvm_init_shadow_ept_mmu() doesn't set get_pdptr() hook and is this
> not a problem just because MMU context is already initialized and this
> hook points to kvm_pdptr_read(). As we're intended to use a dedicated
> MMU for shadow EPT MMU set this hook explicitly.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/mmu.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index ca79ec0d8060..2bdc63f67886 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4846,6 +4846,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>  	context->root_level = PT64_ROOT_4LEVEL;
>  	context->direct_map = false;
>  	context->base_role.word = root_page_role.word & mmu_base_role_mask.word;
> +	context->get_pdptr = kvm_pdptr_read;

Would it make sense to set this in nested_ept_init_mmu_context()
along with set_cr3, get_cr3 and inject_page_fault?  The other MMU
flows set them as a package deal.
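
I.e. something like this in nested_ept_init_mmu_context(), next to the other
hooks (sketch only; I'm writing the existing assignments from memory, so
double-check the names):

	vcpu->arch.mmu->set_cr3           = vmx_set_cr3;
	vcpu->arch.mmu->get_cr3           = nested_ept_get_cr3;
	vcpu->arch.mmu->get_pdptr         = kvm_pdptr_read;	/* moved here */
	vcpu->arch.mmu->inject_page_fault = nested_ept_inject_page_fault;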

Either way...

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

> +
>  	update_permission_bitmask(vcpu, context, true);
>  	update_pkru_bitmask(vcpu, context, true);
>  	update_last_nonleaf_level(vcpu, context);

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 5/9] x86/kvm/mmu: get rid of redundant kvm_mmu_setup()
  2018-09-25 17:58 ` [PATCH v2 5/9] x86/kvm/mmu: get rid of redundant kvm_mmu_setup() Vitaly Kuznetsov
@ 2018-09-26 14:15   ` Sean Christopherson
  0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2018-09-26 14:15 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

On Tue, Sep 25, 2018 at 07:58:40PM +0200, Vitaly Kuznetsov wrote:
> From: Paolo Bonzini <pbonzini@redhat.com>
> 
> Just inline the contents into the sole caller, kvm_init_mmu is now
> public.
> 
> Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 1/9] x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU
  2018-09-25 17:58 ` [PATCH v2 1/9] x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU Vitaly Kuznetsov
@ 2018-09-26 14:17   ` Sean Christopherson
  0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2018-09-26 14:17 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

On Tue, Sep 25, 2018 at 07:58:36PM +0200, Vitaly Kuznetsov wrote:
> As a preparation to full MMU split between L1 and L2 make vcpu->arch.mmu
> a pointer to the currently used mmu. For now, this is always
> vcpu->arch.root_mmu. No functional change.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/9] x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots()
  2018-09-25 17:58 ` [PATCH v2 3/9] x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots() Vitaly Kuznetsov
@ 2018-09-26 14:18   ` Sean Christopherson
  0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2018-09-26 14:18 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

On Tue, Sep 25, 2018 at 07:58:38PM +0200, Vitaly Kuznetsov wrote:
> Add an option to specify which MMU root we want to free. This will
> be used when nested and non-nested MMUs for L1 are split.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 6/9] x86/kvm/mmu: make space for source data caching in struct kvm_mmu
  2018-09-25 17:58 ` [PATCH v2 6/9] x86/kvm/mmu: make space for source data caching in struct kvm_mmu Vitaly Kuznetsov
@ 2018-09-26 14:40   ` Sean Christopherson
  2018-09-26 17:19     ` Vitaly Kuznetsov
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2018-09-26 14:40 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

On Tue, Sep 25, 2018 at 07:58:41PM +0200, Vitaly Kuznetsov wrote:
> In preparation to MMU reconfiguration avoidance we need a space to
> cache source data. As this partially intersects with kvm_mmu_page_role,
> create 64bit sized union kvm_mmu_role holding both base and extended data.
> No functional change.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---

One nit below, other than that...

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index e59e5f49c8c2..bb1ef0f68f8e 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2359,7 +2359,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>  	int collisions = 0;
>  	LIST_HEAD(invalid_list);
>  
> -	role = vcpu->arch.mmu->base_role;
> +	role = vcpu->arch.mmu->mmu_role.base;
>  	role.level = level;
>  	role.direct = direct;
>  	if (role.direct)
> @@ -4407,7 +4407,8 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
>  void
>  reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
>  {
> -	bool uses_nx = context->nx || context->base_role.smep_andnot_wp;
> +	bool uses_nx = context->nx ||
> +		context->mmu_role.base.smep_andnot_wp;
>  	struct rsvd_bits_validate *shadow_zero_check;
>  	int i;
>  
> @@ -4726,7 +4727,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_mmu *context = vcpu->arch.mmu;
>  
> -	context->base_role.word = mmu_base_role_mask.word &
> +	context->mmu_role.base.word = mmu_base_role_mask.word &
>  				  kvm_calc_tdp_mmu_root_page_role(vcpu).word;
>  	context->page_fault = tdp_page_fault;
>  	context->sync_page = nonpaging_sync_page;
> @@ -4807,7 +4808,7 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
>  	else
>  		paging32_init_context(vcpu, context);
>  
> -	context->base_role.word = mmu_base_role_mask.word &
> +	context->mmu_role.base.word = mmu_base_role_mask.word &
>  				  kvm_calc_shadow_mmu_root_page_role(vcpu).word;
>  	reset_shadow_zero_bits_mask(vcpu, context);
>  }
> @@ -4816,7 +4817,7 @@ EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
>  static union kvm_mmu_page_role
>  kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
>  {
> -	union kvm_mmu_page_role role = vcpu->arch.mmu->base_role;
> +	union kvm_mmu_page_role role = vcpu->arch.mmu->mmu_role.base;
>  
>  	role.level = PT64_ROOT_4LEVEL;
>  	role.direct = false;
> @@ -4846,7 +4847,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>  	context->update_pte = ept_update_pte;
>  	context->root_level = PT64_ROOT_4LEVEL;
>  	context->direct_map = false;
> -	context->base_role.word = root_page_role.word & mmu_base_role_mask.word;
> +	context->mmu_role.base.word =
> +		root_page_role.word & mmu_base_role_mask.word;
>  	context->get_pdptr = kvm_pdptr_read;
>  
>  	update_permission_bitmask(vcpu, context, true);
> @@ -5161,10 +5163,13 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
>  
>  		local_flush = true;
>  		while (npte--) {
> +			unsigned int base_role =

Nit: should this be a u32 to match mmu_role.base.word?

> +				vcpu->arch.mmu->mmu_role.base.word;
> +
>  			entry = *spte;
>  			mmu_page_zap_pte(vcpu->kvm, sp, spte);
>  			if (gentry &&
> -			      !((sp->role.word ^ vcpu->arch.mmu->base_role.word)
> +			      !((sp->role.word ^ base_role)
>  			      & mmu_base_role_mask.word) && rmap_can_add(vcpu))
>  				mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
>  			if (need_remote_flush(entry, *spte))
> @@ -5861,6 +5866,16 @@ int kvm_mmu_module_init(void)
>  {
>  	int ret = -ENOMEM;
>  
> +	/*
> +	 * MMU roles use union aliasing which is, generally speaking, an
> +	 * undefined behavior. However, we supposedly know how compilers behave
> +	 * and the current status quo is unlikely to change. Guardians below are
> +	 * supposed to let us know if the assumption becomes false.
> +	 */
> +	BUILD_BUG_ON(sizeof(union kvm_mmu_page_role) != sizeof(u32));
> +	BUILD_BUG_ON(sizeof(union kvm_mmu_extended_role) != sizeof(u32));
> +	BUILD_BUG_ON(sizeof(union kvm_mmu_role) != sizeof(u64));
> +
>  	kvm_mmu_reset_all_pte_masks();
>  
>  	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 93ff08136fc1..c56a80c15c4f 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -9321,7 +9321,7 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
>  
>  		kvm_mmu_unload(vcpu);
>  		mmu->ept_ad = accessed_dirty;
> -		mmu->base_role.ad_disabled = !accessed_dirty;
> +		mmu->mmu_role.base.ad_disabled = !accessed_dirty;
>  		vmcs12->ept_pointer = address;
>  		/*
>  		 * TODO: Check what's the correct approach in case
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 7/9] x86/kvm/nVMX: introduce source data cache for kvm_init_shadow_ept_mmu()
  2018-09-25 17:58 ` [PATCH v2 7/9] x86/kvm/nVMX: introduce source data cache for kvm_init_shadow_ept_mmu() Vitaly Kuznetsov
@ 2018-09-26 15:06   ` Sean Christopherson
  2018-09-26 17:30     ` Vitaly Kuznetsov
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2018-09-26 15:06 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

On Tue, Sep 25, 2018 at 07:58:42PM +0200, Vitaly Kuznetsov wrote:
> MMU re-initialization is expensive, in particular,
> update_permission_bitmask() and update_pkru_bitmask() are.
> 
> Cache the data used to setup shadow EPT MMU and avoid full re-init when
> it is unchanged.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 14 +++++++++
>  arch/x86/kvm/mmu.c              | 51 ++++++++++++++++++++++++---------
>  2 files changed, 52 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 1821b0215230..87ddaa1579e7 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -274,7 +274,21 @@ union kvm_mmu_page_role {
>  };
>  
>  union kvm_mmu_extended_role {
> +/*
> + * This structure complements kvm_mmu_page_role caching everything needed for
> + * MMU configuration. If nothing in both these structures changed, MMU
> + * re-configuration can be skipped. @valid bit is set on first usage so we don't
> + * treat all-zero structure as valid data.
> + */
>  	u32 word;
> +	struct {
> +		unsigned int valid:1;
> +		unsigned int execonly:1;
> +		unsigned int cr4_pse:1;
> +		unsigned int cr4_pke:1;
> +		unsigned int cr4_smap:1;
> +		unsigned int cr4_smep:1;
> +	};
>  };
>  
>  union kvm_mmu_role {
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index bb1ef0f68f8e..d8611914544a 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4708,6 +4708,24 @@ static void paging32E_init_context(struct kvm_vcpu *vcpu,
>  	paging64_init_context_common(vcpu, context, PT32E_ROOT_LEVEL);
>  }
>  
> +static union kvm_mmu_role
> +kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu)
> +{
> +	union kvm_mmu_role role = {0};
> +
> +	role.base.access = ACC_ALL;
> +	role.base.cr0_wp = is_write_protection(vcpu);
> +
> +	role.ext.cr4_smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP) != 0;
> +	role.ext.cr4_smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP) != 0;
> +	role.ext.cr4_pse = !!is_pse(vcpu);
> +	role.ext.cr4_pke = kvm_read_cr4_bits(vcpu, X86_CR4_PKE) != 0;
> +
> +	role.ext.valid = 1;
> +
> +	return role;
> +}
> +
>  static union kvm_mmu_page_role
>  kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu)
>  {
> @@ -4814,16 +4832,18 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
>  }
>  EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
>  
> -static union kvm_mmu_page_role
> -kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
> +static union kvm_mmu_role
> +kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
> +				   bool execonly)
>  {
> -	union kvm_mmu_page_role role = vcpu->arch.mmu->mmu_role.base;
> +	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu);

kvm_calc_mmu_role_common() doesn't preserve the current mmu_role.base
and doesn't capture all of the base fields.  Won't @role be incorrect
for the base fields that aren't set below, e.g. cr4_pae, smep_andnot_wp,
smap_andnot_wp, etc...?

>  
> -	role.level = PT64_ROOT_4LEVEL;
> -	role.direct = false;
> -	role.ad_disabled = !accessed_dirty;
> -	role.guest_mode = true;
> -	role.access = ACC_ALL;
> +	role.base.level = PT64_ROOT_4LEVEL;
> +	role.base.direct = false;
> +	role.base.ad_disabled = !accessed_dirty;
> +	role.base.guest_mode = true;
> +
> +	role.ext.execonly = execonly;
>  
>  	return role;
>  }
> @@ -4832,10 +4852,16 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>  			     bool accessed_dirty, gpa_t new_eptp)
>  {
>  	struct kvm_mmu *context = vcpu->arch.mmu;
> -	union kvm_mmu_page_role root_page_role =
> -		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty);
> +	union kvm_mmu_role new_role =
> +		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
> +						   execonly);
> +
> +	__kvm_mmu_new_cr3(vcpu, new_eptp, new_role.base, false);
> +
> +	new_role.base.word &= mmu_base_role_mask.word;
> +	if (new_role.as_u64 == context->mmu_role.as_u64)
> +		return;
>  
> -	__kvm_mmu_new_cr3(vcpu, new_eptp, root_page_role, false);
>  	context->shadow_root_level = PT64_ROOT_4LEVEL;
>  
>  	context->nx = true;
> @@ -4847,8 +4873,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>  	context->update_pte = ept_update_pte;
>  	context->root_level = PT64_ROOT_4LEVEL;
>  	context->direct_map = false;
> -	context->mmu_role.base.word =
> -		root_page_role.word & mmu_base_role_mask.word;
> +	context->mmu_role.as_u64 = new_role.as_u64;
>  	context->get_pdptr = kvm_pdptr_read;
>  
>  	update_permission_bitmask(vcpu, context, true);
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 8/9] x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed
  2018-09-25 17:58 ` [PATCH v2 8/9] x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed Vitaly Kuznetsov
@ 2018-09-26 15:15   ` Sean Christopherson
  0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2018-09-26 15:15 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

On Tue, Sep 25, 2018 at 07:58:43PM +0200, Vitaly Kuznetsov wrote:
> MMU reconfiguration in init_kvm_tdp_mmu()/kvm_init_shadow_mmu() can be
> avoided if the source data used to configure it didn't change; enhance
> kvm_mmu_scache with the required fields and consolidate common code in

Nit: kvm_mmu_scache no longer exists, probably say "source cache" instead?

> kvm_calc_mmu_role_common().
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  2 +
>  arch/x86/kvm/mmu.c              | 86 +++++++++++++++++++--------------
>  2 files changed, 52 insertions(+), 36 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 87ddaa1579e7..609811066580 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -284,10 +284,12 @@ union kvm_mmu_extended_role {
>  	struct {
>  		unsigned int valid:1;
>  		unsigned int execonly:1;
> +		unsigned int cr0_pg:1;
>  		unsigned int cr4_pse:1;
>  		unsigned int cr4_pke:1;
>  		unsigned int cr4_smap:1;
>  		unsigned int cr4_smep:1;
> +		unsigned int cr4_la57:1;
>  	};
>  };
>  
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index d8611914544a..f676c14d5c62 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4709,34 +4709,40 @@ static void paging32E_init_context(struct kvm_vcpu *vcpu,
>  }
>  
>  static union kvm_mmu_role
> -kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu)
> +kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu, bool mmu_init)
>  {
>  	union kvm_mmu_role role = {0};
>  
>  	role.base.access = ACC_ALL;
> +	role.base.nxe = !!is_nx(vcpu);
> +	role.base.cr4_pae = !!is_pae(vcpu);
>  	role.base.cr0_wp = is_write_protection(vcpu);
> +	role.base.smm = is_smm(vcpu);
> +	role.base.guest_mode = is_guest_mode(vcpu);
>  
> +	if (!mmu_init)
> +		return role;

Can you add a comment explaining why we don't fill in role.ext when
!mmu_init?  Or maybe just rename mmu_init to something like base_only?
From what I can tell it's false when we only care about the base role,
which just happens to be only in the non-init flow.
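
I.e. something along these lines (illustration only, base_only is just a
suggested name):

	static union kvm_mmu_role
	kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu, bool base_only)
	{
		union kvm_mmu_role role = {0};

		/* ... base bits ... */

		/*
		 * The ext bits are only consumed when the MMU is being
		 * (re)configured; kvm_mmu_calc_root_page_role() only needs
		 * the base role, so let it skip them.
		 */
		if (base_only)
			return role;

		/* ... ext bits ... */

		role.ext.valid = 1;

		return role;
	}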

> +
> +	role.ext.cr0_pg = !!is_paging(vcpu);
>  	role.ext.cr4_smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP) != 0;
>  	role.ext.cr4_smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP) != 0;
>  	role.ext.cr4_pse = !!is_pse(vcpu);
>  	role.ext.cr4_pke = kvm_read_cr4_bits(vcpu, X86_CR4_PKE) != 0;
> +	role.ext.cr4_la57 = kvm_read_cr4_bits(vcpu, X86_CR4_LA57) != 0;
>  
>  	role.ext.valid = 1;
>  
>  	return role;
>  }
>  
> -static union kvm_mmu_page_role
> -kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu)
> +static union kvm_mmu_role
> +kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, bool mmu_init)
>  {
> -	union kvm_mmu_page_role role = {0};
> +	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, mmu_init);
>  
> -	role.guest_mode = is_guest_mode(vcpu);
> -	role.smm = is_smm(vcpu);
> -	role.ad_disabled = (shadow_accessed_mask == 0);
> -	role.level = kvm_x86_ops->get_tdp_level(vcpu);
> -	role.direct = true;
> -	role.access = ACC_ALL;
> +	role.base.ad_disabled = (shadow_accessed_mask == 0);
> +	role.base.level = kvm_x86_ops->get_tdp_level(vcpu);
> +	role.base.direct = true;
>  
>  	return role;
>  }
> @@ -4744,9 +4750,14 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu)
>  static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_mmu *context = vcpu->arch.mmu;
> +	union kvm_mmu_role new_role =
> +		kvm_calc_tdp_mmu_root_page_role(vcpu, true);
>  
> -	context->mmu_role.base.word = mmu_base_role_mask.word &
> -				  kvm_calc_tdp_mmu_root_page_role(vcpu).word;
> +	new_role.base.word &= mmu_base_role_mask.word;
> +	if (new_role.as_u64 == context->mmu_role.as_u64)
> +		return;
> +
> +	context->mmu_role.as_u64 = new_role.as_u64;
>  	context->page_fault = tdp_page_fault;
>  	context->sync_page = nonpaging_sync_page;
>  	context->invlpg = nonpaging_invlpg;
> @@ -4786,29 +4797,23 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
>  	reset_tdp_shadow_zero_bits_mask(vcpu, context);
>  }
>  
> -static union kvm_mmu_page_role
> -kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu)
> -{
> -	union kvm_mmu_page_role role = {0};
> -	bool smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
> -	bool smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
> -
> -	role.nxe = is_nx(vcpu);
> -	role.cr4_pae = !!is_pae(vcpu);
> -	role.cr0_wp  = is_write_protection(vcpu);
> -	role.smep_andnot_wp = smep && !is_write_protection(vcpu);
> -	role.smap_andnot_wp = smap && !is_write_protection(vcpu);
> -	role.guest_mode = is_guest_mode(vcpu);
> -	role.smm = is_smm(vcpu);
> -	role.direct = !is_paging(vcpu);
> -	role.access = ACC_ALL;
> +static union kvm_mmu_role
> +kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu, bool mmu_init)
> +{
> +	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, mmu_init);
> +
> +	role.base.smep_andnot_wp = role.ext.cr4_smep &&
> +		!is_write_protection(vcpu);
> +	role.base.smap_andnot_wp = role.ext.cr4_smap &&
> +		!is_write_protection(vcpu);
> +	role.base.direct = !is_paging(vcpu);
>  
>  	if (!is_long_mode(vcpu))
> -		role.level = PT32E_ROOT_LEVEL;
> +		role.base.level = PT32E_ROOT_LEVEL;
>  	else if (is_la57_mode(vcpu))
> -		role.level = PT64_ROOT_5LEVEL;
> +		role.base.level = PT64_ROOT_5LEVEL;
>  	else
> -		role.level = PT64_ROOT_4LEVEL;
> +		role.base.level = PT64_ROOT_4LEVEL;
>  
>  	return role;
>  }
> @@ -4816,6 +4821,12 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu)
>  void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_mmu *context = vcpu->arch.mmu;
> +	union kvm_mmu_role new_role =
> +		kvm_calc_shadow_mmu_root_page_role(vcpu, true);
> +
> +	new_role.base.word &= mmu_base_role_mask.word;
> +	if (new_role.as_u64 == context->mmu_role.as_u64)
> +		return;
>  
>  	if (!is_paging(vcpu))
>  		nonpaging_init_context(vcpu, context);
> @@ -4826,8 +4837,7 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
>  	else
>  		paging32_init_context(vcpu, context);
>  
> -	context->mmu_role.base.word = mmu_base_role_mask.word &
> -				  kvm_calc_shadow_mmu_root_page_role(vcpu).word;
> +	context->mmu_role.as_u64 = new_role.as_u64;
>  	reset_shadow_zero_bits_mask(vcpu, context);
>  }
>  EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
> @@ -4836,7 +4846,7 @@ static union kvm_mmu_role
>  kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
>  				   bool execonly)
>  {
> -	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu);
> +	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu, true);
>  
>  	role.base.level = PT64_ROOT_4LEVEL;
>  	role.base.direct = false;
> @@ -4961,10 +4971,14 @@ EXPORT_SYMBOL_GPL(kvm_init_mmu);
>  static union kvm_mmu_page_role
>  kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu)
>  {
> +	union kvm_mmu_role role;
> +
>  	if (tdp_enabled)
> -		return kvm_calc_tdp_mmu_root_page_role(vcpu);
> +		role = kvm_calc_tdp_mmu_root_page_role(vcpu, false);
>  	else
> -		return kvm_calc_shadow_mmu_root_page_role(vcpu);
> +		role = kvm_calc_shadow_mmu_root_page_role(vcpu, false);
> +
> +	return role.base;
>  }
>  
>  void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 9/9] x86/kvm/mmu: check if MMU reconfiguration is needed in init_kvm_nested_mmu()
  2018-09-25 17:58 ` [PATCH v2 9/9] x86/kvm/mmu: check if MMU reconfiguration is needed in init_kvm_nested_mmu() Vitaly Kuznetsov
@ 2018-09-26 15:17   ` Sean Christopherson
  0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2018-09-26 15:17 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

On Tue, Sep 25, 2018 at 07:58:44PM +0200, Vitaly Kuznetsov wrote:
> We don't use the root page role for nested_mmu; however, optimizing out
> re-initialization when nothing has changed is still valuable as this
> is done on every nested vmentry.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 2/9] x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu()
  2018-09-26 14:11   ` Sean Christopherson
@ 2018-09-26 17:16     ` Vitaly Kuznetsov
  0 siblings, 0 replies; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-26 17:16 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Tue, Sep 25, 2018 at 07:58:37PM +0200, Vitaly Kuznetsov wrote:
>> kvm_init_shadow_ept_mmu() doesn't set get_pdptr() hook and is this
>> not a problem just because MMU context is already initialized and this
>> hook points to kvm_pdptr_read(). As we're intended to use a dedicated
>> MMU for shadow EPT MMU set this hook explicitly.
>> 
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>  arch/x86/kvm/mmu.c | 2 ++
>>  1 file changed, 2 insertions(+)
>> 
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index ca79ec0d8060..2bdc63f67886 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -4846,6 +4846,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>>  	context->root_level = PT64_ROOT_4LEVEL;
>>  	context->direct_map = false;
>>  	context->base_role.word = root_page_role.word & mmu_base_role_mask.word;
>> +	context->get_pdptr = kvm_pdptr_read;
>
> Would it make sense to set this in nested_ept_init_mmu_context()
> along with set_cr3, get_cr3 and inject_page_fault?  The other MMU
> flows set them as a package deal.

Well, kvm_init_shadow_ept_mmu() has only one call site, and reading the
code I was under the impression that we set set_cr3/get_cr3/inject_page_fault
in vmx.c just to avoid passing all these vmx-specific functions as
pointers to kvm_init_shadow_ept_mmu(). With get_pdptr() I was thinking
"oh, great, this is not vmx-specific so we can set it in mmu.c" - but I
see your point and I'm ready to budge :-)

>
> Either way...
>
> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

Thanks!

>
>> +
>>  	update_permission_bitmask(vcpu, context, true);
>>  	update_pkru_bitmask(vcpu, context, true);
>>  	update_last_nonleaf_level(vcpu, context);

-- 
Vitaly

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 4/9] x86/kvm/mmu: introduce guest_mmu
  2018-09-26 14:02   ` Sean Christopherson
@ 2018-09-26 17:18     ` Vitaly Kuznetsov
  0 siblings, 0 replies; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-26 17:18 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Tue, Sep 25, 2018 at 07:58:39PM +0200, Vitaly Kuznetsov wrote:
>> When EPT is used for nested guest we need to re-init MMU as shadow
>> EPT MMU (nested_ept_init_mmu_context() does that). When we return back
>> from L2 to L1 kvm_mmu_reset_context() in nested_vmx_load_cr3() resets
>> MMU back to normal TDP mode. Add a special 'guest_mmu' so we can use
>> separate root caches; the improved hit rate is not very important for
>> single vCPU performance, but it avoids contention on the mmu_lock for
>> many vCPUs.
>> 
>> On the nested CPUID benchmark, with 16 vCPUs, an L2->L1->L2 vmexit
>> goes from 42k to 26k cycles.
>> 
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>> Changes since v1:
>> - drop now unneded local vmx variable in vmx_free_vcpu_nested
>>   [Sean Christopherson]
>> ---
>>  arch/x86/include/asm/kvm_host.h |  3 +++
>>  arch/x86/kvm/mmu.c              | 15 +++++++++++----
>>  arch/x86/kvm/vmx.c              | 27 ++++++++++++++++++---------
>>  3 files changed, 32 insertions(+), 13 deletions(-)
>
> ...
>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 2d55adab52de..93ff08136fc1 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -8468,8 +8468,10 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
>>   * Free whatever needs to be freed from vmx->nested when L1 goes down, or
>>   * just stops using VMX.
>>   */
>> -static void free_nested(struct vcpu_vmx *vmx)
>> +static void free_nested(struct kvm_vcpu *vcpu)
>>  {
>> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
>> +
>>  	if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon)
>>  		return;
>>  
>> @@ -8502,6 +8504,8 @@ static void free_nested(struct vcpu_vmx *vmx)
>>  		vmx->nested.pi_desc = NULL;
>>  	}
>>  
>> +	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
>> +
>>  	free_loaded_vmcs(&vmx->nested.vmcs02);
>>  }
>>  
>> @@ -8510,7 +8514,7 @@ static int handle_vmoff(struct kvm_vcpu *vcpu)
>>  {
>>  	if (!nested_vmx_check_permission(vcpu))
>>  		return 1;
>> -	free_nested(to_vmx(vcpu));
>> +	free_nested(vcpu);
>>  	nested_vmx_succeed(vcpu);
>>  	return kvm_skip_emulated_instruction(vcpu);
>>  }
>> @@ -8541,6 +8545,8 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
>>  	if (vmptr == vmx->nested.current_vmptr)
>>  		nested_release_vmcs12(vmx);
>>  
>> +	kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu, KVM_MMU_ROOTS_ALL);
>
> Shouldn't we only free guest_mmu if VMCLEAR is targeting
> current_vmptr?

Right you are, this was definitely overlooked; there is no need for
kvm_mmu_free_roots() when we VMCLEAR some other vmptr.

> Assuming that's the case, we could put the call to kvm_mmu_free_roots()
> in nested_release_vmcs12() instead of calling it from handle_vmclear()
> and handle_vmptrld().

Yep, will do in v3.

>
>> +
>>  	kvm_vcpu_write_guest(vcpu,
>>  			vmptr + offsetof(struct vmcs12, launch_state),
>>  			&zero, sizeof(zero));
>> @@ -8924,6 +8930,9 @@ static int handle_vmptrld(struct kvm_vcpu *vcpu)
>>  		}
>>  
>>  		nested_release_vmcs12(vmx);
>> +
>> +		kvm_mmu_free_roots(vcpu, &vcpu->arch.guest_mmu,
>> +				   KVM_MMU_ROOTS_ALL);
>>  		/*
>>  		 * Load VMCS12 from guest memory since it is not already
>>  		 * cached.
>> @@ -10976,12 +10985,10 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
>>   */
>>  static void vmx_free_vcpu_nested(struct kvm_vcpu *vcpu)
>>  {
>> -       struct vcpu_vmx *vmx = to_vmx(vcpu);
>> -
>> -       vcpu_load(vcpu);
>> -       vmx_switch_vmcs(vcpu, &vmx->vmcs01);
>> -       free_nested(vmx);
>> -       vcpu_put(vcpu);
>> +	vcpu_load(vcpu);
>> +	vmx_switch_vmcs(vcpu, &to_vmx(vcpu)->vmcs01);
>> +	free_nested(vcpu);
>> +	vcpu_put(vcpu);
>>  }
>>  
>>  static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
>> @@ -11331,6 +11338,7 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
>>  	if (!valid_ept_address(vcpu, nested_ept_get_cr3(vcpu)))
>>  		return 1;
>>  
>> +	vcpu->arch.mmu = &vcpu->arch.guest_mmu;
>>  	kvm_init_shadow_ept_mmu(vcpu,
>>  			to_vmx(vcpu)->nested.msrs.ept_caps &
>>  			VMX_EPT_EXECUTE_ONLY_BIT,
>> @@ -11346,6 +11354,7 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
>>  
>>  static void nested_ept_uninit_mmu_context(struct kvm_vcpu *vcpu)
>>  {
>> +	vcpu->arch.mmu = &vcpu->arch.root_mmu;
>>  	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
>>  }
>>  
>> @@ -13421,7 +13430,7 @@ static void vmx_leave_nested(struct kvm_vcpu *vcpu)
>>  		to_vmx(vcpu)->nested.nested_run_pending = 0;
>>  		nested_vmx_vmexit(vcpu, -1, 0, 0);
>>  	}
>> -	free_nested(to_vmx(vcpu));
>> +	free_nested(vcpu);
>>  }
>>  
>>  /*
>> -- 
>> 2.17.1
>> 

-- 
Vitaly

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 6/9] x86/kvm/mmu: make space for source data caching in struct kvm_mmu
  2018-09-26 14:40   ` Sean Christopherson
@ 2018-09-26 17:19     ` Vitaly Kuznetsov
  0 siblings, 0 replies; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-26 17:19 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Tue, Sep 25, 2018 at 07:58:41PM +0200, Vitaly Kuznetsov wrote:
>> In preparation to MMU reconfiguration avoidance we need a space to
>> cache source data. As this partially intersects with kvm_mmu_page_role,
>> create 64bit sized union kvm_mmu_role holding both base and extended data.
>> No functional change.
>> 
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>
> One nit below, other than that...
>
> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index e59e5f49c8c2..bb1ef0f68f8e 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -2359,7 +2359,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>>  	int collisions = 0;
>>  	LIST_HEAD(invalid_list);
>>  
>> -	role = vcpu->arch.mmu->base_role;
>> +	role = vcpu->arch.mmu->mmu_role.base;
>>  	role.level = level;
>>  	role.direct = direct;
>>  	if (role.direct)
>> @@ -4407,7 +4407,8 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
>>  void
>>  reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
>>  {
>> -	bool uses_nx = context->nx || context->base_role.smep_andnot_wp;
>> +	bool uses_nx = context->nx ||
>> +		context->mmu_role.base.smep_andnot_wp;
>>  	struct rsvd_bits_validate *shadow_zero_check;
>>  	int i;
>>  
>> @@ -4726,7 +4727,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
>>  {
>>  	struct kvm_mmu *context = vcpu->arch.mmu;
>>  
>> -	context->base_role.word = mmu_base_role_mask.word &
>> +	context->mmu_role.base.word = mmu_base_role_mask.word &
>>  				  kvm_calc_tdp_mmu_root_page_role(vcpu).word;
>>  	context->page_fault = tdp_page_fault;
>>  	context->sync_page = nonpaging_sync_page;
>> @@ -4807,7 +4808,7 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
>>  	else
>>  		paging32_init_context(vcpu, context);
>>  
>> -	context->base_role.word = mmu_base_role_mask.word &
>> +	context->mmu_role.base.word = mmu_base_role_mask.word &
>>  				  kvm_calc_shadow_mmu_root_page_role(vcpu).word;
>>  	reset_shadow_zero_bits_mask(vcpu, context);
>>  }
>> @@ -4816,7 +4817,7 @@ EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
>>  static union kvm_mmu_page_role
>>  kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
>>  {
>> -	union kvm_mmu_page_role role = vcpu->arch.mmu->base_role;
>> +	union kvm_mmu_page_role role = vcpu->arch.mmu->mmu_role.base;
>>  
>>  	role.level = PT64_ROOT_4LEVEL;
>>  	role.direct = false;
>> @@ -4846,7 +4847,8 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>>  	context->update_pte = ept_update_pte;
>>  	context->root_level = PT64_ROOT_4LEVEL;
>>  	context->direct_map = false;
>> -	context->base_role.word = root_page_role.word & mmu_base_role_mask.word;
>> +	context->mmu_role.base.word =
>> +		root_page_role.word & mmu_base_role_mask.word;
>>  	context->get_pdptr = kvm_pdptr_read;
>>  
>>  	update_permission_bitmask(vcpu, context, true);
>> @@ -5161,10 +5163,13 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
>>  
>>  		local_flush = true;
>>  		while (npte--) {
>> +			unsigned int base_role =
>
> Nit: should this be a u32 to match mmu_role.base.word?
>

Of course, will do in v3. Thanks!

>> +				vcpu->arch.mmu->mmu_role.base.word;
>> +
>>  			entry = *spte;
>>  			mmu_page_zap_pte(vcpu->kvm, sp, spte);
>>  			if (gentry &&
>> -			      !((sp->role.word ^ vcpu->arch.mmu->base_role.word)
>> +			      !((sp->role.word ^ base_role)
>>  			      & mmu_base_role_mask.word) && rmap_can_add(vcpu))
>>  				mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
>>  			if (need_remote_flush(entry, *spte))
>> @@ -5861,6 +5866,16 @@ int kvm_mmu_module_init(void)
>>  {
>>  	int ret = -ENOMEM;
>>  
>> +	/*
>> +	 * MMU roles use union aliasing which is, generally speaking, an
>> +	 * undefined behavior. However, we supposedly know how compilers behave
>> +	 * and the current status quo is unlikely to change. Guardians below are
>> +	 * supposed to let us know if the assumption becomes false.
>> +	 */
>> +	BUILD_BUG_ON(sizeof(union kvm_mmu_page_role) != sizeof(u32));
>> +	BUILD_BUG_ON(sizeof(union kvm_mmu_extended_role) != sizeof(u32));
>> +	BUILD_BUG_ON(sizeof(union kvm_mmu_role) != sizeof(u64));
>> +
>>  	kvm_mmu_reset_all_pte_masks();
>>  
>>  	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 93ff08136fc1..c56a80c15c4f 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -9321,7 +9321,7 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
>>  
>>  		kvm_mmu_unload(vcpu);
>>  		mmu->ept_ad = accessed_dirty;
>> -		mmu->base_role.ad_disabled = !accessed_dirty;
>> +		mmu->mmu_role.base.ad_disabled = !accessed_dirty;
>>  		vmcs12->ept_pointer = address;
>>  		/*
>>  		 * TODO: Check what's the correct approach in case
>> -- 
>> 2.17.1
>> 

-- 
Vitaly

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 7/9] x86/kvm/nVMX: introduce source data cache for kvm_init_shadow_ept_mmu()
  2018-09-26 15:06   ` Sean Christopherson
@ 2018-09-26 17:30     ` Vitaly Kuznetsov
  2018-09-27 13:44       ` Vitaly Kuznetsov
  0 siblings, 1 reply; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-26 17:30 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

Sean Christopherson <sean.j.christopherson@intel.com> writes:

> On Tue, Sep 25, 2018 at 07:58:42PM +0200, Vitaly Kuznetsov wrote:
>> MMU re-initialization is expensive, in particular,
>> update_permission_bitmask() and update_pkru_bitmask() are.
>> 
>> Cache the data used to setup shadow EPT MMU and avoid full re-init when
>> it is unchanged.
>> 
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>>  arch/x86/include/asm/kvm_host.h | 14 +++++++++
>>  arch/x86/kvm/mmu.c              | 51 ++++++++++++++++++++++++---------
>>  2 files changed, 52 insertions(+), 13 deletions(-)
>> 
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 1821b0215230..87ddaa1579e7 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -274,7 +274,21 @@ union kvm_mmu_page_role {
>>  };
>>  
>>  union kvm_mmu_extended_role {
>> +/*
>> + * This structure complements kvm_mmu_page_role, caching everything
>> + * needed for MMU configuration. If nothing in either structure has
>> + * changed, MMU re-configuration can be skipped. The @valid bit is set
>> + * on first usage so we don't treat an all-zero structure as valid data.
>> + */
>>  	u32 word;
>> +	struct {
>> +		unsigned int valid:1;
>> +		unsigned int execonly:1;
>> +		unsigned int cr4_pse:1;
>> +		unsigned int cr4_pke:1;
>> +		unsigned int cr4_smap:1;
>> +		unsigned int cr4_smep:1;
>> +	};
>>  };
>>  
>>  union kvm_mmu_role {
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index bb1ef0f68f8e..d8611914544a 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -4708,6 +4708,24 @@ static void paging32E_init_context(struct kvm_vcpu *vcpu,
>>  	paging64_init_context_common(vcpu, context, PT32E_ROOT_LEVEL);
>>  }
>>  
>> +static union kvm_mmu_role
>> +kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu)
>> +{
>> +	union kvm_mmu_role role = {0};
>> +
>> +	role.base.access = ACC_ALL;
>> +	role.base.cr0_wp = is_write_protection(vcpu);
>> +
>> +	role.ext.cr4_smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP) != 0;
>> +	role.ext.cr4_smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP) != 0;
>> +	role.ext.cr4_pse = !!is_pse(vcpu);
>> +	role.ext.cr4_pke = kvm_read_cr4_bits(vcpu, X86_CR4_PKE) != 0;
>> +
>> +	role.ext.valid = 1;
>> +
>> +	return role;
>> +}
>> +
>>  static union kvm_mmu_page_role
>>  kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu)
>>  {
>> @@ -4814,16 +4832,18 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
>>  }
>>  EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
>>  
>> -static union kvm_mmu_page_role
>> -kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
>> +static union kvm_mmu_role
>> +kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
>> +				   bool execonly)
>>  {
>> -	union kvm_mmu_page_role role = vcpu->arch.mmu->mmu_role.base;
>> +	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu);
>
> kvm_calc_mmu_role_common() doesn't preserve the current mmu_role.base
> and doesn't capture all base fields.  Won't @role be incorrect for
> base fields that aren't set below, e.g. cr4_pae, smep_andnot_wp,
> smap_andnot_wp, etc...?

Oh, I see what you mean. Actually, PATCH8 of this series adds some of
this stuff, but smep_andnot_wp and smap_andnot_wp are still not set. I
think I'll enhance kvm_calc_mmu_role_common() and move some of that
from PATCH8 to this patch.
(The fact that @role is currently not fully re-initialized here is far
from obvious, so I would definitely prefer to explicitly initialize
everything over inheriting something from a previously initialized
role.)
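
Something along these lines, maybe (a rough, completely untested sketch
just to show the direction; the exact set of base bits to capture in
the common helper still needs double-checking against PATCH8):

static union kvm_mmu_role
kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu)
{
	union kvm_mmu_role role = {0};
	bool smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP) != 0;
	bool smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP) != 0;

	role.base.access = ACC_ALL;
	role.base.nxe = !!is_nx(vcpu);
	role.base.cr4_pae = !!is_pae(vcpu);
	role.base.cr0_wp = is_write_protection(vcpu);
	/* The base bits you mention that are currently left untouched: */
	role.base.smep_andnot_wp = smep && !is_write_protection(vcpu);
	role.base.smap_andnot_wp = smap && !is_write_protection(vcpu);

	role.ext.cr4_smep = smep;
	role.ext.cr4_smap = smap;
	role.ext.cr4_pse = !!is_pse(vcpu);
	role.ext.cr4_pke = kvm_read_cr4_bits(vcpu, X86_CR4_PKE) != 0;
	role.ext.valid = 1;

	return role;
}

(smep_andnot_wp/smap_andnot_wp only make sense for shadow paging, so
whether they really belong in the common helper or in a shadow-specific
one is exactly the part I still need to think about.)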

Thanks!

>
>>  
>> -	role.level = PT64_ROOT_4LEVEL;
>> -	role.direct = false;
>> -	role.ad_disabled = !accessed_dirty;
>> -	role.guest_mode = true;
>> -	role.access = ACC_ALL;
>> +	role.base.level = PT64_ROOT_4LEVEL;
>> +	role.base.direct = false;
>> +	role.base.ad_disabled = !accessed_dirty;
>> +	role.base.guest_mode = true;
>> +
>> +	role.ext.execonly = execonly;
>>  
>>  	return role;
>>  }
>> @@ -4832,10 +4852,16 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>>  			     bool accessed_dirty, gpa_t new_eptp)
>>  {
>>  	struct kvm_mmu *context = vcpu->arch.mmu;
>> -	union kvm_mmu_page_role root_page_role =
>> -		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty);
>> +	union kvm_mmu_role new_role =
>> +		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
>> +						   execonly);
>> +
>> +	__kvm_mmu_new_cr3(vcpu, new_eptp, new_role.base, false);
>> +
>> +	new_role.base.word &= mmu_base_role_mask.word;
>> +	if (new_role.as_u64 == context->mmu_role.as_u64)
>> +		return;
>>  
>> -	__kvm_mmu_new_cr3(vcpu, new_eptp, root_page_role, false);
>>  	context->shadow_root_level = PT64_ROOT_4LEVEL;
>>  
>>  	context->nx = true;
>> @@ -4847,8 +4873,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>>  	context->update_pte = ept_update_pte;
>>  	context->root_level = PT64_ROOT_4LEVEL;
>>  	context->direct_map = false;
>> -	context->mmu_role.base.word =
>> -		root_page_role.word & mmu_base_role_mask.word;
>> +	context->mmu_role.as_u64 = new_role.as_u64;
>>  	context->get_pdptr = kvm_pdptr_read;
>>  
>>  	update_permission_bitmask(vcpu, context, true);
>> -- 
>> 2.17.1
>> 

-- 
Vitaly

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 7/9] x86/kvm/nVMX: introduce source data cache for kvm_init_shadow_ept_mmu()
  2018-09-26 17:30     ` Vitaly Kuznetsov
@ 2018-09-27 13:44       ` Vitaly Kuznetsov
  0 siblings, 0 replies; 24+ messages in thread
From: Vitaly Kuznetsov @ 2018-09-27 13:44 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, Paolo Bonzini, Radim Krčmář,
	Jim Mattson, Liran Alon, linux-kernel

Vitaly Kuznetsov <vkuznets@redhat.com> writes:

> Sean Christopherson <sean.j.christopherson@intel.com> writes:
>
>> On Tue, Sep 25, 2018 at 07:58:42PM +0200, Vitaly Kuznetsov wrote:
...
>>>  
>>> -static union kvm_mmu_page_role
>>> -kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty)
>>> +static union kvm_mmu_role
>>> +kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
>>> +				   bool execonly)
>>>  {
>>> -	union kvm_mmu_page_role role = vcpu->arch.mmu->mmu_role.base;
>>> +	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu);
>>
>> kvm_calc_mmu_role_common() doesn't preserve the current mmu_role.base
>> and doesn't capture all base fields.  Won't @role be incorrect for
>> base fields that aren't set below, e.g. cr4_pae, smep_andnot_wp,
>> smap_andnot_wp, etc...?
>
> Oh, I see what you mean. Actually, PATCH8 of this series adds some of
> this stuff, but smep_andnot_wp and smap_andnot_wp are still not set. I
> think I'll enhance kvm_calc_mmu_role_common() and move some of that
> from PATCH8 to this patch.
> (The fact that @role is currently not fully re-initialized here is far
> from obvious, so I would definitely prefer to explicitly initialize
> everything over inheriting something from a previously initialized
> role.)

On the other hand, if we want to perform full re-initialization, we'll
have to distinguish between shadow and TDP here, and that isn't what
we want. I'm about to change my mind, as it seems that inheriting the
base role here is not the worst idea after all...
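
I.e. keep kvm_calc_mmu_role_common() for the ext part and do something
like this in kvm_calc_shadow_ept_root_page_role() (again, just a
sketch):

	union kvm_mmu_role role = kvm_calc_mmu_role_common(vcpu);

	/*
	 * Inherit the base bits from the currently active MMU instead
	 * of recomputing them, so the shadow vs. TDP specifics stay
	 * where they are today...
	 */
	role.base = vcpu->arch.mmu->mmu_role.base;

	/* ...and override only what shadow EPT actually changes. */
	role.base.level = PT64_ROOT_4LEVEL;
	role.base.direct = false;
	role.base.ad_disabled = !accessed_dirty;
	role.base.guest_mode = true;

	role.ext.execonly = execonly;

	return role;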

-- 
Vitaly

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2018-09-27 13:44 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-25 17:58 [PATCH v2 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and L2 Vitaly Kuznetsov
2018-09-25 17:58 ` [PATCH v2 1/9] x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU Vitaly Kuznetsov
2018-09-26 14:17   ` Sean Christopherson
2018-09-25 17:58 ` [PATCH v2 2/9] x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu() Vitaly Kuznetsov
2018-09-26 14:11   ` Sean Christopherson
2018-09-26 17:16     ` Vitaly Kuznetsov
2018-09-25 17:58 ` [PATCH v2 3/9] x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots() Vitaly Kuznetsov
2018-09-26 14:18   ` Sean Christopherson
2018-09-25 17:58 ` [PATCH v2 4/9] x86/kvm/mmu: introduce guest_mmu Vitaly Kuznetsov
2018-09-26 14:02   ` Sean Christopherson
2018-09-26 17:18     ` Vitaly Kuznetsov
2018-09-25 17:58 ` [PATCH v2 5/9] x86/kvm/mmu: get rid of redundant kvm_mmu_setup() Vitaly Kuznetsov
2018-09-26 14:15   ` Sean Christopherson
2018-09-25 17:58 ` [PATCH v2 6/9] x86/kvm/mmu: make space for source data caching in struct kvm_mmu Vitaly Kuznetsov
2018-09-26 14:40   ` Sean Christopherson
2018-09-26 17:19     ` Vitaly Kuznetsov
2018-09-25 17:58 ` [PATCH v2 7/9] x86/kvm/nVMX: introduce source data cache for kvm_init_shadow_ept_mmu() Vitaly Kuznetsov
2018-09-26 15:06   ` Sean Christopherson
2018-09-26 17:30     ` Vitaly Kuznetsov
2018-09-27 13:44       ` Vitaly Kuznetsov
2018-09-25 17:58 ` [PATCH v2 8/9] x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed Vitaly Kuznetsov
2018-09-26 15:15   ` Sean Christopherson
2018-09-25 17:58 ` [PATCH v2 9/9] x86/kvm/mmu: check if MMU reconfiguration is needed in init_kvm_nested_mmu() Vitaly Kuznetsov
2018-09-26 15:17   ` Sean Christopherson
