* [PATCH v2 0/4] KVM: x86/mmu: Cleanups for eager page splitting
@ 2022-06-24 21:30 Sean Christopherson
  2022-06-24 21:30 ` [PATCH v2 1/4] KVM: x86/mmu: Add optimized helper to retrieve an SPTE's index Sean Christopherson
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Sean Christopherson @ 2022-06-24 21:30 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, David Matlack

A handful of minor eager page splitting cleanups that were noted in code
review but didn't make it into the committed code.

Patch 1 goes beyond the eager page splitting series, but it's still very
much relevant.

The last patch in particular is a bit more urgent than I first realized.
I had forgotten that pte_list_desc is now 128 bytes, and I also had a
brain fart and thought it was just allocating pointers, i.e. 8 bytes.
In other words, I was thinking the 513-object buffer was "only" wasting
~8KiB per VM, whereas it actually costs ~64KiB per VM.
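
To make the arithmetic explicit, a quick sketch (sizes as stated above;
the names here are made up purely for illustration):

#define SPLIT_CACHE_EXTRA_OBJS	513	/* buffering beyond the minimum */
#define PTE_LIST_DESC_SIZE	128	/* sizeof(struct pte_list_desc) */

/* 513 * 128 = 65,664 bytes, i.e. ~64KiB of dead memory per VM. */
unsigned int wasted_bytes = SPLIT_CACHE_EXTRA_OBJS * PTE_LIST_DESC_SIZE;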

v2:
 - Add helper to compute SPTE's index into its parent. [David].
 - Collect reviews [David].
 - Tweak the comment in the last patch after more analysis.

v1:
 - https://lore.kernel.org/all/20220624171808.2845941-1-seanjc@google.com

Sean Christopherson (4):
  KVM: x86/mmu: Add optimized helper to retrieve an SPTE's index
  KVM: x86/mmu: Expand quadrant comment for PG_LEVEL_4K shadow pages
  KVM: x86/mmu: Use "unsigned int", not "u32", for SPTEs' @access info
  KVM: x86/mmu: Buffer nested MMU split_desc_cache only by default
    capacity

 arch/x86/kvm/mmu/mmu.c         | 74 +++++++++++++++++++++-------------
 arch/x86/kvm/mmu/paging_tmpl.h |  4 +-
 arch/x86/kvm/mmu/spte.h        |  6 +++
 3 files changed, 53 insertions(+), 31 deletions(-)


base-commit: 4b88b1a518b337de1252b8180519ca4c00015c9e
-- 
2.37.0.rc0.161.g10f37bed90-goog


* [PATCH v2 1/4] KVM: x86/mmu: Add optimized helper to retrieve an SPTE's index
  2022-06-24 21:30 [PATCH v2 0/4] KVM: x86/mmu: Cleanups for eager page splitting Sean Christopherson
@ 2022-06-24 21:30 ` Sean Christopherson
  2022-06-24 23:41   ` David Matlack
  2022-06-24 21:30 ` [PATCH v2 2/4] KVM: x86/mmu: Expand quadrant comment for PG_LEVEL_4K shadow pages Sean Christopherson
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Sean Christopherson @ 2022-06-24 21:30 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, David Matlack

Add spte_index() to dedup all the code that calculates a SPTE's index
into its parent's page table and/or spt array.  Opportunistically tweak
the calculation to avoid pointer arithmetic, which is subtle (subtract in
8-byte chunks) and less performant (requires the compiler to generate the
subtraction).
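
As a standalone sketch of the old and new forms (simplified, with
uint64_t standing in for u64, and SPTE_ENT_PER_PAGE assumed to be 512,
i.e. a 4KiB page of 8-byte SPTEs):

#include <stdint.h>

#define SPTE_ENT_PER_PAGE 512

/* Old form: pointer subtraction, i.e. a byte-offset divide by 8 that
 * the compiler must emit. */
static inline int index_by_ptr_diff(uint64_t *sptep, uint64_t *spt)
{
	return sptep - spt;
}

/* New form: page tables are page-aligned, so the index is already
 * encoded in the low bits of the SPTE's own address. */
static inline int spte_index(uint64_t *sptep)
{
	return ((unsigned long)sptep / sizeof(*sptep)) & (SPTE_ENT_PER_PAGE - 1);
}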

Suggested-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c         | 22 ++++++++++------------
 arch/x86/kvm/mmu/paging_tmpl.h |  4 ++--
 arch/x86/kvm/mmu/spte.h        |  6 ++++++
 3 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bd74a287b54a..b04e9ce2469a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1036,7 +1036,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 	struct kvm_rmap_head *rmap_head;
 
 	sp = sptep_to_sp(spte);
-	gfn = kvm_mmu_page_get_gfn(sp, spte - sp->spt);
+	gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte));
 
 	/*
 	 * Unlike rmap_add, rmap_remove does not run in the context of a vCPU
@@ -1587,7 +1587,7 @@ static void __rmap_add(struct kvm *kvm,
 	int rmap_count;
 
 	sp = sptep_to_sp(spte);
-	kvm_mmu_page_set_translation(sp, spte - sp->spt, gfn, access);
+	kvm_mmu_page_set_translation(sp, spte_index(spte), gfn, access);
 	kvm_update_page_stats(kvm, sp->role.level, 1);
 
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
@@ -1714,11 +1714,9 @@ static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 static void mark_unsync(u64 *spte)
 {
 	struct kvm_mmu_page *sp;
-	unsigned int index;
 
 	sp = sptep_to_sp(spte);
-	index = spte - sp->spt;
-	if (__test_and_set_bit(index, sp->unsync_child_bitmap))
+	if (__test_and_set_bit(spte_index(spte), sp->unsync_child_bitmap))
 		return;
 	if (sp->unsync_children++)
 		return;
@@ -2201,7 +2199,7 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsig
 	 */
 	if (role.has_4_byte_gpte) {
 		WARN_ON_ONCE(role.level != PG_LEVEL_4K);
-		role.quadrant = (sptep - parent_sp->spt) % 2;
+		role.quadrant = spte_index(sptep) & 1;
 	}
 
 	return role;
@@ -2826,7 +2824,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 		rmap_add(vcpu, slot, sptep, gfn, pte_access);
 	} else {
 		/* Already rmapped but the pte_access bits may have changed. */
-		kvm_mmu_page_set_access(sp, sptep - sp->spt, pte_access);
+		kvm_mmu_page_set_access(sp, spte_index(sptep), pte_access);
 	}
 
 	return ret;
@@ -2842,7 +2840,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	int i, ret;
 	gfn_t gfn;
 
-	gfn = kvm_mmu_page_get_gfn(sp, start - sp->spt);
+	gfn = kvm_mmu_page_get_gfn(sp, spte_index(start));
 	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, access & ACC_WRITE_MASK);
 	if (!slot)
 		return -1;
@@ -2868,7 +2866,7 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
 
 	WARN_ON(!sp->role.direct);
 
-	i = (sptep - sp->spt) & ~(PTE_PREFETCH_NUM - 1);
+	i = spte_index(sptep) & ~(PTE_PREFETCH_NUM - 1);
 	spte = sp->spt + i;
 
 	for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
@@ -6146,8 +6144,8 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
 	unsigned int access;
 	gfn_t gfn;
 
-	gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt);
-	access = kvm_mmu_page_get_access(huge_sp, huge_sptep - huge_sp->spt);
+	gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
+	access = kvm_mmu_page_get_access(huge_sp, spte_index(huge_sptep));
 
 	/*
 	 * Note, huge page splitting always uses direct shadow pages, regardless
@@ -6221,7 +6219,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
 	u64 spte;
 
 	/* Grab information for the tracepoint before dropping the MMU lock. */
-	gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt);
+	gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
 	level = huge_sp->role.level;
 	spte = *huge_sptep;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 2448fa8d8438..d06dee7d38a8 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -595,7 +595,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 	if (sp->role.direct)
 		return __direct_pte_prefetch(vcpu, sp, sptep);
 
-	i = (sptep - sp->spt) & ~(PTE_PREFETCH_NUM - 1);
+	i = spte_index(sptep) & ~(PTE_PREFETCH_NUM - 1);
 	spte = sp->spt + i;
 
 	for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
@@ -933,7 +933,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 				break;
 
 			pte_gpa = FNAME(get_level1_sp_gpa)(sp);
-			pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
+			pte_gpa += spte_index(sptep) * sizeof(pt_element_t);
 
 			mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
 			if (is_shadow_present_pte(old_spte))
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index b5c855f5514f..ba3dccb202bc 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -190,6 +190,12 @@ static inline bool is_removed_spte(u64 spte)
 	return spte == REMOVED_SPTE;
 }
 
+/* Get an SPTE's index into its parent's page table (and the spt array). */
+static inline int spte_index(u64 *sptep)
+{
+	return ((unsigned long)sptep / sizeof(*sptep)) & (SPTE_ENT_PER_PAGE - 1);
+}
+
 /*
  * In some cases, we need to preserve the GFN of a non-present or reserved
  * SPTE when we usurp the upper five bits of the physical address space to
-- 
2.37.0.rc0.161.g10f37bed90-goog


* [PATCH v2 2/4] KVM: x86/mmu: Expand quadrant comment for PG_LEVEL_4K shadow pages
  2022-06-24 21:30 [PATCH v2 0/4] KVM: x86/mmu: Cleanups for eager page splitting Sean Christopherson
  2022-06-24 21:30 ` [PATCH v2 1/4] KVM: x86/mmu: Add optimized helper to retrieve an SPTE's index Sean Christopherson
@ 2022-06-24 21:30 ` Sean Christopherson
  2022-06-24 23:43   ` David Matlack
  2022-06-24 21:30 ` [PATCH v2 3/4] KVM: x86/mmu: Use "unsigned int", not "u32", for SPTEs' @access info Sean Christopherson
  2022-06-24 21:30 ` [PATCH v2 4/4] KVM: x86/mmu: Buffer nested MMU split_desc_cache only by default capacity Sean Christopherson
  3 siblings, 1 reply; 7+ messages in thread
From: Sean Christopherson @ 2022-06-24 21:30 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, David Matlack

Tweak the comment above the computation of the quadrant for PG_LEVEL_4K
shadow pages to explicitly call out how and why KVM uses role.quadrant to
consume gPTE bits.

Opportunistically wrap an unnecessarily long line.

No functional change intended.
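
For illustration, a hypothetical sketch of the bit bookkeeping the
expanded comment describes (32-bit guest VA; these helpers are made up
and are not part of the patch):

#include <stdint.h>

/* A 4-byte guest PTE consumes VA bits 21:12, but an 8-byte shadow PTE
 * consumes only bits 20:12, so VA bit 21 must instead select *which*
 * child shadow page is used, i.e. it becomes role.quadrant. */
static inline int quadrant_from_guest_va(uint32_t va)
{
	return (va >> 21) & 1;
}

/* Equivalently: the parent 8-byte PDE's index covers VA bits 29:21, so
 * the quadrant is the least significant bit of that index. */
static inline int quadrant_from_pde_index(int pde_index)
{
	return pde_index & 1;
}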

Link: https://lore.kernel.org/all/YqvWvBv27fYzOFdE@google.com
Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b04e9ce2469a..83ca71361acd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2166,7 +2166,8 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 	return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
 }
 
-static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsigned int access)
+static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
+						  unsigned int access)
 {
 	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
 	union kvm_mmu_page_role role;
@@ -2193,9 +2194,15 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsig
 	 * uses 2 PAE page tables, each mapping a 2MiB region. For these,
 	 * @role.quadrant encodes which half of the region they map.
 	 *
-	 * Note, the 4 PAE page directories are pre-allocated and the quadrant
-	 * assigned in mmu_alloc_root(). So only page tables need to be handled
-	 * here.
+	 * Concretely, a 4-byte PDE consumes bits 31:22, while an 8-byte PDE
+	 * consumes bits 29:21.  To consume bits 31:30, KVM uses 4 shadow
+	 * PDPTEs; those 4 PAE page directories are pre-allocated and their
+	 * quadrant is assigned in mmu_alloc_root().  A 4-byte PTE consumes
+	 * bits 21:12, while an 8-byte PTE consumes bits 20:12.  To consume
+	 * bit 21 in the PTE (the child here), KVM propagates that bit to the
+	 * quadrant, i.e. sets quadrant to '0' or '1'.  The parent 8-byte PDE
+	 * covers bit 21 (see above), thus the quadrant is calculated from the
+	 * _least_ significant bit of the PDE index.
 	 */
 	if (role.has_4_byte_gpte) {
 		WARN_ON_ONCE(role.level != PG_LEVEL_4K);
-- 
2.37.0.rc0.161.g10f37bed90-goog


* [PATCH v2 3/4] KVM: x86/mmu: Use "unsigned int", not "u32", for SPTEs' @access info
  2022-06-24 21:30 [PATCH v2 0/4] KVM: x86/mmu: Cleanups for eager page splitting Sean Christopherson
  2022-06-24 21:30 ` [PATCH v2 1/4] KVM: x86/mmu: Add optimized helper to retrieve an SPTE's index Sean Christopherson
  2022-06-24 21:30 ` [PATCH v2 2/4] KVM: x86/mmu: Expand quadrant comment for PG_LEVEL_4K shadow pages Sean Christopherson
@ 2022-06-24 21:30 ` Sean Christopherson
  2022-06-24 21:30 ` [PATCH v2 4/4] KVM: x86/mmu: Buffer nested MMU split_desc_cache only by default capacity Sean Christopherson
  3 siblings, 0 replies; 7+ messages in thread
From: Sean Christopherson @ 2022-06-24 21:30 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, David Matlack

Use an "unsigned int" for @access parameters instead of a "u32", mostly
to be consistent throughout KVM, but also because "u32" is misleading.
@access can actually squeeze into a u8, i.e. doesn't need 32 bits, but is
passed as an "unsigned int" because sp->role.access is an unsigned int.

No functional change intended.
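
For context, a simplified, hypothetical reduction of the role bitfield
(the real union kvm_mmu_page_role has many more fields):

#include <stdint.h>

union page_role_sketch {
	uint32_t word;
	struct {
		unsigned int access:3;	/* fits in 3 bits, let alone a u8 */
		unsigned int level:4;
	};
};

/* role.access is declared "unsigned int", so plumbing @access around as
 * "unsigned int" keeps the parameters consistent with their source. */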

Link: https://lore.kernel.org/all/YqyZxEfxXLsHGoZ%2F@google.com
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 83ca71361acd..eae5c801e442 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -717,7 +717,8 @@ static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
 	return sp->role.access;
 }
 
-static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index, gfn_t gfn, u32 access)
+static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
+					 gfn_t gfn, unsigned int access)
 {
 	if (sp_has_gptes(sp)) {
 		sp->shadowed_translation[index] = (gfn << PAGE_SHIFT) | access;
@@ -735,7 +736,8 @@ static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index, gfn
 	          sp->gfn, kvm_mmu_page_get_gfn(sp, index), gfn);
 }
 
-static void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index, u32 access)
+static void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index,
+				    unsigned int access)
 {
 	gfn_t gfn = kvm_mmu_page_get_gfn(sp, index);
 
@@ -1580,7 +1582,7 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 static void __rmap_add(struct kvm *kvm,
 		       struct kvm_mmu_memory_cache *cache,
 		       const struct kvm_memory_slot *slot,
-		       u64 *spte, gfn_t gfn, u32 access)
+		       u64 *spte, gfn_t gfn, unsigned int access)
 {
 	struct kvm_mmu_page *sp;
 	struct kvm_rmap_head *rmap_head;
@@ -1601,7 +1603,7 @@ static void __rmap_add(struct kvm *kvm,
 }
 
 static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn, u32 access)
+		     u64 *spte, gfn_t gfn, unsigned int access)
 {
 	struct kvm_mmu_memory_cache *cache = &vcpu->arch.mmu_pte_list_desc_cache;
 
-- 
2.37.0.rc0.161.g10f37bed90-goog


* [PATCH v2 4/4] KVM: x86/mmu: Buffer nested MMU split_desc_cache only by default capacity
  2022-06-24 21:30 [PATCH v2 0/4] KVM: x86/mmu: Cleanups for eager page splitting Sean Christopherson
                   ` (2 preceding siblings ...)
  2022-06-24 21:30 ` [PATCH v2 3/4] KVM: x86/mmu: Use "unsigned int", not "u32", for SPTEs' @access info Sean Christopherson
@ 2022-06-24 21:30 ` Sean Christopherson
  3 siblings, 0 replies; 7+ messages in thread
From: Sean Christopherson @ 2022-06-24 21:30 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, David Matlack

Buffer split_desc_cache, the cache used to allocate rmap list entries,
only by the default cache capacity (currently 40), not by doubling the
minimum (513).  Aliasing L2 GPAs to L1 GPAs is uncommon, thus eager page
splitting is unlikely to need 500+ entries.  And because each object
is (currently) a non-trivial 128 bytes (see struct pte_list_desc), those
extra ~500 entries mean KVM is in all likelihood wasting ~64KiB of memory.
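
A sketch of the sizing math, using the constants that appear in the diff
below (513 and 40 respectively, per the changelog):

#define SPLIT_DESC_CACHE_MIN_NR_OBJECTS		513
#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE	40

/* Old: capacity = 2 * 513 = 1026, i.e. 513 objects (~64KiB) of buffer
 * beyond the minimum.  New: capacity = 513 + 40 = 553, i.e. 40 objects
 * (5KiB) of buffer beyond the minimum. */
static const int old_capacity = 2 * SPLIT_DESC_CACHE_MIN_NR_OBJECTS;
static const int new_capacity = SPLIT_DESC_CACHE_MIN_NR_OBJECTS +
				KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;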

Link: https://lore.kernel.org/all/YrTDcrsn0%2F+alpzf@google.com
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 27 ++++++++++++++++++---------
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index eae5c801e442..52664c3caaab 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6123,17 +6123,26 @@ static bool need_topup_split_caches_or_resched(struct kvm *kvm)
 
 static int topup_split_caches(struct kvm *kvm)
 {
-	int r;
-
-	lockdep_assert_held(&kvm->slots_lock);
-
 	/*
-	 * Setting capacity == min would cause KVM to drop mmu_lock even if
-	 * just one object was consumed from the cache, so make capacity
-	 * larger than min.
+	 * Allocating rmap list entries when splitting huge pages for nested
+	 * MMUs is uncommon as KVM needs to use a list if and only if there is
+	 * more than one rmap entry for a gfn, i.e. requires an L1 gfn to be
+	 * aliased by multiple L2 gfns and/or from multiple nested roots with
+	 * different roles.  Aliasing gfns when using TDP is atypical for VMMs;
+	 * a few gfns are often aliased during boot, e.g. when remapping BIOS,
+	 * but aliasing rarely occurs post-boot or for many gfns.  If there is
+	 * only one rmap entry, rmap->val points directly at that one entry and
+	 * doesn't need to allocate a list.  Buffer the cache by the default
+	 * capacity so that KVM doesn't have to drop mmu_lock to topup if KVM
+	 * encounters an aliased gfn or two.
 	 */
-	r = __kvm_mmu_topup_memory_cache(&kvm->arch.split_desc_cache,
-					 2 * SPLIT_DESC_CACHE_MIN_NR_OBJECTS,
+	const int capacity = SPLIT_DESC_CACHE_MIN_NR_OBJECTS +
+			     KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
+	int r;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	r = __kvm_mmu_topup_memory_cache(&kvm->arch.split_desc_cache, capacity,
 					 SPLIT_DESC_CACHE_MIN_NR_OBJECTS);
 	if (r)
 		return r;
-- 
2.37.0.rc0.161.g10f37bed90-goog


* Re: [PATCH v2 1/4] KVM: x86/mmu: Add optimized helper to retrieve an SPTE's index
  2022-06-24 21:30 ` [PATCH v2 1/4] KVM: x86/mmu: Add optimized helper to retrieve an SPTE's index Sean Christopherson
@ 2022-06-24 23:41   ` David Matlack
  0 siblings, 0 replies; 7+ messages in thread
From: David Matlack @ 2022-06-24 23:41 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Fri, Jun 24, 2022 at 09:30:36PM +0000, Sean Christopherson wrote:
> Add spte_index() to dedup all the code that calculates a SPTE's index
> into its parent's page table and/or spt array.  Opportunistically tweak
> the calculation to avoid pointer arithmetic, which is subtle (subtract in
> 8-byte chunks) and less performant (requires the compiler to generate the
> subtraction).
> 
> Suggested-by: David Matlack <dmatlack@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Thanks for the cleanup. I did a git-grep to look for any other users to
convert to spte_index() and didn't find any.

Reviewed-by: David Matlack <dmatlack@google.com>

* Re: [PATCH v2 2/4] KVM: x86/mmu: Expand quadrant comment for PG_LEVEL_4K shadow pages
  2022-06-24 21:30 ` [PATCH v2 2/4] KVM: x86/mmu: Expand quadrant comment for PG_LEVEL_4K shadow pages Sean Christopherson
@ 2022-06-24 23:43   ` David Matlack
  0 siblings, 0 replies; 7+ messages in thread
From: David Matlack @ 2022-06-24 23:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel

On Fri, Jun 24, 2022 at 09:30:37PM +0000, Sean Christopherson wrote:
> Tweak the comment above the computation of the quadrant for PG_LEVEL_4K
> shadow pages to explicitly call out how and why KVM uses role.quadrant to
> consume gPTE bits.
> 
> Opportunistically wrap an unnecessarily long line.
> 
> No functional change intended.
> 
> Link: https://lore.kernel.org/all/YqvWvBv27fYzOFdE@google.com
> Cc: David Matlack <dmatlack@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: David Matlack <dmatlack@google.com>
