* [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk()
@ 2022-06-05  6:43 Lai Jiangshan
  2022-06-05  6:43 ` [PATCH 01/12] KVM: X86/MMU: Warn if sp->unsync_children > 0 in link_shadow_page() Lai Jiangshan
                   ` (11 more replies)
  0 siblings, 12 replies; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

mmu_pages_clear_parents() is not really required (see patch 4).

mmu_unsync_walk() can be simplified once that function is removed.
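
For reference, the unsync state that this series manipulates lives in
struct kvm_mmu_page.  Roughly (a trimmed sketch for orientation, not the
full definition from arch/x86/kvm/mmu/mmu_internal.h):

	struct kvm_mmu_page {
		/* ... many fields omitted ... */
		bool unsync;			/* this page's SPTEs may be stale */
		unsigned int unsync_children;	/* number of bits set below */
		/* one bit per SPTE that may point to an unsync child */
		DECLARE_BITMAP(unsync_child_bitmap, 512);
	};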

Lai Jiangshan (12):
  KVM: X86/MMU: Warn if sp->unsync_children > 0 in link_shadow_page()
  KVM: X86/MMU: Rename kvm_unlink_unsync_page() to
    kvm_mmu_page_clear_unsync()
  KVM: X86/MMU: Split a part of kvm_unsync_page() as
    kvm_mmu_page_mark_unsync()
  KVM: X86/MMU: Remove mmu_pages_clear_parents()
  KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk()
  KVM: X86/MMU: Rename mmu_unsync_walk() to mmu_unsync_walk_and_clear()
  KVM: X86/MMU: Remove the useless struct mmu_page_path
  KVM: X86/MMU: Remove the useless idx from struct kvm_mmu_pages
  KVM: X86/MMU: Unfold struct mmu_page_and_offset in struct
    kvm_mmu_pages
  KVM: X86/MMU: Don't add parents to struct kvm_mmu_pages
  KVM: X86/MMU: Remove mmu_pages_first() and mmu_pages_next()
  KVM: X86/MMU: Rename struct kvm_mmu_pages to struct kvm_mmu_page_vec

 arch/x86/kvm/mmu/mmu.c | 173 ++++++++++++-----------------------------
 1 file changed, 51 insertions(+), 122 deletions(-)

-- 
2.19.1.6.gb485710b



* [PATCH 01/12] KVM: X86/MMU: Warn if sp->unsync_children > 0 in link_shadow_page()
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-06-05  6:43 ` [PATCH 02/12] KVM: X86/MMU: Rename kvm_unlink_unsync_page() to kvm_mmu_page_clear_unsync() Lai Jiangshan
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

The check for sp->unsync_children in link_shadow_page() can be removed
since FNAME(fetch) ensures it is zero.  (@sp is direct when
link_shadow_page() is called from other places, which also means
sp->unsync_children is zero.)

link_shadow_page() is not a fast path, so keep the check but warn if it
ever fires.
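
For context, the guarantee referenced above comes from FNAME(fetch),
which syncs any unsync children before linking; paraphrased below (a
sketch, not a verbatim quote of paging_tmpl.h):

	/* in FNAME(fetch), before link_shadow_page(): */
	if (sp->unsync_children &&
	    mmu_sync_children(vcpu, sp, false))
		return RET_PF_RETRY;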

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 086f32dffdbe..f61416818116 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2197,7 +2197,13 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
 
 	mmu_page_add_parent_pte(vcpu, sp, sptep);
 
-	if (sp->unsync_children || sp->unsync)
+	/*
+	 * Propagate the unsync bit when sp->unsync.
+	 *
+	 * The caller ensures the sp is synced when it has unsync children,
+	 * so sp->unsync_children must be zero.  See FNAME(fetch).
+	 */
+	if (sp->unsync || WARN_ON_ONCE(sp->unsync_children))
 		mark_unsync(sptep);
 }
 
-- 
2.19.1.6.gb485710b



* [PATCH 02/12] KVM: X86/MMU: Rename kvm_unlink_unsync_page() to kvm_mmu_page_clear_unsync()
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
  2022-06-05  6:43 ` [PATCH 01/12] KVM: X86/MMU: Warn if sp->unsync_children > 0 in link_shadow_page() Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-14 22:10   ` Sean Christopherson
  2022-06-05  6:43 ` [PATCH 03/12] KVM: X86/MMU: Split a part of kvm_unsync_page() as kvm_mmu_page_mark_unsync() Lai Jiangshan
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

"Unlink" is ambiguous, the function does not disconnect any link.

Use "clear" instead which is an antonym of "mark" in the name of the
function mark_unsync() or kvm_mmu_mark_parents_unsync().

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f61416818116..c20981dfc4fd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1825,7 +1825,7 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp,
 	return __mmu_unsync_walk(sp, pvec);
 }
 
-static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+static void kvm_mmu_page_clear_unsync(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	WARN_ON(!sp->unsync);
 	trace_kvm_mmu_sync_page(sp);
@@ -1987,7 +1987,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 		}
 
 		for_each_sp(pages, sp, parents, i) {
-			kvm_unlink_unsync_page(vcpu->kvm, sp);
+			kvm_mmu_page_clear_unsync(vcpu->kvm, sp);
 			flush |= kvm_sync_page(vcpu, sp, &invalid_list) > 0;
 			mmu_pages_clear_parents(&parents);
 		}
@@ -2326,7 +2326,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 		unaccount_shadowed(kvm, sp);
 
 	if (sp->unsync)
-		kvm_unlink_unsync_page(kvm, sp);
+		kvm_mmu_page_clear_unsync(kvm, sp);
 	if (!sp->root_count) {
 		/* Count self */
 		(*nr_zapped)++;
-- 
2.19.1.6.gb485710b



* [PATCH 03/12] KVM: X86/MMU: Split a part of kvm_unsync_page() as kvm_mmu_page_mark_unsync()
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
  2022-06-05  6:43 ` [PATCH 01/12] KVM: X86/MMU: Warn if sp->unsync_children > 0 in link_shadow_page() Lai Jiangshan
  2022-06-05  6:43 ` [PATCH 02/12] KVM: X86/MMU: Rename kvm_unlink_unsync_page() to kvm_mmu_page_clear_unsync() Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-14 22:19   ` Sean Christopherson
  2022-06-05  6:43 ` [PATCH 04/12] KVM: X86/MMU: Remove mmu_pages_clear_parents() Lai Jiangshan
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Make it the opposite of kvm_mmu_page_clear_unsync().

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c20981dfc4fd..cc0207e26f6e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2529,12 +2529,16 @@ static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
 	return r;
 }
 
-static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+static void kvm_mmu_page_mark_unsync(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	trace_kvm_mmu_unsync_page(sp);
 	++kvm->stat.mmu_unsync;
 	sp->unsync = 1;
+}
 
+static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+	kvm_mmu_page_mark_unsync(kvm, sp);
 	kvm_mmu_mark_parents_unsync(sp);
 }
 
-- 
2.19.1.6.gb485710b



* [PATCH 04/12] KVM: X86/MMU: Remove mmu_pages_clear_parents()
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
                   ` (2 preceding siblings ...)
  2022-06-05  6:43 ` [PATCH 03/12] KVM: X86/MMU: Split a part of kvm_unsync_page() as kvm_mmu_page_mark_unsync() Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-14 23:15   ` Sean Christopherson
  2022-06-05  6:43 ` [PATCH 05/12] KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk() Lai Jiangshan
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

mmu_unsync_walk() is designed to work on a pagetable whose shadow
pages have unsync child bits set even when the pagetable contains no
unsync shadow pages.

This can happen when the unsync shadow pages of a pagetable are also
reachable from other pagetables and have already been synced or zapped
while those other pagetables were synced or zapped.

So mmu_pages_clear_parents() is not required even when the callers of
mmu_unsync_walk() zap or sync the pagetable.

So remove mmu_pages_clear_parents(); the child bits can then be cleared
in one go by the next call of mmu_unsync_walk().

Removing mmu_pages_clear_parents() allows for further simplifying
mmu_unsync_walk(), including removing struct mmu_page_path, since
the function is its only user.
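
To illustrate why lazy clearing is safe, here is a self-contained toy
model (plain userspace C with invented names, not kernel code): a stale
bit for an already-synced child is simply cleared by the next walk,
which is exactly the property the changelog relies on.

	#include <stdbool.h>
	#include <stdio.h>

	#define NR_CHILDREN 4

	struct toy_page {
		bool unsync;				/* leaf is out of sync */
		unsigned long unsync_child_bitmap;	/* possibly-stale hints */
		struct toy_page *child[NR_CHILDREN];
	};

	/* Returns the number of unsync leaves found; clears stale bits. */
	static int toy_unsync_walk(struct toy_page *sp)
	{
		int i, found = 0;

		for (i = 0; i < NR_CHILDREN; i++) {
			struct toy_page *child = sp->child[i];

			if (!(sp->unsync_child_bitmap & (1UL << i)))
				continue;
			if (child && (child->unsync || toy_unsync_walk(child)))
				found++;	/* still unsync below */
			else
				sp->unsync_child_bitmap &= ~(1UL << i);
		}
		return found;
	}

	int main(void)
	{
		/* child was synced via another pagetable; root's bit is stale */
		struct toy_page leaf = { .unsync = false };
		struct toy_page root = { .unsync_child_bitmap = 1,
					 .child = { &leaf } };

		printf("unsync leaves: %d\n", toy_unsync_walk(&root)); /* 0 */
		printf("stale bitmap:  %lu\n", root.unsync_child_bitmap); /* 0 */
		return 0;
	}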

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 19 -------------------
 1 file changed, 19 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cc0207e26f6e..f35fd5c59c38 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1948,23 +1948,6 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec,
 	return mmu_pages_next(pvec, parents, 0);
 }
 
-static void mmu_pages_clear_parents(struct mmu_page_path *parents)
-{
-	struct kvm_mmu_page *sp;
-	unsigned int level = 0;
-
-	do {
-		unsigned int idx = parents->idx[level];
-		sp = parents->parent[level];
-		if (!sp)
-			return;
-
-		WARN_ON(idx == INVALID_INDEX);
-		clear_unsync_child_bit(sp, idx);
-		level++;
-	} while (!sp->unsync_children);
-}
-
 static int mmu_sync_children(struct kvm_vcpu *vcpu,
 			     struct kvm_mmu_page *parent, bool can_yield)
 {
@@ -1989,7 +1972,6 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 		for_each_sp(pages, sp, parents, i) {
 			kvm_mmu_page_clear_unsync(vcpu->kvm, sp);
 			flush |= kvm_sync_page(vcpu, sp, &invalid_list) > 0;
-			mmu_pages_clear_parents(&parents);
 		}
 		if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
 			kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, flush);
@@ -2298,7 +2280,6 @@ static int mmu_zap_unsync_children(struct kvm *kvm,
 
 		for_each_sp(pages, sp, parents, i) {
 			kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
-			mmu_pages_clear_parents(&parents);
 			zapped++;
 		}
 	}
-- 
2.19.1.6.gb485710b



* [PATCH 05/12] KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk()
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
                   ` (3 preceding siblings ...)
  2022-06-05  6:43 ` [PATCH 04/12] KVM: X86/MMU: Remove mmu_pages_clear_parents() Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-19 19:52   ` Sean Christopherson
  2022-06-05  6:43 ` [PATCH 06/12] KVM: X86/MMU: Rename mmu_unsync_walk() to mmu_unsync_walk_and_clear() Lai Jiangshan
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

mmu_unsync_walk() and __mmu_unsync_walk() require the caller to clear
unsync for the shadow pages in the resulting pvec by syncing them or
zapping them.

All callers do so.

Otherwise mmu_unsync_walk() and __mmu_unsync_walk() can't work because
they always walk from the beginning.

It is possible to make mmu_unsync_walk() and __mmu_unsync_walk() list
unsync shadow pages in the resulting pvec without requiring them to be
synced or zapped later.  It would require changing mmu_unsync_walk()
and __mmu_unsync_walk() to walk from the last visited position, derived
from the resulting pvec of the previous call of mmu_unsync_walk().

That would complicate the walk, and no caller requires the possible new
behavior.

It is better to keep the original behavior.

Since the shadow pages in the resulting pvec will be synced or zapped,
clear_unsync_child_bit() for the parents will be called anyway later.

Call clear_unsync_child_bit() earlier and directly in __mmu_unsync_walk()
to make the code more efficient: the memory of the shadow pages is hot
in the CPU cache, and there is no need to visit the shadow pages again
later.
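
Schematically, the caller contract looks like this (condensed from the
mmu_sync_children() loop visible elsewhere in this series):

	while (mmu_unsync_walk(parent, &pages)) {
		for_each_sp(pages, sp, parents, i) {
			kvm_mmu_page_clear_unsync(vcpu->kvm, sp);
			flush |= kvm_sync_page(vcpu, sp, &invalid_list) > 0;
		}
		/*
		 * Every page in @pages is now sync, so the next walk is
		 * guaranteed to make forward progress.
		 */
	}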

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f35fd5c59c38..2446ede0b7b9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1794,19 +1794,23 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 				return -ENOSPC;
 
 			ret = __mmu_unsync_walk(child, pvec);
-			if (!ret) {
-				clear_unsync_child_bit(sp, i);
-				continue;
-			} else if (ret > 0) {
-				nr_unsync_leaf += ret;
-			} else
+			if (ret < 0)
 				return ret;
-		} else if (child->unsync) {
+			nr_unsync_leaf += ret;
+		}
+
+		/*
+		 * Clear unsync bit for @child directly if @child is fully
+		 * walked and all the unsync shadow pages descended from
+		 * @child (including itself) are added into @pvec, the caller
+		 * must sync or zap all the unsync shadow pages in @pvec.
+		 */
+		clear_unsync_child_bit(sp, i);
+		if (child->unsync) {
 			nr_unsync_leaf++;
 			if (mmu_pages_add(pvec, child, i))
 				return -ENOSPC;
-		} else
-			clear_unsync_child_bit(sp, i);
+		}
 	}
 
 	return nr_unsync_leaf;
-- 
2.19.1.6.gb485710b



* [PATCH 06/12] KVM: X86/MMU: Rename mmu_unsync_walk() to mmu_unsync_walk_and_clear()
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
                   ` (4 preceding siblings ...)
  2022-06-05  6:43 ` [PATCH 05/12] KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk() Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-19 20:07   ` Sean Christopherson
  2022-06-05  6:43 ` [PATCH 07/12] KVM: X86/MMU: Remove the useless struct mmu_page_path Lai Jiangshan
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

mmu_unsync_walk() and __mmu_unsync_walk() require the caller to clear
unsync for the shadow pages in the resulting pvec by syncing them or
zapping them.

All callers do so.

Otherwise mmu_unsync_walk() and __mmu_unsync_walk() can't work because
they always walk from the beginning.

Now that mmu_unsync_walk() and __mmu_unsync_walk() clear the unsync
bits directly themselves, rename them accordingly.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2446ede0b7b9..a56d328365e4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1773,7 +1773,7 @@ static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
 	__clear_bit(idx, sp->unsync_child_bitmap);
 }
 
-static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
+static int __mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 			   struct kvm_mmu_pages *pvec)
 {
 	int i, ret, nr_unsync_leaf = 0;
@@ -1793,7 +1793,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 			if (mmu_pages_add(pvec, child, i))
 				return -ENOSPC;
 
-			ret = __mmu_unsync_walk(child, pvec);
+			ret = __mmu_unsync_walk_and_clear(child, pvec);
 			if (ret < 0)
 				return ret;
 			nr_unsync_leaf += ret;
@@ -1818,7 +1818,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 
 #define INVALID_INDEX (-1)
 
-static int mmu_unsync_walk(struct kvm_mmu_page *sp,
+static int mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 			   struct kvm_mmu_pages *pvec)
 {
 	pvec->nr = 0;
@@ -1826,7 +1826,7 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp,
 		return 0;
 
 	mmu_pages_add(pvec, sp, INVALID_INDEX);
-	return __mmu_unsync_walk(sp, pvec);
+	return __mmu_unsync_walk_and_clear(sp, pvec);
 }
 
 static void kvm_mmu_page_clear_unsync(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -1962,7 +1962,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 	LIST_HEAD(invalid_list);
 	bool flush = false;
 
-	while (mmu_unsync_walk(parent, &pages)) {
+	while (mmu_unsync_walk_and_clear(parent, &pages)) {
 		bool protected = false;
 
 		for_each_sp(pages, sp, parents, i)
@@ -2279,7 +2279,7 @@ static int mmu_zap_unsync_children(struct kvm *kvm,
 	if (parent->role.level == PG_LEVEL_4K)
 		return 0;
 
-	while (mmu_unsync_walk(parent, &pages)) {
+	while (mmu_unsync_walk_and_clear(parent, &pages)) {
 		struct kvm_mmu_page *sp;
 
 		for_each_sp(pages, sp, parents, i) {
-- 
2.19.1.6.gb485710b



* [PATCH 07/12] KVM: X86/MMU: Remove the useless struct mmu_page_path
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
                   ` (5 preceding siblings ...)
  2022-06-05  6:43 ` [PATCH 06/12] KVM: X86/MMU: Rename mmu_unsync_walk() to mmu_unsync_walk_and_clear() Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-19 20:15   ` Sean Christopherson
  2022-06-05  6:43 ` [PATCH 08/12] KVM: X86/MMU: Remove the useless idx from struct kvm_mmu_pages Lai Jiangshan
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

struct mmu_page_path is set and updated but no longer used now that
mmu_pages_clear_parents() has been removed.

Remove it.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 37 +++++++++----------------------------
 1 file changed, 9 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a56d328365e4..65a2f4a2ce25 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1897,39 +1897,28 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	       unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
 }
 
-struct mmu_page_path {
-	struct kvm_mmu_page *parent[PT64_ROOT_MAX_LEVEL];
-	unsigned int idx[PT64_ROOT_MAX_LEVEL];
-};
-
-#define for_each_sp(pvec, sp, parents, i)			\
-		for (i = mmu_pages_first(&pvec, &parents);	\
+#define for_each_sp(pvec, sp, i)					\
+		for (i = mmu_pages_first(&pvec);			\
 			i < pvec.nr && ({ sp = pvec.page[i].sp; 1;});	\
-			i = mmu_pages_next(&pvec, &parents, i))
+			i = mmu_pages_next(&pvec, i))
 
-static int mmu_pages_next(struct kvm_mmu_pages *pvec,
-			  struct mmu_page_path *parents,
-			  int i)
+static int mmu_pages_next(struct kvm_mmu_pages *pvec, int i)
 {
 	int n;
 
 	for (n = i+1; n < pvec->nr; n++) {
 		struct kvm_mmu_page *sp = pvec->page[n].sp;
-		unsigned idx = pvec->page[n].idx;
 		int level = sp->role.level;
 
-		parents->idx[level-1] = idx;
 		if (level == PG_LEVEL_4K)
 			break;
 
-		parents->parent[level-2] = sp;
 	}
 
 	return n;
 }
 
-static int mmu_pages_first(struct kvm_mmu_pages *pvec,
-			   struct mmu_page_path *parents)
+static int mmu_pages_first(struct kvm_mmu_pages *pvec)
 {
 	struct kvm_mmu_page *sp;
 	int level;
@@ -1943,13 +1932,7 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec,
 	level = sp->role.level;
 	WARN_ON(level == PG_LEVEL_4K);
 
-	parents->parent[level-2] = sp;
-
-	/* Also set up a sentinel.  Further entries in pvec are all
-	 * children of sp, so this element is never overwritten.
-	 */
-	parents->parent[level-1] = NULL;
-	return mmu_pages_next(pvec, parents, 0);
+	return mmu_pages_next(pvec, 0);
 }
 
 static int mmu_sync_children(struct kvm_vcpu *vcpu,
@@ -1957,7 +1940,6 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 {
 	int i;
 	struct kvm_mmu_page *sp;
-	struct mmu_page_path parents;
 	struct kvm_mmu_pages pages;
 	LIST_HEAD(invalid_list);
 	bool flush = false;
@@ -1965,7 +1947,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 	while (mmu_unsync_walk_and_clear(parent, &pages)) {
 		bool protected = false;
 
-		for_each_sp(pages, sp, parents, i)
+		for_each_sp(pages, sp, i)
 			protected |= kvm_vcpu_write_protect_gfn(vcpu, sp->gfn);
 
 		if (protected) {
@@ -1973,7 +1955,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 			flush = false;
 		}
 
-		for_each_sp(pages, sp, parents, i) {
+		for_each_sp(pages, sp, i) {
 			kvm_mmu_page_clear_unsync(vcpu->kvm, sp);
 			flush |= kvm_sync_page(vcpu, sp, &invalid_list) > 0;
 		}
@@ -2273,7 +2255,6 @@ static int mmu_zap_unsync_children(struct kvm *kvm,
 				   struct list_head *invalid_list)
 {
 	int i, zapped = 0;
-	struct mmu_page_path parents;
 	struct kvm_mmu_pages pages;
 
 	if (parent->role.level == PG_LEVEL_4K)
@@ -2282,7 +2263,7 @@ static int mmu_zap_unsync_children(struct kvm *kvm,
 	while (mmu_unsync_walk_and_clear(parent, &pages)) {
 		struct kvm_mmu_page *sp;
 
-		for_each_sp(pages, sp, parents, i) {
+		for_each_sp(pages, sp, i) {
 			kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
 			zapped++;
 		}
-- 
2.19.1.6.gb485710b



* [PATCH 08/12] KVM: X86/MMU: Remove the useless idx from struct kvm_mmu_pages
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
                   ` (6 preceding siblings ...)
  2022-06-05  6:43 ` [PATCH 07/12] KVM: X86/MMU: Remove the useless struct mmu_page_path Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-19 20:31   ` Sean Christopherson
  2022-06-05  6:43 ` [PATCH 09/12] KVM: X86/MMU: Unfold struct mmu_page_and_offset in " Lai Jiangshan
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

The value is only set but never really used.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 65a2f4a2ce25..dc159db46b34 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1745,13 +1745,11 @@ static int nonpaging_sync_page(struct kvm_vcpu *vcpu,
 struct kvm_mmu_pages {
 	struct mmu_page_and_offset {
 		struct kvm_mmu_page *sp;
-		unsigned int idx;
 	} page[KVM_PAGE_ARRAY_NR];
 	unsigned int nr;
 };
 
-static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp,
-			 int idx)
+static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp)
 {
 	int i;
 
@@ -1761,7 +1759,6 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp,
 				return 0;
 
 	pvec->page[pvec->nr].sp = sp;
-	pvec->page[pvec->nr].idx = idx;
 	pvec->nr++;
 	return (pvec->nr == KVM_PAGE_ARRAY_NR);
 }
@@ -1790,7 +1787,7 @@ static int __mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 		child = to_shadow_page(ent & PT64_BASE_ADDR_MASK);
 
 		if (child->unsync_children) {
-			if (mmu_pages_add(pvec, child, i))
+			if (mmu_pages_add(pvec, child))
 				return -ENOSPC;
 
 			ret = __mmu_unsync_walk_and_clear(child, pvec);
@@ -1808,7 +1805,7 @@ static int __mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 		clear_unsync_child_bit(sp, i);
 		if (child->unsync) {
 			nr_unsync_leaf++;
-			if (mmu_pages_add(pvec, child, i))
+			if (mmu_pages_add(pvec, child))
 				return -ENOSPC;
 		}
 	}
@@ -1816,8 +1813,6 @@ static int __mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 	return nr_unsync_leaf;
 }
 
-#define INVALID_INDEX (-1)
-
 static int mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 			   struct kvm_mmu_pages *pvec)
 {
@@ -1825,7 +1820,7 @@ static int mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 	if (!sp->unsync_children)
 		return 0;
 
-	mmu_pages_add(pvec, sp, INVALID_INDEX);
+	mmu_pages_add(pvec, sp);
 	return __mmu_unsync_walk_and_clear(sp, pvec);
 }
 
@@ -1926,8 +1921,6 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec)
 	if (pvec->nr == 0)
 		return 0;
 
-	WARN_ON(pvec->page[0].idx != INVALID_INDEX);
-
 	sp = pvec->page[0].sp;
 	level = sp->role.level;
 	WARN_ON(level == PG_LEVEL_4K);
-- 
2.19.1.6.gb485710b



* [PATCH 09/12] KVM: X86/MMU: Unfold struct mmu_page_and_offset in struct kvm_mmu_pages
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
                   ` (7 preceding siblings ...)
  2022-06-05  6:43 ` [PATCH 08/12] KVM: X86/MMU: Remove the useless idx from struct kvm_mmu_pages Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-06-05  6:43 ` [PATCH 10/12] KVM: X86/MMU: Don't add parents to " Lai Jiangshan
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

struct kvm_mmu_page *sp is the only field in struct mmu_page_and_offset.

Unfold it.
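
In other words, the change is (shapes taken from the hunk below):

	/* before */
	struct kvm_mmu_pages {
		struct mmu_page_and_offset {
			struct kvm_mmu_page *sp;
		} page[KVM_PAGE_ARRAY_NR];
		unsigned int nr;
	};

	/* after */
	struct kvm_mmu_pages {
		struct kvm_mmu_page *sp[KVM_PAGE_ARRAY_NR];
		unsigned int nr;
	};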

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dc159db46b34..a5563e5ee2e5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1743,9 +1743,7 @@ static int nonpaging_sync_page(struct kvm_vcpu *vcpu,
 #define KVM_PAGE_ARRAY_NR 16
 
 struct kvm_mmu_pages {
-	struct mmu_page_and_offset {
-		struct kvm_mmu_page *sp;
-	} page[KVM_PAGE_ARRAY_NR];
+	struct kvm_mmu_page *sp[KVM_PAGE_ARRAY_NR];
 	unsigned int nr;
 };
 
@@ -1755,10 +1753,10 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp)
 
 	if (sp->unsync)
 		for (i=0; i < pvec->nr; i++)
-			if (pvec->page[i].sp == sp)
+			if (pvec->sp[i] == sp)
 				return 0;
 
-	pvec->page[pvec->nr].sp = sp;
+	pvec->sp[pvec->nr] = sp;
 	pvec->nr++;
 	return (pvec->nr == KVM_PAGE_ARRAY_NR);
 }
@@ -1894,7 +1892,7 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 #define for_each_sp(pvec, sp, i)					\
 		for (i = mmu_pages_first(&pvec);			\
-			i < pvec.nr && ({ sp = pvec.page[i].sp; 1;});	\
+			i < pvec.nr && ({ sp = pvec.sp[i]; 1;});	\
 			i = mmu_pages_next(&pvec, i))
 
 static int mmu_pages_next(struct kvm_mmu_pages *pvec, int i)
@@ -1902,7 +1900,7 @@ static int mmu_pages_next(struct kvm_mmu_pages *pvec, int i)
 	int n;
 
 	for (n = i+1; n < pvec->nr; n++) {
-		struct kvm_mmu_page *sp = pvec->page[n].sp;
+		struct kvm_mmu_page *sp = pvec->sp[n];
 		int level = sp->role.level;
 
 		if (level == PG_LEVEL_4K)
@@ -1921,7 +1919,7 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec)
 	if (pvec->nr == 0)
 		return 0;
 
-	sp = pvec->page[0].sp;
+	sp = pvec->sp[0];
 	level = sp->role.level;
 	WARN_ON(level == PG_LEVEL_4K);
 
-- 
2.19.1.6.gb485710b



* [PATCH 10/12] KVM: X86/MMU: Don't add parents to struct kvm_mmu_pages
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
                   ` (8 preceding siblings ...)
  2022-06-05  6:43 ` [PATCH 09/12] KVM: X86/MMU: Unfold struct mmu_page_and_offset in " Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-19 20:34   ` Sean Christopherson
  2022-06-05  6:43 ` [PATCH 11/12] KVM: X86/MMU: Remove mmu_pages_first() and mmu_pages_next() Lai Jiangshan
  2022-06-05  6:43 ` [PATCH 12/12] KVM: X86/MMU: Rename struct kvm_mmu_pages to struct kvm_mmu_page_vec Lai Jiangshan
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Parents added into the struct kvm_mmu_pages are never used.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 36 +++++-------------------------------
 1 file changed, 5 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a5563e5ee2e5..304a515bd073 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1751,10 +1751,9 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp)
 {
 	int i;
 
-	if (sp->unsync)
-		for (i=0; i < pvec->nr; i++)
-			if (pvec->sp[i] == sp)
-				return 0;
+	for (i=0; i < pvec->nr; i++)
+		if (pvec->sp[i] == sp)
+			return 0;
 
 	pvec->sp[pvec->nr] = sp;
 	pvec->nr++;
@@ -1785,9 +1784,6 @@ static int __mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 		child = to_shadow_page(ent & PT64_BASE_ADDR_MASK);
 
 		if (child->unsync_children) {
-			if (mmu_pages_add(pvec, child))
-				return -ENOSPC;
-
 			ret = __mmu_unsync_walk_and_clear(child, pvec);
 			if (ret < 0)
 				return ret;
@@ -1818,7 +1814,6 @@ static int mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 	if (!sp->unsync_children)
 		return 0;
 
-	mmu_pages_add(pvec, sp);
 	return __mmu_unsync_walk_and_clear(sp, pvec);
 }
 
@@ -1897,33 +1892,12 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 static int mmu_pages_next(struct kvm_mmu_pages *pvec, int i)
 {
-	int n;
-
-	for (n = i+1; n < pvec->nr; n++) {
-		struct kvm_mmu_page *sp = pvec->sp[n];
-		int level = sp->role.level;
-
-		if (level == PG_LEVEL_4K)
-			break;
-
-	}
-
-	return n;
+	return i + 1;
 }
 
 static int mmu_pages_first(struct kvm_mmu_pages *pvec)
 {
-	struct kvm_mmu_page *sp;
-	int level;
-
-	if (pvec->nr == 0)
-		return 0;
-
-	sp = pvec->sp[0];
-	level = sp->role.level;
-	WARN_ON(level == PG_LEVEL_4K);
-
-	return mmu_pages_next(pvec, 0);
+	return 0;
 }
 
 static int mmu_sync_children(struct kvm_vcpu *vcpu,
-- 
2.19.1.6.gb485710b



* [PATCH 11/12] KVM: X86/MMU: Remove mmu_pages_first() and mmu_pages_next()
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
                   ` (9 preceding siblings ...)
  2022-06-05  6:43 ` [PATCH 10/12] KVM: X86/MMU: Don't add parents to " Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-19 20:40   ` Sean Christopherson
  2022-06-05  6:43 ` [PATCH 12/12] KVM: X86/MMU: Rename struct kvm_mmu_pages to struct kvm_mmu_page_vec Lai Jiangshan
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Use i = 0 andd i++ instead.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 304a515bd073..7cfc4bc89f60 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1886,19 +1886,7 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 }
 
 #define for_each_sp(pvec, sp, i)					\
-		for (i = mmu_pages_first(&pvec);			\
-			i < pvec.nr && ({ sp = pvec.sp[i]; 1;});	\
-			i = mmu_pages_next(&pvec, i))
-
-static int mmu_pages_next(struct kvm_mmu_pages *pvec, int i)
-{
-	return i + 1;
-}
-
-static int mmu_pages_first(struct kvm_mmu_pages *pvec)
-{
-	return 0;
-}
+		for (i = 0; i < pvec.nr && ({ sp = pvec.sp[i]; 1;}); i++)
 
 static int mmu_sync_children(struct kvm_vcpu *vcpu,
 			     struct kvm_mmu_page *parent, bool can_yield)
-- 
2.19.1.6.gb485710b



* [PATCH 12/12] KVM: X86/MMU: Rename struct kvm_mmu_pages to struct kvm_mmu_page_vec
  2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simplify mmu_unsync_walk() Lai Jiangshan
                   ` (10 preceding siblings ...)
  2022-06-05  6:43 ` [PATCH 11/12] KVM: X86/MMU: Remove mmu_pages_first() and mmu_pages_next() Lai Jiangshan
@ 2022-06-05  6:43 ` Lai Jiangshan
  2022-07-19 20:45   ` Sean Christopherson
  11 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-06-05  6:43 UTC (permalink / raw)
  To: linux-kernel, kvm, Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

It is implemented as a vector, and variables of this type are named pvec.

Rename it to kvm_mmu_page_vec to better describe it.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/kvm/mmu/mmu.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7cfc4bc89f60..64e0d155068c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1742,12 +1742,12 @@ static int nonpaging_sync_page(struct kvm_vcpu *vcpu,
 
 #define KVM_PAGE_ARRAY_NR 16
 
-struct kvm_mmu_pages {
+struct kvm_mmu_page_vec {
 	struct kvm_mmu_page *sp[KVM_PAGE_ARRAY_NR];
 	unsigned int nr;
 };
 
-static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp)
+static int mmu_pages_add(struct kvm_mmu_page_vec *pvec, struct kvm_mmu_page *sp)
 {
 	int i;
 
@@ -1768,7 +1768,7 @@ static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
 }
 
 static int __mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
-			   struct kvm_mmu_pages *pvec)
+			   struct kvm_mmu_page_vec *pvec)
 {
 	int i, ret, nr_unsync_leaf = 0;
 
@@ -1808,7 +1808,7 @@ static int __mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
 }
 
 static int mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
-			   struct kvm_mmu_pages *pvec)
+			   struct kvm_mmu_page_vec *pvec)
 {
 	pvec->nr = 0;
 	if (!sp->unsync_children)
@@ -1885,7 +1885,7 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	       unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
 }
 
-#define for_each_sp(pvec, sp, i)					\
+#define page_vec_for_each_sp(pvec, sp, i)					\
 		for (i = 0; i < pvec.nr && ({ sp = pvec.sp[i]; 1;}); i++)
 
 static int mmu_sync_children(struct kvm_vcpu *vcpu,
@@ -1893,14 +1893,14 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 {
 	int i;
 	struct kvm_mmu_page *sp;
-	struct kvm_mmu_pages pages;
+	struct kvm_mmu_page_vec pvec;
 	LIST_HEAD(invalid_list);
 	bool flush = false;
 
-	while (mmu_unsync_walk_and_clear(parent, &pages)) {
+	while (mmu_unsync_walk_and_clear(parent, &pvec)) {
 		bool protected = false;
 
-		for_each_sp(pages, sp, i)
+		page_vec_for_each_sp(pvec, sp, i)
 			protected |= kvm_vcpu_write_protect_gfn(vcpu, sp->gfn);
 
 		if (protected) {
@@ -1908,7 +1908,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 			flush = false;
 		}
 
-		for_each_sp(pages, sp, i) {
+		page_vec_for_each_sp(pvec, sp, i) {
 			kvm_mmu_page_clear_unsync(vcpu->kvm, sp);
 			flush |= kvm_sync_page(vcpu, sp, &invalid_list) > 0;
 		}
@@ -2208,15 +2208,15 @@ static int mmu_zap_unsync_children(struct kvm *kvm,
 				   struct list_head *invalid_list)
 {
 	int i, zapped = 0;
-	struct kvm_mmu_pages pages;
+	struct kvm_mmu_page_vec pvec;
 
 	if (parent->role.level == PG_LEVEL_4K)
 		return 0;
 
-	while (mmu_unsync_walk_and_clear(parent, &pages)) {
+	while (mmu_unsync_walk_and_clear(parent, &pvec)) {
 		struct kvm_mmu_page *sp;
 
-		for_each_sp(pages, sp, i) {
+		page_vec_for_each_sp(pvec, sp, i) {
 			kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
 			zapped++;
 		}
-- 
2.19.1.6.gb485710b



* Re: [PATCH 02/12] KVM: X86/MMU: Rename kvm_unlink_unsync_page() to kvm_mmu_page_clear_unsync()
  2022-06-05  6:43 ` [PATCH 02/12] KVM: X86/MMU: Rename kvm_unlink_unsync_page() to kvm_mmu_page_clear_unsync() Lai Jiangshan
@ 2022-07-14 22:10   ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-14 22:10 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> "Unlink" is ambiguous, the function does not disconnect any link.
> 
> Use "clear" instead which is an antonym of "mark" in the name of the
> function mark_unsync() or kvm_mmu_mark_parents_unsync().

Hmm, but "clearing a page" is a common operation.  Might not be proper English,
but my vote is to use "unmark".  KVM already uses link+unlink, account+unaccount,
etc..., so mark+unmark should be intuitive for readers.
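
I.e. the pairing would read something like (hypothetical names, just to
illustrate the suggestion):

	kvm_mmu_page_mark_unsync(kvm, sp);
	...
	kvm_mmu_page_unmark_unsync(kvm, sp);	/* instead of ..._clear_unsync() */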

> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index f61416818116..c20981dfc4fd 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1825,7 +1825,7 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp,
>  	return __mmu_unsync_walk(sp, pvec);
>  }
>  
> -static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> +static void kvm_mmu_page_clear_unsync(struct kvm *kvm, struct kvm_mmu_page *sp)
>  {
>  	WARN_ON(!sp->unsync);
>  	trace_kvm_mmu_sync_page(sp);
> @@ -1987,7 +1987,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
>  		}
>  
>  		for_each_sp(pages, sp, parents, i) {
> -			kvm_unlink_unsync_page(vcpu->kvm, sp);
> +			kvm_mmu_page_clear_unsync(vcpu->kvm, sp);
>  			flush |= kvm_sync_page(vcpu, sp, &invalid_list) > 0;
>  			mmu_pages_clear_parents(&parents);
>  		}
> @@ -2326,7 +2326,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
>  		unaccount_shadowed(kvm, sp);
>  
>  	if (sp->unsync)
> -		kvm_unlink_unsync_page(kvm, sp);
> +		kvm_mmu_page_clear_unsync(kvm, sp);
>  	if (!sp->root_count) {
>  		/* Count self */
>  		(*nr_zapped)++;
> -- 
> 2.19.1.6.gb485710b
> 


* Re: [PATCH 03/12] KVM: X86/MMU: Split a part of kvm_unsync_page() as kvm_mmu_page_mark_unsync()
  2022-06-05  6:43 ` [PATCH 03/12] KVM: X86/MMU: Split a part of kvm_unsync_page() as kvm_mmu_page_mark_unsync() Lai Jiangshan
@ 2022-07-14 22:19   ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-14 22:19 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> Make it the opposite of kvm_mmu_page_clear_unsync().
> 
> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index c20981dfc4fd..cc0207e26f6e 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2529,12 +2529,16 @@ static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
>  	return r;
>  }
>  
> -static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> +static void kvm_mmu_page_mark_unsync(struct kvm *kvm, struct kvm_mmu_page *sp)

The existing code is anything but consistent, but I think I prefer the pattern:

	kvm_mmu_<action>_<target>_<flag>

I.e. kvm_mmu_mark_page_unsync() + kvm_mmu_unmark_page_unsync() to yield:

	kvm_mmu_mark_page_unsync(kvm, sp);
	kvm_mmu_mark_parents_unsync(sp);

so that at least this code will be consistent with itself.

>  {
>  	trace_kvm_mmu_unsync_page(sp);
>  	++kvm->stat.mmu_unsync;
>  	sp->unsync = 1;
> +}
>  
> +static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)

Rather than keep kvm_unsync_page(), what about just open coding the calls in
mmu_try_to_unsync_pages()?  I can't imagine we'll ever have a second caller.

There won't be a direct pair to kvm_sync_page(), but that's not necessarily a bad
thing since they are really direct opposites anyway.
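
A sketch of what the open-coded call site might look like (the
surrounding loop is abbreviated and assumes the mmu_try_to_unsync_pages()
of this era):

	for_each_gfn_indirect_valid_sp(kvm, sp, gfn) {
		/* ... existing checks ... */
		kvm_mmu_page_mark_unsync(kvm, sp);	/* was kvm_unsync_page() */
		kvm_mmu_mark_parents_unsync(sp);
	}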

> +{
> +	kvm_mmu_page_mark_unsync(kvm, sp);
>  	kvm_mmu_mark_parents_unsync(sp);
>  }
>  
> -- 
> 2.19.1.6.gb485710b
> 


* Re: [PATCH 04/12] KVM: X86/MMU: Remove mmu_pages_clear_parents()
  2022-06-05  6:43 ` [PATCH 04/12] KVM: X86/MMU: Remove mmu_pages_clear_parents() Lai Jiangshan
@ 2022-07-14 23:15   ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-14 23:15 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

For the shortlog, I really want to capture the net effect.  It took me a lot of
staring and reading (and hopefully not misreading) to figure out that this is a
glorified nop.

  KVM: x86/mmu: Update unsync children metadata via recursion, not bottom-up walk

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> mmu_unsync_walk() is designed to work on a pagetable whose shadow
> pages have unsync child bits set even when the pagetable contains no
> unsync shadow pages.
> 
> This can happen when the unsync shadow pages of a pagetable are also
> reachable from other pagetables and have already been synced or zapped
> while those other pagetables were synced or zapped.
>
> So mmu_pages_clear_parents() is not required even when the callers of
> mmu_unsync_walk() zap or sync the pagetable.

There's one other critical piece that it took me quite some time to suss out
from the code: the @parent passed to mmu_sync_children() _is_ updated because
mmu_sync_children() loops on mmu_unsync_walk().  It's only the parents of @parent
that are not updated, but they weren't updated anyways because mmu_pages_clear_parents()
doesn't operate on the parents of @parent.

> So remove mmu_pages_clear_parents(); the child bits can then be cleared
> in one go by the next call of mmu_unsync_walk().

Ah, I missed (over and over) that the "next call" is the one made right away
when mmu_sync_children() loops on mmu_unsync_walk(), not a future call.

Because I kept losing track of which pagetable was which, how about this for
a changelog?

  When syncing a shadow page with unsync children, do not update the
  "unsync children" metadata from the bottom up, and instead defer the
  update to the next "iteration" of mmu_unsync_walk() (all users of
  mmu_unsync_walk() loop until it returns "no unsync children").

  mmu_unsync_walk() is designed to handle the scenario where a shadow page
  has a false positive on having unsync children, i.e. unsync_children can
  be elevated without any child shadow pages actually being unsync.

  Such a scenario already occurs when a child is synced or zapped by a
  different walk of the page tables, i.e. with a different set of parents,
  as unmarking parents is done only for the current walk.

  Note, mmu_pages_clear_parents() doesn't update parents of @parent, so
  there's no change in functionality from that perspective.

  Removing mmu_pages_clear_parents() allows for further simplifying
  mmu_unsync_walk(), including removing the struct mmu_page_path since
  mmu_pages_clear_parents() was its only user.

With a cleaned up shortlog+changelog, and assuming I didn't misread everything...

Reviewed-by: Sean Christopherson <seanjc@google.com>

> 
> Removing mmu_pages_clear_parents() allows for further simplifying
> mmu_unsync_walk(), including removing struct mmu_page_path, since
> the function is its only user.
> 
> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> ---


* Re: [PATCH 05/12] KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk()
  2022-06-05  6:43 ` [PATCH 05/12] KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk() Lai Jiangshan
@ 2022-07-19 19:52   ` Sean Christopherson
  2022-07-21  9:32     ` Lai Jiangshan
  0 siblings, 1 reply; 27+ messages in thread
From: Sean Christopherson @ 2022-07-19 19:52 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> mmu_unsync_walk() and __mmu_unsync_walk() require the caller to clear
> unsync for the shadow pages in the resulting pvec by syncing them or
> zapping them.
> 
> All callers do so.
> 
> Otherwise mmu_unsync_walk() and __mmu_unsync_walk() can't work because
> they always walk from the beginning.
> 
> It is possible to make mmu_unsync_walk() and __mmu_unsync_walk() list
> unsync shadow pages in the resulting pvec without requiring them to be
> synced or zapped later.  It would require changing mmu_unsync_walk()
> and __mmu_unsync_walk() to walk from the last visited position, derived
> from the resulting pvec of the previous call of mmu_unsync_walk().
> 
> That would complicate the walk, and no caller requires the possible new
> behavior.
> 
> It is better to keep the original behavior.
> 
> Since the shadow pages in the resulting pvec will be synced or zapped,
> clear_unsync_child_bit() for the parents will be called anyway later.
> 
> Call clear_unsync_child_bit() earlier and directly in __mmu_unsync_walk()
> to make the code more efficient: the memory of the shadow pages is hot
> in the CPU cache, and there is no need to visit the shadow pages again
> later.

The changelog and shortlog do a poor job of capturing what this patch actually
does.  This is a prime example of why I prefer that changelogs first document
what the patch is doing, and only then dive into background details and alternatives.

This changelog has 6-7 paragraphs talking about current KVM behaviors and
alternatives before talking about the patch itself, and then doesn't actually
describe the net effect of the change.

The use of "directly" in the shortlog is also confusing because __mmu_unsync_walk()
already invokes clear_unsync_child_bit(), e.g. this patch only affects
__mmu_unsync_walk().  IIUC, the change is that __mmu_unsync_walk() will clear
the unsync info when adding to @pvec instead of having to redo the walk after
zapping/synching the page.

  KVM: x86/mmu: Clear unsync child _before_ final zap/sync

  Clear the unsync child information for a shadow page when adding it to
  the array of to-be-zapped/synced shadow pages, i.e. _before_ the actual
  zap/sync that effects the "no longer unsync" state change.  Callers of
  mmu_unsync_walk() and __mmu_unsync_walk() are required to zap/sync all
  entries in @pvec before dropping mmu_lock, i.e. once a shadow page is
  added to the set of pages to zap/sync, success is guaranteed.

  Clearing the unsync info when adding to the array yields more efficient
  code as KVM will no longer need to rewalk the shadow pages to "discover"
  that the child page is no longer unsync, and as a bonus, the metadata
  for the shadow page will be hot in the CPU cache.

  Note, this obviously doesn't work if success isn't guaranteed, but
  mmu_unsync_walk() and __mmu_unsync_walk() would require significant
  changes to allow restarting a walk after failure to zap/sync.  I.e.
  this is but one of many details that would need to change.

> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 22 +++++++++++++---------
>  1 file changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index f35fd5c59c38..2446ede0b7b9 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1794,19 +1794,23 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
>  				return -ENOSPC;
>  
>  			ret = __mmu_unsync_walk(child, pvec);
> -			if (!ret) {
> -				clear_unsync_child_bit(sp, i);
> -				continue;
> -			} else if (ret > 0) {
> -				nr_unsync_leaf += ret;
> -			} else
> +			if (ret < 0)
>  				return ret;
> -		} else if (child->unsync) {
> +			nr_unsync_leaf += ret;
> +		}
> +
> +		/*
> +		 * Clear unsync bit for @child directly if @child is fully
> +		 * walked and all the unsync shadow pages descended from
> +		 * @child (including itself) are added into @pvec, the caller
> +		 * must sync or zap all the unsync shadow pages in @pvec.
> +		 */
> +		clear_unsync_child_bit(sp, i);
> +		if (child->unsync) {
>  			nr_unsync_leaf++;
>  			if (mmu_pages_add(pvec, child, i))

This ordering is wrong, no?  If the child itself is unsync and can't be added to
@pvec, i.e. fails here, then clearing its bit in unsync_child_bitmap is wrong.

I also dislike that this patch obfuscates that a shadow page can't be unsync
itself _and_ have unsync children (because only PG_LEVEL_4K can be unsync).  In
other words, keep the

	if (child->unsync_children) {

	} else if (child->unsync) {

	}

And at that point, we can streamline this further:

	int i, ret, nr_unsync_leaf = 0;

	for_each_set_bit(i, sp->unsync_child_bitmap, 512) {
		struct kvm_mmu_page *child;
		u64 ent = sp->spt[i];

		if (is_shadow_present_pte(ent) && !is_large_pte(ent)) {
			child = to_shadow_page(ent & PT64_BASE_ADDR_MASK);
			if (child->unsync_children) {
				ret = __mmu_unsync_walk_and_clear(child, pvec);
				if (ret < 0)
					return ret;
				nr_unsync_leaf += ret;
			} else if (child->unsync) {
				if (mmu_pages_add(pvec, child))
					return -ENOSPC;
				nr_unsync_leaf++;
			}
		}

		/*
		 * Clear the unsync info, the child is either already sync
		 * (bitmap is stale) or is guaranteed to be zapped/synced by
		 * the caller before mmu_lock is released.  Note, the caller is
		 * required to zap/sync all entries in @pvec even if an error
		 * is returned!
		 */
		clear_unsync_child_bit(sp, i);
	}

	return nr_unsync_leaf;


* Re: [PATCH 06/12] KVM: X86/MMU: Rename mmu_unsync_walk() to mmu_unsync_walk_and_clear()
  2022-06-05  6:43 ` [PATCH 06/12] KVM: X86/MMU: Rename mmu_unsync_walk() to mmu_unsync_walk_and_clear() Lai Jiangshan
@ 2022-07-19 20:07   ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-19 20:07 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> mmu_unsync_walk() and __mmu_unsync_walk() require the caller to clear
> unsync for the shadow pages in the resulting pvec by syncing them or
> zapping them.
> 
> All callers do so.
> 
> Otherwise mmu_unsync_walk() and __mmu_unsync_walk() can't work because
> they always walk from the beginning.
> 
> Now that mmu_unsync_walk() and __mmu_unsync_walk() clear the unsync
> bits directly themselves, rename them accordingly.

What about mmu_gather_unsync_shadow_pages()?  I agree that "walk" isn't a great
name, but IMO that's true regardless of when it updates the unsync bitmap.  And
similar to a previous complaint about "clear" being ambiguous, I don't think it's
realistic that we'll be able to come up with a name that precisely and unambiguously
describes what exactly is being cleared.

Instead, regardless of what name we settle on, add a function comment.  Probably
in the patch that changes the clear_unsync_child_bit behavior.  That's a better
place to document the implementation detail.
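
One possible shape for that comment (wording is just a sketch):

	/*
	 * Gather the unsync shadow pages reachable from @sp into @pvec,
	 * clearing unsync_child_bitmap bits along the way.  The caller must
	 * sync or zap every page in @pvec before dropping mmu_lock, even on
	 * -ENOSPC, as the next walk assumes previously returned pages are
	 * no longer unsync.
	 */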

> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 2446ede0b7b9..a56d328365e4 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1773,7 +1773,7 @@ static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
>  	__clear_bit(idx, sp->unsync_child_bitmap);
>  }
>  
> -static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
> +static int __mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
>  			   struct kvm_mmu_pages *pvec)
>  {
>  	int i, ret, nr_unsync_leaf = 0;
> @@ -1793,7 +1793,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
>  			if (mmu_pages_add(pvec, child, i))
>  				return -ENOSPC;
>  
> -			ret = __mmu_unsync_walk(child, pvec);
> +			ret = __mmu_unsync_walk_and_clear(child, pvec);
>  			if (ret < 0)
>  				return ret;
>  			nr_unsync_leaf += ret;
> @@ -1818,7 +1818,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
>  
>  #define INVALID_INDEX (-1)
>  
> -static int mmu_unsync_walk(struct kvm_mmu_page *sp,
> +static int mmu_unsync_walk_and_clear(struct kvm_mmu_page *sp,
>  			   struct kvm_mmu_pages *pvec)

Please align indentation.

>  {
>  	pvec->nr = 0;
> @@ -1826,7 +1826,7 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp,
>  		return 0;
>  
>  	mmu_pages_add(pvec, sp, INVALID_INDEX);
> -	return __mmu_unsync_walk(sp, pvec);
> +	return __mmu_unsync_walk_and_clear(sp, pvec);
>  }
>  
>  static void kvm_mmu_page_clear_unsync(struct kvm *kvm, struct kvm_mmu_page *sp)
> @@ -1962,7 +1962,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
>  	LIST_HEAD(invalid_list);
>  	bool flush = false;
>  
> -	while (mmu_unsync_walk(parent, &pages)) {
> +	while (mmu_unsync_walk_and_clear(parent, &pages)) {
>  		bool protected = false;
>  
>  		for_each_sp(pages, sp, parents, i)
> @@ -2279,7 +2279,7 @@ static int mmu_zap_unsync_children(struct kvm *kvm,
>  	if (parent->role.level == PG_LEVEL_4K)
>  		return 0;
>  
> -	while (mmu_unsync_walk(parent, &pages)) {
> +	while (mmu_unsync_walk_and_clear(parent, &pages)) {
>  		struct kvm_mmu_page *sp;
>  
>  		for_each_sp(pages, sp, parents, i) {
> -- 
> 2.19.1.6.gb485710b
> 


* Re: [PATCH 07/12] KVM: X86/MMU: Remove the useless struct mmu_page_path
  2022-06-05  6:43 ` [PATCH 07/12] KVM: X86/MMU: Remove the useless struct mmu_page_path Lai Jiangshan
@ 2022-07-19 20:15   ` Sean Christopherson
  2022-07-21  9:43     ` Lai Jiangshan
  0 siblings, 1 reply; 27+ messages in thread
From: Sean Christopherson @ 2022-07-19 20:15 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

Nit, s/useless/now-unused, or "no longer used".  I read "useless" in shortlogs
as "this <xyz> is pointless and always has been pointless", whereas "now-unused"
is likely to be interpreted as "remove <xyz> as it's no longer used after recent
changes".

Alternatively, can this patch be squashed with the patch that removes
mmu_pages_clear_parents()?  Yeah, it'll be a (much?) larger patch, but leaving
dead code behind is arguably worse.

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> struct mmu_page_path is set and updated but no longer used now that
> mmu_pages_clear_parents() has been removed.
> 
> Remove it.
> 
> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> ---


* Re: [PATCH 08/12] KVM: X86/MMU: Remove the useless idx from struct kvm_mmu_pages
  2022-06-05  6:43 ` [PATCH 08/12] KVM: X86/MMU: Remove the useless idx from struct kvm_mmu_pages Lai Jiangshan
@ 2022-07-19 20:31   ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-19 20:31 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

It's arguably not useless, e.g. it's still used for a sanity check.  Not sure
how to word that though.  Maybe?

  KVM: x86/mmu: Drop no-longer-necessary mmu_page_and_offset.idx

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> The value is only set but never really used.

Please elaborate on why it's no longer truly used.  Something like:

  Drop mmu_page_and_offset.idx, it's no longer strictly necessary now that
  KVM doesn't recurse up the walk to clear unsync information in parents.
  The field is still used for a sanity check, but that sanity check will
  soon be made obsolete by further simplifying the gathering of unsync
  shadow pages.
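
For reference, the sanity check in question (if I'm remembering the code
correctly) is the WARN_ON at the top of mmu_pages_first():

	/* the root entry is expected to carry INVALID_INDEX: */
	WARN_ON(pvec->page[0].idx != INVALID_INDEX);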

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 10/12] KVM: X86/MMU: Don't add parents to struct kvm_mmu_pages
  2022-06-05  6:43 ` [PATCH 10/12] KVM: X86/MMU: Don't add parents to " Lai Jiangshan
@ 2022-07-19 20:34   ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-19 20:34 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> Parents added into the struct kvm_mmu_pages are never used.

s/never/no longer, and if possible, expand on why they are no longer used.  Most
of that can be gleaned from prior patches, but capturing the high-level historical
details isn't that onerous.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 11/12] KVM: X86/MMU: Remove mmu_pages_first() and mmu_pages_next()
  2022-06-05  6:43 ` [PATCH 11/12] KVM: X86/MMU: Remove mmu_pages_first() and mmu_pages_next() Lai Jiangshan
@ 2022-07-19 20:40   ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-19 20:40 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> Use i = 0 andd i++ instead.

s/andd/and, but even better would be to write a full changelog.

  Drop mmu_pages_{next,first}() and open code their now trivial
  implementations in for_each_sp(), the sole caller.
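
I.e. something like this (a sketch only, assuming the state of the code
after the earlier patches in this series):

#define for_each_sp(pvec, sp, i)					\
	for (i = 0; i < (pvec).nr && ({ sp = (pvec).page[i].sp; 1; }); i++)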

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 12/12] KVM: X86/MMU: Rename struct kvm_mmu_pages to struct kvm_mmu_page_vec
  2022-06-05  6:43 ` [PATCH 12/12] KVM: X86/MMU: Rename struct kvm_mmu_pages to struct kvm_mmu_page_vec Lai Jiangshan
@ 2022-07-19 20:45   ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-19 20:45 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, kvm, Paolo Bonzini, Vitaly Kuznetsov,
	Maxim Levitsky, Lai Jiangshan

On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> It is implemented as a vector and variable names for it are pvec.

Please define "it" in the changelog before referencing "it".  Avoiding dependencies
on the shortlog is trivial and really does help as it avoids having to jump back
to see what "it" refers to.

> Rename it to kvm_mmu_page_vec for better describing it.
> 
> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> ---

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 05/12] KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk()
  2022-07-19 19:52   ` Sean Christopherson
@ 2022-07-21  9:32     ` Lai Jiangshan
  2022-07-21 16:26       ` Sean Christopherson
  0 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-07-21  9:32 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: LKML, open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips),
	Paolo Bonzini, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

On Wed, Jul 20, 2022 at 3:52 AM Sean Christopherson <seanjc@google.com> wrote:

> > ---
> >  arch/x86/kvm/mmu/mmu.c | 22 +++++++++++++---------
> >  1 file changed, 13 insertions(+), 9 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index f35fd5c59c38..2446ede0b7b9 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -1794,19 +1794,23 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
> >                               return -ENOSPC;
> >
> >                       ret = __mmu_unsync_walk(child, pvec);
> > -                     if (!ret) {
> > -                             clear_unsync_child_bit(sp, i);
> > -                             continue;
> > -                     } else if (ret > 0) {
> > -                             nr_unsync_leaf += ret;
> > -                     } else
> > +                     if (ret < 0)
> >                               return ret;
> > -             } else if (child->unsync) {
> > +                     nr_unsync_leaf += ret;
> > +             }
> > +
> > +             /*
> > +              * Clear unsync bit for @child directly if @child is fully
> > +              * walked and all the unsync shadow pages descended from
> > +              * @child (including itself) are added into @pvec, the caller
> > +              * must sync or zap all the unsync shadow pages in @pvec.
> > +              */
> > +             clear_unsync_child_bit(sp, i);
> > +             if (child->unsync) {
> >                       nr_unsync_leaf++;
> >                       if (mmu_pages_add(pvec, child, i))
>
> This ordering is wrong, no?  If the child itself is unsync and can't be added to
> @pvec, i.e. fails here, then clearing its bit in unsync_child_bitmap is wrong.

mmu_pages_add() always succeeds in adding the page to @pvec; it is the
caller's job to guarantee there is enough room for it to do so.

When it returns true, it means the vector has just become full, i.e. a
subsequent add would fail.
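
In other words, the intended caller-side pattern is (sketch):

	/*
	 * The child is added unconditionally; a "true" return only says
	 * the vector is now full, so stop before adding more.
	 */
	if (mmu_pages_add(pvec, child, i))
		return -ENOSPC;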

>
> I also dislike that that this patch obfuscates that a shadow page can't be unsync
> itself _and_ have unsync children (because only PG_LEVEL_4K can be unsync).  In
> other words, keep the
>
>         if (child->unsync_children) {
>
>         } else if (child->unsync) {
>
>         }
>

The code no longer keeps that structure only because I need to add
comments on clear_unsync_child_bit(): the duplicated
clear_unsync_child_bit() calls would require duplicated comments.  I
will use "See above" instead.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] KVM: X86/MMU: Remove the useless struct mmu_page_path
  2022-07-19 20:15   ` Sean Christopherson
@ 2022-07-21  9:43     ` Lai Jiangshan
  2022-07-21 15:25       ` Sean Christopherson
  0 siblings, 1 reply; 27+ messages in thread
From: Lai Jiangshan @ 2022-07-21  9:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: LKML, open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips),
	Paolo Bonzini, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

On Wed, Jul 20, 2022 at 4:15 AM Sean Christopherson <seanjc@google.com> wrote:
>
> Nit, s/useless/now-unused, or "no longer used".  I associate "useless" in shortlogs
> with "this <xyz> is pointless and always has been pointless", whereas "now-unused"
> is likely to be interpreted as "remove <xyz> as it's no longer used after recent
> changes".
>
> Alternatively, can this patch be squashed with the patch that removes
> mmu_pages_clear_parents()?  Yeah, it'll be a (much?) larger patch, but leaving
> dead code behind is arguably worse.

As far as the C language and the machine are concerned, struct
mmu_page_path is used: for_each_sp() sets and updates its data on every
iteration.

It is not really dead code.

In terms of the semantics we want, gathering unsync pages, we don't
need struct mmu_page_path, since nothing ever reads the struct back.
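
For reference, this is the write-only data in question (trimmed from the
code before this patch; see mmu.c for the full definitions):

struct mmu_page_path {
	struct kvm_mmu_page *parent[PT64_ROOT_MAX_LEVEL];
	unsigned int idx[PT64_ROOT_MAX_LEVEL];
};

#define for_each_sp(pvec, sp, parents, i)			\
	for (i = mmu_pages_first(&pvec, &parents);		\
	     i < pvec.nr && ({ sp = pvec.page[i].sp; 1; });	\
	     i = mmu_pages_next(&pvec, &parents, i))

mmu_pages_first() and mmu_pages_next() store into @parents on every
iteration, but nothing reads those entries back once
mmu_pages_clear_parents() is gone.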

>
> On Sun, Jun 05, 2022, Lai Jiangshan wrote:
> > From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> >
> > struct mmu_page_path is set and updated but never used since
> > mmu_pages_clear_parents() is removed.
> >
> > Remove it.
> >
> > Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> > ---

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] KVM: X86/MMU: Remove the useless struct mmu_page_path
  2022-07-21  9:43     ` Lai Jiangshan
@ 2022-07-21 15:25       ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-21 15:25 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: LKML, open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips),
	Paolo Bonzini, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

On Thu, Jul 21, 2022, Lai Jiangshan wrote:
> On Wed, Jul 20, 2022 at 4:15 AM Sean Christopherson <seanjc@google.com> wrote:
> >
> > Nit, s/useless/now-unused, or "no longer used".  I associate "useless" in shortlogs
> > with "this <xyz> is pointless and always has been pointless", whereas "now-unused"
> > is likely to be interpreted as "remove <xyz> as it's no longer used after recent
> > changes".
> >
> > Alternatively, can this patch be squashed with the patch that removes
> > mmu_pages_clear_parents()?  Yeah, it'll be a (much?) larger patch, but leaving
> > dead code behind is arguably worse.
> 
> As far as the C language and the machine are concerned, struct
> mmu_page_path is used: for_each_sp() sets and updates its data on every
> iteration.
> 
> It is not really dead code.

I'm not talking about just "struct mmu_page_path", but also the pointless updates
in for_each_sp().  And I think even if we're being super pedantic, it _is_ dead
code because C99 allows the compiler to drop code that the compiler can prove has
no side effects.  I learned this the hard way by discovering that an asm() blob
with an output constraint will be elided if the output isn't consumed and the asm()
blob isn't tagged volatile.

  In the abstract machine, all expressions are evaluated as specified by the
  semantics. An actual implementation need not evaluate part of an expression if
  it can deduce that its value is not used and that no needed side effects are
  produced (including any caused by calling a function or accessing a volatile object)
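
A minimal standalone illustration of the asm() pitfall (hypothetical
snippet, not from KVM; assumes x86 for the sake of the example):

static inline unsigned long long read_tsc_nonvolatile(void)
{
	unsigned int lo, hi;

	/*
	 * No "volatile" and no inputs: the compiler treats this asm as
	 * a pure expression, so it may CSE it or elide it entirely if
	 * the result is provably unused, output constraints and all.
	 */
	asm("rdtsc" : "=a" (lo), "=d" (hi));
	return ((unsigned long long)hi << 32) | lo;
}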

I don't see any advantage to separating this from mmu_pages_clear_parents().  It
doesn't make the code any easier to review.  I'd argue it does the opposite because
it makes it harder to see that mmu_pages_clear_parents() was the only user, i.e.
squashing this would provide further justification for dropping mmu_pages_clear_parents().

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 05/12] KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk()
  2022-07-21  9:32     ` Lai Jiangshan
@ 2022-07-21 16:26       ` Sean Christopherson
  0 siblings, 0 replies; 27+ messages in thread
From: Sean Christopherson @ 2022-07-21 16:26 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: LKML, open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips),
	Paolo Bonzini, Vitaly Kuznetsov, Maxim Levitsky, Lai Jiangshan

[-- Attachment #1: Type: text/plain, Size: 3336 bytes --]

On Thu, Jul 21, 2022, Lai Jiangshan wrote:
> On Wed, Jul 20, 2022 at 3:52 AM Sean Christopherson <seanjc@google.com> wrote:
> 
> > > ---
> > >  arch/x86/kvm/mmu/mmu.c | 22 +++++++++++++---------
> > >  1 file changed, 13 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > index f35fd5c59c38..2446ede0b7b9 100644
> > > --- a/arch/x86/kvm/mmu/mmu.c
> > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > @@ -1794,19 +1794,23 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
> > >                               return -ENOSPC;
> > >
> > >                       ret = __mmu_unsync_walk(child, pvec);
> > > -                     if (!ret) {
> > > -                             clear_unsync_child_bit(sp, i);
> > > -                             continue;
> > > -                     } else if (ret > 0) {
> > > -                             nr_unsync_leaf += ret;
> > > -                     } else
> > > +                     if (ret < 0)
> > >                               return ret;
> > > -             } else if (child->unsync) {
> > > +                     nr_unsync_leaf += ret;
> > > +             }
> > > +
> > > +             /*
> > > +              * Clear unsync bit for @child directly if @child is fully
> > > +              * walked and all the unsync shadow pages descended from
> > > +              * @child (including itself) are added into @pvec, the caller
> > > +              * must sync or zap all the unsync shadow pages in @pvec.
> > > +              */
> > > +             clear_unsync_child_bit(sp, i);
> > > +             if (child->unsync) {
> > >                       nr_unsync_leaf++;
> > >                       if (mmu_pages_add(pvec, child, i))
> >
> > This ordering is wrong, no?  If the child itself is unsync and can't be added to
> > @pvec, i.e. fails here, then clearing its bit in unsync_child_bitmap is wrong.
> 
> mmu_pages_add() always succeeds in adding the page to @pvec; it is the
> caller's job to guarantee there is enough room for it to do so.
> 
> When it returns true, it means the vector has just become full, i.e. a
> subsequent add would fail.

Oof, that's downright evil.  As prep work, can you fold in the attached patches
earlier in this series?  Then this patch can yield:

	for_each_set_bit(i, sp->unsync_child_bitmap, 512) {
		struct kvm_mmu_page *child;
		u64 ent = sp->spt[i];

		if (!is_shadow_present_pte(ent) || is_large_pte(ent))
			goto clear_unsync_child;

		child = to_shadow_page(ent & SPTE_BASE_ADDR_MASK);
		if (!child->unsync && !child->unsync_children)
			goto clear_unsync_child;

		if (mmu_is_page_vec_full(pvec))
			return -ENOSPC;

		mmu_pages_add(pvec, child, i);

		if (child->unsync_children) {
			ret = __mmu_unsync_walk(child, pvec);
			if (!ret)
				goto clear_unsync_child;
			else if (ret > 0)
				nr_unsync_leaf += ret;
			else
				return ret;
		} else {
			nr_unsync_leaf++;
		}

clear_unsync_child:
                /*
                 * Clear the unsync info, the child is either already sync
                 * (bitmap is stale) or is guaranteed to be zapped/synced by
                 * the caller before mmu_lock is released.  Note, the caller is
                 * required to zap/sync all entries in @pvec even if an error
                 * is returned!
                 */
                clear_unsync_child_bit(sp, i);
        }

[-- Attachment #2: 0001-KVM-x86-mmu-Separate-page-vec-is-full-from-adding-a-.patch --]
[-- Type: text/x-diff, Size: 2699 bytes --]

From f2968d1afb08708c8292808b88aa915ec714e154 Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 21 Jul 2022 08:38:35 -0700
Subject: [PATCH 1/2] KVM: x86/mmu: Separate "page vec is full" from adding a
 page to the array

Move the check for a full "page vector" out of mmu_pages_add(), returning
true/false (effectively) looks a _lot_ like returning success/fail, which
is very misleading and will even be more misleading when a future patch
clears the unsync child bit upon a page being added to the vector (as
opposed to clearing the bit when the vector is processed by the caller).

Checking that the vector is full when adding a previous page is also
sub-optimal, e.g. KVM unnecessarily returns an error if the vector is
full but there are no more unsync pages to process.  Separating the check
from the "add" will allow fixing this quirk in a future patch.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 52664c3caaab..ac60a52044ef 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1741,20 +1741,26 @@ struct kvm_mmu_pages {
 	unsigned int nr;
 };
 
-static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp,
+static bool mmu_is_page_vec_full(struct kvm_mmu_pages *pvec)
+{
+	return (pvec->nr == KVM_PAGE_ARRAY_NR);
+}
+
+static void mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp,
 			 int idx)
 {
 	int i;
 
-	if (sp->unsync)
-		for (i=0; i < pvec->nr; i++)
+	if (sp->unsync) {
+		for (i = 0; i < pvec->nr; i++) {
 			if (pvec->page[i].sp == sp)
-				return 0;
+				return;
+		}
+	}
 
 	pvec->page[pvec->nr].sp = sp;
 	pvec->page[pvec->nr].idx = idx;
 	pvec->nr++;
-	return (pvec->nr == KVM_PAGE_ARRAY_NR);
 }
 
 static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx)
@@ -1781,7 +1787,9 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 		child = to_shadow_page(ent & SPTE_BASE_ADDR_MASK);
 
 		if (child->unsync_children) {
-			if (mmu_pages_add(pvec, child, i))
+			mmu_pages_add(pvec, child, i);
+
+			if (mmu_is_page_vec_full(pvec))
 				return -ENOSPC;
 
 			ret = __mmu_unsync_walk(child, pvec);
@@ -1794,7 +1802,9 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 				return ret;
 		} else if (child->unsync) {
 			nr_unsync_leaf++;
-			if (mmu_pages_add(pvec, child, i))
+			mmu_pages_add(pvec, child, i);
+
+			if (mmu_is_page_vec_full(pvec))
 				return -ENOSPC;
 		} else
 			clear_unsync_child_bit(sp, i);
-- 
2.37.1.359.gd136c6c3e2-goog


[-- Attachment #3: 0002-KVM-x86-mmu-Check-for-full-page-vector-_before_-addi.patch --]
[-- Type: text/x-diff, Size: 1826 bytes --]

From c8b0d983791ef783165bbf2230ebc41145bf052e Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@google.com>
Date: Thu, 21 Jul 2022 08:49:37 -0700
Subject: [PATCH 2/2] KVM: x86/mmu: Check for full page vector _before_ adding
 a new page

Check for a full page vector before adding to the vector instead of after
adding to the vector array, i.e. bail if and only if the vector is full
_and_ a new page needs to be added.  Previously, KVM would still bail if
the vector was full but there were no more unsync pages to process.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ac60a52044ef..aca9a8e6c626 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1785,13 +1785,17 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 		}
 
 		child = to_shadow_page(ent & SPTE_BASE_ADDR_MASK);
+		if (!child->unsync && !child->unsync_children) {
+			clear_unsync_child_bit(sp, i);
+			continue;
+		}
+
+		if (mmu_is_page_vec_full(pvec))
+			return -ENOSPC;
+
+		mmu_pages_add(pvec, child, i);
 
 		if (child->unsync_children) {
-			mmu_pages_add(pvec, child, i);
-
-			if (mmu_is_page_vec_full(pvec))
-				return -ENOSPC;
-
 			ret = __mmu_unsync_walk(child, pvec);
 			if (!ret) {
 				clear_unsync_child_bit(sp, i);
@@ -1800,14 +1804,9 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 				nr_unsync_leaf += ret;
 			} else
 				return ret;
-		} else if (child->unsync) {
+		} else {
 			nr_unsync_leaf++;
-			mmu_pages_add(pvec, child, i);
-
-			if (mmu_is_page_vec_full(pvec))
-				return -ENOSPC;
-		} else
-			clear_unsync_child_bit(sp, i);
+		}
 	}
 
 	return nr_unsync_leaf;
-- 
2.37.1.359.gd136c6c3e2-goog


^ permalink raw reply related	[flat|nested] 27+ messages in thread

Thread overview: 27+ messages
2022-06-05  6:43 [PATCH 00/12] KVM: X86/MMU: Simpliy mmu_unsync_walk() Lai Jiangshan
2022-06-05  6:43 ` [PATCH 01/12] KVM: X86/MMU: Warn if sp->unsync_children > 0 in link_shadow_page() Lai Jiangshan
2022-06-05  6:43 ` [PATCH 02/12] KVM: X86/MMU: Rename kvm_unlink_unsync_page() to kvm_mmu_page_clear_unsync() Lai Jiangshan
2022-07-14 22:10   ` Sean Christopherson
2022-06-05  6:43 ` [PATCH 03/12] KVM: X86/MMU: Split a part of kvm_unsync_page() as kvm_mmu_page_mark_unsync() Lai Jiangshan
2022-07-14 22:19   ` Sean Christopherson
2022-06-05  6:43 ` [PATCH 04/12] KVM: X86/MMU: Remove mmu_pages_clear_parents() Lai Jiangshan
2022-07-14 23:15   ` Sean Christopherson
2022-06-05  6:43 ` [PATCH 05/12] KVM: X86/MMU: Clear unsync bit directly in __mmu_unsync_walk() Lai Jiangshan
2022-07-19 19:52   ` Sean Christopherson
2022-07-21  9:32     ` Lai Jiangshan
2022-07-21 16:26       ` Sean Christopherson
2022-06-05  6:43 ` [PATCH 06/12] KVM: X86/MMU: Rename mmu_unsync_walk() to mmu_unsync_walk_and_clear() Lai Jiangshan
2022-07-19 20:07   ` Sean Christopherson
2022-06-05  6:43 ` [PATCH 07/12] KVM: X86/MMU: Remove the useless struct mmu_page_path Lai Jiangshan
2022-07-19 20:15   ` Sean Christopherson
2022-07-21  9:43     ` Lai Jiangshan
2022-07-21 15:25       ` Sean Christopherson
2022-06-05  6:43 ` [PATCH 08/12] KVM: X86/MMU: Remove the useless idx from struct kvm_mmu_pages Lai Jiangshan
2022-07-19 20:31   ` Sean Christopherson
2022-06-05  6:43 ` [PATCH 09/12] KVM: X86/MMU: Unfold struct mmu_page_and_offset in " Lai Jiangshan
2022-06-05  6:43 ` [PATCH 10/12] KVM: X86/MMU: Don't add parents to " Lai Jiangshan
2022-07-19 20:34   ` Sean Christopherson
2022-06-05  6:43 ` [PATCH 11/12] KVM: X86/MMU: Remove mmu_pages_first() and mmu_pages_next() Lai Jiangshan
2022-07-19 20:40   ` Sean Christopherson
2022-06-05  6:43 ` [PATCH 12/12] KVM: X86/MMU: Rename struct kvm_mmu_pages to struct kvm_mmu_page_vec Lai Jiangshan
2022-07-19 20:45   ` Sean Christopherson
