* [PATCH v3 0/2] KVM: x86/mmu: Zap orphaned kids for nested TDP MMU
@ 2020-09-23 22:14 Sean Christopherson
  2020-09-23 22:14 ` [PATCH v3 1/2] KVM: x86/mmu: Move flush logic from mmu_page_zap_pte() to FNAME(invlpg) Sean Christopherson
  2020-09-23 22:14 ` [PATCH v3 2/2] KVM: x86/MMU: Recursively zap nested TDP SPs when zapping last/only parent Sean Christopherson
  0 siblings, 2 replies; 6+ messages in thread
From: Sean Christopherson @ 2020-09-23 22:14 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Peter Shier, Ben Gardon

Refreshed version of Ben's patch to zap orphaned MMU shadow pages so that
they don't turn into zombies.
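
For anyone skimming, here is a minimal, self-contained sketch of the idea
behind patch 2 (this is not KVM code; the struct and function names below are
made up for illustration): once the last parent PTE of a nested TDP shadow
page is dropped, zap the page immediately instead of leaving it unlinked.

  #include <stdbool.h>
  #include <stdio.h>

  /* Simplified stand-ins for the relevant bits of struct kvm_mmu_page. */
  struct sketch_mmu_page {
  	bool guest_mode;           /* stand-in for sp->role.guest_mode */
  	unsigned long parent_ptes; /* stand-in for sp->parent_ptes.val, zero when no parents remain */
  };

  /*
   * A nested TDP shadow page with no remaining parents is unlikely to be
   * used again soon, so it is worth zapping right away rather than letting
   * it linger until the shadow-page quota forces reclaim.
   */
  static bool sketch_should_zap_orphan(bool tdp_enabled,
  				     const struct sketch_mmu_page *child)
  {
  	return tdp_enabled && child->guest_mode && !child->parent_ptes;
  }

  int main(void)
  {
  	struct sketch_mmu_page orphan = { .guest_mode = true, .parent_ptes = 0 };

  	printf("zap orphan: %d\n", sketch_should_zap_orphan(true, &orphan));
  	return 0;
  }

In the real code (see patch 2) the check lives in mmu_page_zap_pte(), also
requires that the caller passed an invalid_list to accumulate zapped pages,
and the actual zap goes through kvm_mmu_prepare_zap_page().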

v3:
  - Rebased to kvm/queue, commit e1ba1a15af73 ("KVM: SVM: Enable INVPCID
    feature on AMD").

Ben Gardon (1):
  KVM: x86/MMU: Recursively zap nested TDP SPs when zapping last/only
    parent

Sean Christopherson (1):
  KVM: x86/mmu: Move flush logic from mmu_page_zap_pte() to
    FNAME(invlpg)

 arch/x86/kvm/mmu/mmu.c         | 38 ++++++++++++++++++++++------------
 arch/x86/kvm/mmu/paging_tmpl.h |  7 +++++--
 2 files changed, 30 insertions(+), 15 deletions(-)

-- 
2.28.0



* [PATCH v3 1/2] KVM: x86/mmu: Move flush logic from mmu_page_zap_pte() to FNAME(invlpg)
  2020-09-23 22:14 [PATCH v3 0/2] KVM: x86/mmu: Zap orphaned kids for nested TDP MMU Sean Christopherson
@ 2020-09-23 22:14 ` Sean Christopherson
  2020-09-23 23:19   ` Ben Gardon
  2020-09-23 22:14 ` [PATCH v3 2/2] KVM: x86/MMU: Recursively zap nested TDP SPs when zapping last/only parent Sean Christopherson
  1 sibling, 1 reply; 6+ messages in thread
From: Sean Christopherson @ 2020-09-23 22:14 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Peter Shier, Ben Gardon

Move the logic that controls whether or not FNAME(invlpg) needs to flush
fully into FNAME(invlpg) so that mmu_page_zap_pte() doesn't return a
value.  This allows a future patch to redefine the return semantics for
mmu_page_zap_pte() so that it can recursively zap orphaned child shadow
pages for nested TDP MMUs.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c         | 10 +++-------
 arch/x86/kvm/mmu/paging_tmpl.h |  7 +++++--
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 76c5826e29a2..a91e8601594d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2615,7 +2615,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	}
 }
 
-static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
+static void mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 			     u64 *spte)
 {
 	u64 pte;
@@ -2631,13 +2631,9 @@ static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 			child = to_shadow_page(pte & PT64_BASE_ADDR_MASK);
 			drop_parent_pte(child, spte);
 		}
-		return true;
-	}
-
-	if (is_mmio_spte(pte))
+	} else if (is_mmio_spte(pte)) {
 		mmu_spte_clear_no_track(spte);
-
-	return false;
+	}
 }
 
 static void kvm_mmu_page_unlink_children(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 4dd6b1e5b8cf..3bb624a3dda9 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -895,6 +895,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 {
 	struct kvm_shadow_walk_iterator iterator;
 	struct kvm_mmu_page *sp;
+	u64 old_spte;
 	int level;
 	u64 *sptep;
 
@@ -917,7 +918,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 		sptep = iterator.sptep;
 
 		sp = sptep_to_sp(sptep);
-		if (is_last_spte(*sptep, level)) {
+		old_spte = *sptep;
+		if (is_last_spte(old_spte, level)) {
 			pt_element_t gpte;
 			gpa_t pte_gpa;
 
@@ -927,7 +929,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 			pte_gpa = FNAME(get_level1_sp_gpa)(sp);
 			pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
 
-			if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
+			mmu_page_zap_pte(vcpu->kvm, sp, sptep);
+			if (is_shadow_present_pte(old_spte))
 				kvm_flush_remote_tlbs_with_address(vcpu->kvm,
 					sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
 
-- 
2.28.0



* [PATCH v3 2/2] KVM: x86/MMU: Recursively zap nested TDP SPs when zapping last/only parent
  2020-09-23 22:14 [PATCH v3 0/2] KVM: x86/mmu: Zap orphaned kids for nested TDP MMU Sean Christopherson
  2020-09-23 22:14 ` [PATCH v3 1/2] KVM: x86/mmu: Move flush logic from mmu_page_zap_pte() to FNAME(invlpg) Sean Christopherson
@ 2020-09-23 22:14 ` Sean Christopherson
  2020-09-23 23:29   ` Ben Gardon
  1 sibling, 1 reply; 6+ messages in thread
From: Sean Christopherson @ 2020-09-23 22:14 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Peter Shier, Ben Gardon

From: Ben Gardon <bgardon@google.com>

Recursively zap all to-be-orphaned children, unsynced or otherwise, when
zapping a shadow page for a nested TDP MMU.  KVM currently only zaps the
unsynced child pages, but not the synced ones.  This can create problems
over time when running many nested guests because it leaves unlinked
pages which will not be freed until the page quota is hit. With the
default page quota of 20 shadow pages per 1000 guest pages, this looks
like a memory leak and can degrade MMU performance.

In a recent benchmark, substantial performance degradation was observed:
An L1 guest was booted with 64G memory.
2G nested Windows guests were booted, 10 at a time for 20
iterations (200 boots in total).
Windows was used in this benchmark because Windows guests touch all of
their memory on startup.
By the end of the benchmark, the nested guests were taking ~10% longer
to boot. With this patch there is no degradation in boot time.
Without this patch the benchmark ends with hundreds of thousands of
stale EPT02 pages cluttering up rmaps and the page hash map. As a
result, VM shutdown is also much slower: deleting memslot 0 was
observed to take over a minute. With this patch it takes just a
few milliseconds.

Cc: Peter Shier <pshier@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/mmu/mmu.c         | 30 +++++++++++++++++++++++-------
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a91e8601594d..e993d5cd4bc8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2615,8 +2615,9 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	}
 }
 
-static void mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
-			     u64 *spte)
+/* Returns the number of zapped non-leaf child shadow pages. */
+static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
+			    u64 *spte, struct list_head *invalid_list)
 {
 	u64 pte;
 	struct kvm_mmu_page *child;
@@ -2630,19 +2631,34 @@ static void mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 		} else {
 			child = to_shadow_page(pte & PT64_BASE_ADDR_MASK);
 			drop_parent_pte(child, spte);
+
+			/*
+			 * Recursively zap nested TDP SPs, parentless SPs are
+			 * unlikely to be used again in the near future.  This
+			 * avoids retaining a large number of stale nested SPs.
+			 */
+			if (tdp_enabled && invalid_list &&
+			    child->role.guest_mode && !child->parent_ptes.val)
+				return kvm_mmu_prepare_zap_page(kvm, child,
+								invalid_list);
 		}
 	} else if (is_mmio_spte(pte)) {
 		mmu_spte_clear_no_track(spte);
 	}
+	return 0;
 }
 
-static void kvm_mmu_page_unlink_children(struct kvm *kvm,
-					 struct kvm_mmu_page *sp)
+static int kvm_mmu_page_unlink_children(struct kvm *kvm,
+					struct kvm_mmu_page *sp,
+					struct list_head *invalid_list)
 {
+	int zapped = 0;
 	unsigned i;
 
 	for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
-		mmu_page_zap_pte(kvm, sp, sp->spt + i);
+		zapped += mmu_page_zap_pte(kvm, sp, sp->spt + i, invalid_list);
+
+	return zapped;
 }
 
 static void kvm_mmu_unlink_parents(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -2688,7 +2704,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 	trace_kvm_mmu_prepare_zap_page(sp);
 	++kvm->stat.mmu_shadow_zapped;
 	*nr_zapped = mmu_zap_unsync_children(kvm, sp, invalid_list);
-	kvm_mmu_page_unlink_children(kvm, sp);
+	*nr_zapped += kvm_mmu_page_unlink_children(kvm, sp, invalid_list);
 	kvm_mmu_unlink_parents(kvm, sp);
 
 	/* Zapping children means active_mmu_pages has become unstable. */
@@ -5396,7 +5412,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 			u32 base_role = vcpu->arch.mmu->mmu_role.base.word;
 
 			entry = *spte;
-			mmu_page_zap_pte(vcpu->kvm, sp, spte);
+			mmu_page_zap_pte(vcpu->kvm, sp, spte, NULL);
 			if (gentry &&
 			    !((sp->role.word ^ base_role) & ~role_ign.word) &&
 			    rmap_can_add(vcpu))
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 3bb624a3dda9..e1066226b8f0 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -929,7 +929,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 			pte_gpa = FNAME(get_level1_sp_gpa)(sp);
 			pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
 
-			mmu_page_zap_pte(vcpu->kvm, sp, sptep);
+			mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
 			if (is_shadow_present_pte(old_spte))
 				kvm_flush_remote_tlbs_with_address(vcpu->kvm,
 					sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
-- 
2.28.0



* Re: [PATCH v3 1/2] KVM: x86/mmu: Move flush logic from mmu_page_zap_pte() to FNAME(invlpg)
  2020-09-23 22:14 ` [PATCH v3 1/2] KVM: x86/mmu: Move flush logic from mmu_page_zap_pte() to FNAME(invlpg) Sean Christopherson
@ 2020-09-23 23:19   ` Ben Gardon
  0 siblings, 0 replies; 6+ messages in thread
From: Ben Gardon @ 2020-09-23 23:19 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Peter Shier

On Wed, Sep 23, 2020 at 3:14 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Move the logic that controls whether or not FNAME(invlpg) needs to flush
> fully into FNAME(invlpg) so that mmu_page_zap_pte() doesn't return a
> value.  This allows a future patch to redefine the return semantics for
> mmu_page_zap_pte() so that it can recursively zap orphaned child shadow
> pages for nested TDP MMUs.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

>
> ---
>  arch/x86/kvm/mmu/mmu.c         | 10 +++-------
>  arch/x86/kvm/mmu/paging_tmpl.h |  7 +++++--
>  2 files changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 76c5826e29a2..a91e8601594d 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2615,7 +2615,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>         }
>  }
>
> -static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
> +static void mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
>                              u64 *spte)
>  {
>         u64 pte;
> @@ -2631,13 +2631,9 @@ static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
>                         child = to_shadow_page(pte & PT64_BASE_ADDR_MASK);
>                         drop_parent_pte(child, spte);
>                 }
> -               return true;
> -       }
> -
> -       if (is_mmio_spte(pte))
> +       } else if (is_mmio_spte(pte)) {
>                 mmu_spte_clear_no_track(spte);
> -
> -       return false;
> +       }
>  }
>
>  static void kvm_mmu_page_unlink_children(struct kvm *kvm,
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 4dd6b1e5b8cf..3bb624a3dda9 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -895,6 +895,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
>  {
>         struct kvm_shadow_walk_iterator iterator;
>         struct kvm_mmu_page *sp;
> +       u64 old_spte;
>         int level;
>         u64 *sptep;
>
> @@ -917,7 +918,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
>                 sptep = iterator.sptep;
>
>                 sp = sptep_to_sp(sptep);
> -               if (is_last_spte(*sptep, level)) {
> +               old_spte = *sptep;
> +               if (is_last_spte(old_spte, level)) {
>                         pt_element_t gpte;
>                         gpa_t pte_gpa;
>
> @@ -927,7 +929,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
>                         pte_gpa = FNAME(get_level1_sp_gpa)(sp);
>                         pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
>
> -                       if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
> +                       mmu_page_zap_pte(vcpu->kvm, sp, sptep);
> +                       if (is_shadow_present_pte(old_spte))
>                                 kvm_flush_remote_tlbs_with_address(vcpu->kvm,
>                                         sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
>
> --
> 2.28.0
>


* Re: [PATCH v3 2/2] KVM: x86/MMU: Recursively zap nested TDP SPs when zapping last/only parent
  2020-09-23 22:14 ` [PATCH v3 2/2] KVM: x86/MMU: Recursively zap nested TDP SPs when zapping last/only parent Sean Christopherson
@ 2020-09-23 23:29   ` Ben Gardon
  2020-09-25 20:36     ` Paolo Bonzini
  0 siblings, 1 reply; 6+ messages in thread
From: Ben Gardon @ 2020-09-23 23:29 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, kvm, linux-kernel, Peter Shier

On Wed, Sep 23, 2020 at 3:14 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> From: Ben Gardon <bgardon@google.com>
>
> Recursively zap all to-be-orphaned children, unsynced or otherwise, when
> zapping a shadow page for a nested TDP MMU.  KVM currently only zaps the
> unsynced child pages, but not the synced ones.  This can create problems
> over time when running many nested guests because it leaves unlinked
> pages which will not be freed until the page quota is hit. With the
> default page quota of 20 shadow pages per 1000 guest pages, this looks
> like a memory leak and can degrade MMU performance.
>
> In a recent benchmark, substantial performance degradation was observed:
> An L1 guest was booted with 64G memory.
> 2G nested Windows guests were booted, 10 at a time for 20
> iterations (200 boots in total).
> Windows was used in this benchmark because Windows guests touch all of
> their memory on startup.
> By the end of the benchmark, the nested guests were taking ~10% longer
> to boot. With this patch there is no degradation in boot time.
> Without this patch the benchmark ends with hundreds of thousands of
> stale EPT02 pages cluttering up rmaps and the page hash map. As a
> result, VM shutdown is also much slower: deleting memslot 0 was
> observed to take over a minute. With this patch it takes just a
> few milliseconds.
>
> Cc: Peter Shier <pshier@google.com>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Ben Gardon <bgardon@google.com>
(I don't know if my review is useful here, but the rebase of this
patch looks correct! Thank you for preventing these from becoming
undead, Sean.)

> ---
>  arch/x86/kvm/mmu/mmu.c         | 30 +++++++++++++++++++++++-------
>  arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
>  2 files changed, 24 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a91e8601594d..e993d5cd4bc8 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2615,8 +2615,9 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>         }
>  }
>
> -static void mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
> -                            u64 *spte)
> +/* Returns the number of zapped non-leaf child shadow pages. */
> +static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
> +                           u64 *spte, struct list_head *invalid_list)
>  {
>         u64 pte;
>         struct kvm_mmu_page *child;
> @@ -2630,19 +2631,34 @@ static void mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
>                 } else {
>                         child = to_shadow_page(pte & PT64_BASE_ADDR_MASK);
>                         drop_parent_pte(child, spte);
> +
> +                       /*
> +                        * Recursively zap nested TDP SPs, parentless SPs are
> +                        * unlikely to be used again in the near future.  This
> +                        * avoids retaining a large number of stale nested SPs.
> +                        */
> +                       if (tdp_enabled && invalid_list &&
> +                           child->role.guest_mode && !child->parent_ptes.val)
> +                               return kvm_mmu_prepare_zap_page(kvm, child,
> +                                                               invalid_list);
>                 }
>         } else if (is_mmio_spte(pte)) {
>                 mmu_spte_clear_no_track(spte);
>         }
> +       return 0;
>  }
>
> -static void kvm_mmu_page_unlink_children(struct kvm *kvm,
> -                                        struct kvm_mmu_page *sp)
> +static int kvm_mmu_page_unlink_children(struct kvm *kvm,
> +                                       struct kvm_mmu_page *sp,
> +                                       struct list_head *invalid_list)
>  {
> +       int zapped = 0;
>         unsigned i;
>
>         for (i = 0; i < PT64_ENT_PER_PAGE; ++i)
> -               mmu_page_zap_pte(kvm, sp, sp->spt + i);
> +               zapped += mmu_page_zap_pte(kvm, sp, sp->spt + i, invalid_list);
> +
> +       return zapped;
>  }
>
>  static void kvm_mmu_unlink_parents(struct kvm *kvm, struct kvm_mmu_page *sp)
> @@ -2688,7 +2704,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
>         trace_kvm_mmu_prepare_zap_page(sp);
>         ++kvm->stat.mmu_shadow_zapped;
>         *nr_zapped = mmu_zap_unsync_children(kvm, sp, invalid_list);
> -       kvm_mmu_page_unlink_children(kvm, sp);
> +       *nr_zapped += kvm_mmu_page_unlink_children(kvm, sp, invalid_list);
>         kvm_mmu_unlink_parents(kvm, sp);
>
>         /* Zapping children means active_mmu_pages has become unstable. */
> @@ -5396,7 +5412,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
>                         u32 base_role = vcpu->arch.mmu->mmu_role.base.word;
>
>                         entry = *spte;
> -                       mmu_page_zap_pte(vcpu->kvm, sp, spte);
> +                       mmu_page_zap_pte(vcpu->kvm, sp, spte, NULL);
>                         if (gentry &&
>                             !((sp->role.word ^ base_role) & ~role_ign.word) &&
>                             rmap_can_add(vcpu))
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 3bb624a3dda9..e1066226b8f0 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -929,7 +929,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
>                         pte_gpa = FNAME(get_level1_sp_gpa)(sp);
>                         pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
>
> -                       mmu_page_zap_pte(vcpu->kvm, sp, sptep);
> +                       mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
>                         if (is_shadow_present_pte(old_spte))
>                                 kvm_flush_remote_tlbs_with_address(vcpu->kvm,
>                                         sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
> --
> 2.28.0
>


* Re: [PATCH v3 2/2] KVM: x86/MMU: Recursively zap nested TDP SPs when zapping last/only parent
  2020-09-23 23:29   ` Ben Gardon
@ 2020-09-25 20:36     ` Paolo Bonzini
  0 siblings, 0 replies; 6+ messages in thread
From: Paolo Bonzini @ 2020-09-25 20:36 UTC (permalink / raw)
  To: Ben Gardon, Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
	linux-kernel, Peter Shier

On 24/09/20 01:29, Ben Gardon wrote:
> Reviewed-by: Ben Gardon <bgardon@google.com>
> (I don't know if my review is useful here, but the rebase of this
> patch looks correct! Thank you for preventing these from becoming
> undead, Sean.)

It is; I had your patch on my todo list in case Sean didn't get to it,
but of course it's even better that you guys sorted it out. :)  I have
queued both, thanks.

Paolo

