* [PATCH] KVM: x86/mmu: Recurse down to 1GB level when zapping pages in a range
@ 2022-03-18 16:42 Paolo Bonzini
0 siblings, 0 replies; only message in thread
From: Paolo Bonzini @ 2022-03-18 16:42 UTC (permalink / raw)
To: linux-kernel, kvm
The recursive zapping that was reintroduced by reverting "KVM: x86/mmu:
Zap only TDP MMU leafs in kvm_zap_gfn_range()" can be expensive. Allow
zap_gfn_range() to recurse down only to the PDPTE (1GB) level, so that
periodic yielding is possible at a finer granularity.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/tdp_mmu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 87d8910c9ac2..53689603078a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -926,8 +926,10 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
/*
* No need to try to step down in the iterator when zapping all SPTEs,
* zapping the top-level non-leaf SPTEs will recurse on their children.
+ * However, do not recurse above the 1GB level: this bounds the cost of
+ * tdp_mmu_set_spte's recursion and preserves opportunities to yield.
*/
- int min_level = zap_all ? root->role.level : PG_LEVEL_4K;
+ int min_level = zap_all ? PG_LEVEL_1G : PG_LEVEL_4K;
end = min(end, tdp_mmu_max_gfn_host());
--
2.31.1
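The trade-off the patch makes can be sketched with a toy model. This is not the kernel's TDP MMU iterator; the function names, the counters, and the scaled-down fan-out are all illustrative. It shows why capping unyielding recursion at min_level = PG_LEVEL_1G creates many more reschedule points than recursing from the root, while zapping the same number of leaf entries:

```c
#include <assert.h>

/* x86 page-table level numbering, as in the kernel. */
#define PG_LEVEL_4K 1
#define PG_LEVEL_2M 2
#define PG_LEVEL_1G 3

/* Hypothetical scaled-down fan-out; a real x86 page table has 512 entries. */
#define FANOUT 4

/* Illustrative counters: leaf SPTEs zapped, and points where the walker
 * could have yielded the CPU. */
static int zapped_entries;
static int yield_points;

/* Tear down one subtree recursively with no yielding inside, standing in
 * for tdp_mmu_set_spte() recursing on a non-leaf SPTE's children. */
static void zap_subtree(int level)
{
	if (level == PG_LEVEL_4K) {
		zapped_entries++;
		return;
	}
	for (int i = 0; i < FANOUT; i++)
		zap_subtree(level - 1);
}

/* Walk the range at min_level granularity: zap one subtree per iteration,
 * with a chance to yield in between.  Returns the iteration count. */
static long zap_range(int top_level, int min_level)
{
	long iters = 1;

	for (int l = top_level; l > min_level; l--)
		iters *= FANOUT;	/* subtrees visited by the walker */
	for (long i = 0; i < iters; i++) {
		zap_subtree(min_level);
		yield_points++;		/* reschedule opportunity */
	}
	return iters;
}
```

With a 4-level root, min_level equal to the root zaps everything in one unyielding recursion (one yield point), while min_level = PG_LEVEL_1G zaps the same leaves across FANOUT iterations, each short enough to yield after.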