From: "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>
To: Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Marc Zyngier <maz@kernel.org>, James Morse <james.morse@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Paul Mackerras <paulus@ozlabs.org>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Janosch Frank <frankja@linux.ibm.com>,
	David Hildenbrand <david@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	Joerg Roedel <joro@8bytes.org>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 12/13] KVM: Optimize gfn lookup in kvm_zap_gfn_range()
Date: Mon, 20 Sep 2021 23:39:00 +0200
Message-ID: <062df8ac9eb280440a5f0c11159616b1bbb1c2c4.1632171479.git.maciej.szmigiero@oracle.com>
In-Reply-To: <cover.1632171478.git.maciej.szmigiero@oracle.com>

From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>

Introduce a memslots gfn upper bound operation and use it to optimize
kvm_zap_gfn_range().
This way the handler can quickly look up the slots intersecting a given gfn
range instead of doing a linear scan of the whole memslot set.
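
For illustration only (not part of the patch), here is a minimal sketch of
the intended caller pattern for the new iterator; it mirrors the
kvm_zap_gfn_range() hunk below. Since the iterator only guarantees that the
returned slots *possibly* intersect the requested range, the caller still
clamps each slot to the range and skips non-overlapping ones. The
visit_slots_in_range() name is made up for the example:

static void visit_slots_in_range(struct kvm_memslots *slots,
				 gfn_t start, gfn_t end)
{
	int idx = slots->node_idx;
	struct rb_node *node;

	kvm_for_each_memslot_in_gfn_range(node, slots, start, end) {
		struct kvm_memory_slot *slot =
			container_of(node, struct kvm_memory_slot,
				     gfn_node[idx]);
		gfn_t s = max(start, slot->base_gfn);
		gfn_t e = min(end, slot->base_gfn + slot->npages);

		/* "Possibly intersecting" may include a non-overlapping slot. */
		if (s >= e)
			continue;

		/* ... act on gfns [s, e) of this slot ... */
	}
}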

Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
---
 arch/x86/kvm/mmu/mmu.c   | 11 +++++--
 include/linux/kvm_host.h | 69 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a05e581ef210..f2859988d630 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5724,18 +5724,25 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	int i;
 	bool flush = false;
 
+	if (WARN_ON_ONCE(gfn_end <= gfn_start))
+		return;
+
 	write_lock(&kvm->mmu_lock);
 
 	kvm_inc_notifier_count(kvm, gfn_start, gfn_end);
 
 	if (kvm_memslots_have_rmaps(kvm)) {
 		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-			int bkt;
+			int idx;
+			struct rb_node *node;
 
 			slots = __kvm_memslots(kvm, i);
-			kvm_for_each_memslot(memslot, bkt, slots) {
+			idx = slots->node_idx;
+
+			kvm_for_each_memslot_in_gfn_range(node, slots, gfn_start, gfn_end) {
 				gfn_t start, end;
 
+				memslot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
 				start = max(gfn_start, memslot->base_gfn);
 				end = min(gfn_end, memslot->base_gfn + memslot->npages);
 				if (start >= end)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6433efff447a..9ae5f7341cf5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -833,6 +833,75 @@ struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
 	return NULL;
 }
 
+static inline
+struct rb_node *kvm_memslots_gfn_upper_bound(struct kvm_memslots *slots, gfn_t gfn)
+{
+	int idx = slots->node_idx;
+	struct rb_node *node, *result = NULL;
+
+	for (node = slots->gfn_tree.rb_node; node; ) {
+		struct kvm_memory_slot *slot;
+
+		slot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
+		if (gfn < slot->base_gfn) {
+			result = node;
+			node = node->rb_left;
+		} else
+			node = node->rb_right;
+	}
+
+	return result;
+}
+
+static inline
+struct rb_node *kvm_for_each_in_gfn_first(struct kvm_memslots *slots, gfn_t start)
+{
+	struct rb_node *node;
+
+	/*
+	 * Find the slot with the lowest gfn that can possibly intersect with
+	 * the range, so we'll ideally have slot start <= range start
+	 */
+	node = kvm_memslots_gfn_upper_bound(slots, start);
+	if (node) {
+		struct rb_node *pnode;
+
+		/*
+		 * A NULL previous node means that the very first slot
+		 * already has a higher start gfn.
+		 * In this case slot start > range start.
+		 */
+		pnode = rb_prev(node);
+		if (pnode)
+			node = pnode;
+	} else {
+		/* a NULL node below means no slots */
+		node = rb_last(&slots->gfn_tree);
+	}
+
+	return node;
+}
+
+static inline
+bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *node, gfn_t end)
+{
+	struct kvm_memory_slot *memslot;
+
+	memslot = container_of(node, struct kvm_memory_slot, gfn_node[slots->node_idx]);
+
+	/*
+	 * If this slot starts beyond or at the end of the range, so does
+	 * every next one.
+	 */
+	return memslot->base_gfn >= end;
+}
+
+/* Iterate over each memslot *possibly* intersecting [start, end) range */
+#define kvm_for_each_memslot_in_gfn_range(node, slots, start, end)	\
+	for (node = kvm_for_each_in_gfn_first(slots, start);		\
+	     node && !kvm_for_each_in_gfn_no_more(slots, node, end);	\
+	     node = rb_next(node))					\
+
 /*
  * KVM_SET_USER_MEMORY_REGION ioctl allows the following operations:
  * - create a new memory slot

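A worked example of the first-node lookup above (illustrative gfn values,
not taken from the patch): with slots based at gfns 0x100, 0x300 and 0x600
and a zap range of [0x350, 0x700), kvm_memslots_gfn_upper_bound() returns
the 0x600 slot and rb_prev() steps back to the 0x300 slot, which may still
overlap the start of the range, so iteration begins there and stops as soon
as a slot's base_gfn reaches 0x700. If the range instead started below
0x100, rb_prev() would return NULL and iteration would begin at that first
slot. If no slot started above the range start at all, rb_last() would pick
the highest slot, the only one that could still overlap, and the caller's
clamping check skips it if it does not.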