From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton <oupton@google.com>, Marc Zyngier <maz@kernel.org>,
	Ricardo Koller <ricarkol@google.com>,
	Reiji Watanabe <reijiw@google.com>,
	James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Will Deacon <will@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Jing Zhang <jingzhangos@google.com>,
	Colton Lewis <coltonlewis@google.com>,
	Raghavendra Rao Ananta <rananta@google.com>,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH v2 7/7] KVM: arm64: Create a fast stage-2 unmap path
Date: Mon,  6 Feb 2023 17:23:40 +0000	[thread overview]
Message-ID: <20230206172340.2639971-8-rananta@google.com> (raw)
In-Reply-To: <20230206172340.2639971-1-rananta@google.com>

The current implementation of the stage-2 unmap walker
traverses the entire page-table to clear and flush the TLBs
for each entry. This can be very expensive, especially if
the VM is not backed by hugepages. The unmap operation can be
made more efficient by disconnecting the table at the very
top (the level at which the largest block mapping can be hosted)
and doing the rest of the unmapping using free_removed_table().
If the system supports FEAT_TLBIRANGE, flush the entire range
that has been disconnected from the rest of the page-table.
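
As a rough illustration of the idea, here is a standalone user-space
toy, not the kernel code; every name in it is made up for the example.
The slow path visits and "flushes" every entry at every level, while
the fast path breaks a single top-level entry, issues one range flush,
and then frees the orphaned subtree without any further flushes:

#include <stdio.h>
#include <stdlib.h>

#define FANOUT     8   /* entries per table; 512 for a real 4K granule */
#define LEAF_LEVEL 3   /* tables at levels 0..2, leaf entries at 3     */

struct table { struct table *slot[FANOUT]; };

static struct table *build(int level)
{
	struct table *t = calloc(1, sizeof(*t));
	if (level < LEAF_LEVEL)
		for (int i = 0; i < FANOUT; i++)
			t->slot[i] = build(level + 1);
	return t;
}

/* Slow path: clear and "TLB-flush" every entry at every level. */
static unsigned long unmap_slow(struct table *t, int level)
{
	unsigned long flushes = 0;
	for (int i = 0; i < FANOUT; i++) {
		if (!t->slot[i])
			continue;
		flushes += unmap_slow(t->slot[i], level + 1);
		free(t->slot[i]);
		t->slot[i] = NULL;
		flushes++;	/* stands in for one per-entry TLBI */
	}
	return flushes;
}

/* Free an already-disconnected subtree: no flushes are needed. */
static void free_detached(struct table *t, int level)
{
	if (level < LEAF_LEVEL)
		for (int i = 0; i < FANOUT; i++)
			if (t->slot[i])
				free_detached(t->slot[i], level + 1);
	free(t);
}

int main(void)
{
	struct table *top = build(0);

	/* Slow: one flush per entry in the subtree under slot 0. */
	unsigned long n = unmap_slow(top->slot[0], 1);
	free(top->slot[0]);
	top->slot[0] = NULL;
	printf("slow unmap: %lu per-entry flushes\n", n + 1);

	/* Fast: break the single top-level entry, issue one range
	 * flush (FEAT_TLBIRANGE in the real code), then free the
	 * orphaned subtree at leisure. */
	struct table *sub = top->slot[1];
	top->slot[1] = NULL;
	printf("fast unmap: 1 break + 1 range flush\n");
	free_detached(sub, 1);

	free_detached(top, 0);	/* tear down what remains */
	return 0;
}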

Suggested-by: Ricardo Koller <ricarkol@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/kvm/hyp/pgtable.c | 44 ++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0858d1fa85d6b..af3729d0971f2 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1017,6 +1017,49 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	return 0;
 }
 
+/*
+ * The fast walker executes only if the unmap size exactly matches the
+ * largest supported block mapping (i.e. the granule size at
+ * KVM_PGTABLE_MIN_BLOCK_LEVEL), so that the hierarchy underneath
+ * KVM_PGTABLE_MIN_BLOCK_LEVEL can be disconnected from the rest of the
+ * page-table without traversing and unmapping every PTE at every
+ * level. The disconnected table is then freed using free_removed_table().
+ */
+static int fast_stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
+			       enum kvm_pgtable_walk_flags visit)
+{
+	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+	kvm_pte_t *childp = kvm_pte_follow(ctx->old, mm_ops);
+	struct kvm_s2_mmu *mmu = ctx->arg;
+
+	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_MIN_BLOCK_LEVEL)
+		return 0;
+
+	if (!stage2_try_break_pte(ctx, mmu))
+		return -EAGAIN;
+
+	/*
+	 * Take a reference back so that stage2_unmap_walker() can free
+	 * this table entry when visiting KVM_PGTABLE_MIN_BLOCK_LEVEL - 1.
+	 */
+	mm_ops->get_page(ctx->ptep);
+
+	mm_ops->free_removed_table(childp, ctx->level);
+	return 0;
+}
+
+static void kvm_pgtable_try_fast_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= fast_stage2_unmap_walker,
+		.arg	= pgt->mmu,
+		.flags	= KVM_PGTABLE_WALK_TABLE_PRE,
+	};
+
+	if (size == kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL))
+		kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
 	struct kvm_pgtable_walker walker = {
@@ -1025,6 +1068,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
+	kvm_pgtable_try_fast_stage2_unmap(pgt, addr, size);
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-- 
2.39.1.519.gcb327c4b5f-goog
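
An aside on the size check in kvm_pgtable_try_fast_stage2_unmap()
above: the fast path runs only when size equals the granule size at
KVM_PGTABLE_MIN_BLOCK_LEVEL. A quick standalone sketch of that
arithmetic, assuming a 4K granule; the constants below are hard-coded
for the example and are not the kernel's actual macros:

#include <stdio.h>
#include <stdint.h>

/* Assumed 4K-granule constants. With 4K pages each table holds 512
 * entries, so each level multiplies the mapped size by 512. */
#define PAGE_SHIFT 12
#define PTE_SHIFT  (PAGE_SHIFT - 3)	/* log2(512) = 9 */

/* Size mapped by a single entry at 'level' (levels 0..3). */
static uint64_t granule_size(int level)
{
	return (uint64_t)1 << (PAGE_SHIFT + (3 - level) * PTE_SHIFT);
}

int main(void)
{
	/* With a 4K granule the largest block mapping commonly sits
	 * at level 1, so the fast path would trigger only for unmaps
	 * of exactly 1GiB. */
	printf("level 1: %llu MiB\n", (unsigned long long)(granule_size(1) >> 20));
	printf("level 2: %llu MiB\n", (unsigned long long)(granule_size(2) >> 20));
	printf("level 3: %llu KiB\n", (unsigned long long)(granule_size(3) >> 10));
	return 0;
}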


Thread overview: 58+ messages

2023-02-06 17:23 [PATCH v2 0/7] KVM: arm64: Add support for FEAT_TLBIRANGE Raghavendra Rao Ananta
2023-02-06 17:23 ` [PATCH v2 1/7] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range Raghavendra Rao Ananta
2023-02-06 17:23 ` [PATCH v2 2/7] KVM: arm64: Add FEAT_TLBIRANGE support Raghavendra Rao Ananta
2023-03-30  1:19   ` Oliver Upton
2023-04-03 17:26     ` Raghavendra Rao Ananta
2023-04-04 18:41       ` Oliver Upton
2023-04-04 18:50         ` Oliver Upton
2023-04-04 21:39         ` Raghavendra Rao Ananta
2023-02-06 17:23 ` [PATCH v2 3/7] KVM: arm64: Implement __kvm_tlb_flush_range_vmid_ipa() Raghavendra Rao Ananta
2023-03-30  0:59   ` Oliver Upton
2023-04-03 21:08     ` Raghavendra Rao Ananta
2023-04-04 18:46       ` Oliver Upton
2023-04-04 20:50         ` Raghavendra Rao Ananta
2023-02-06 17:23 ` [PATCH v2 4/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range() Raghavendra Rao Ananta
2023-03-30  0:53   ` Oliver Upton
2023-04-03 21:23     ` Raghavendra Rao Ananta
2023-04-04 19:09       ` Oliver Upton
2023-04-04 20:59         ` Raghavendra Rao Ananta
2023-02-06 17:23 ` [PATCH v2 5/7] KVM: arm64: Flush only the memslot after write-protect Raghavendra Rao Ananta
2023-02-06 17:23 ` [PATCH v2 6/7] KVM: arm64: Break the table entries using TLBI range instructions Raghavendra Rao Ananta
2023-03-30  0:17   ` Oliver Upton
2023-04-03 21:25     ` Raghavendra Rao Ananta
2023-02-06 17:23 ` [PATCH v2 7/7] KVM: arm64: Create a fast stage-2 unmap path Raghavendra Rao Ananta [this message]
2023-03-30  0:42   ` Oliver Upton
2023-04-04 17:52     ` Raghavendra Rao Ananta
2023-04-04 19:19       ` Oliver Upton
2023-04-04 21:07         ` Raghavendra Rao Ananta
2023-04-04 21:30           ` Oliver Upton
2023-04-04 21:45             ` Raghavendra Rao Ananta