From: Colton Lewis <coltonlewis@google.com>
To: kvm@vger.kernel.org
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,  Marc Zyngier <maz@kernel.org>,
	Oliver Upton <oliver.upton@linux.dev>,
	 James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	 Zenghui Yu <yuzenghui@huawei.com>,
	linux-arm-kernel@lists.infradead.org,
	 linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
	 Colton Lewis <coltonlewis@google.com>
Subject: [PATCH 3/3] KVM: arm64: Skip break phase when we have FEAT_BBM level 2
Date: Fri,  2 Jun 2023 17:01:47 +0000
Message-ID: <20230602170147.1541355-4-coltonlewis@google.com>
In-Reply-To: <20230602170147.1541355-1-coltonlewis@google.com>

Skip the break phase of break-before-make when the CPU has FEAT_BBM
level 2. This avoids some expensive TLB invalidation and serialization
work and should significantly improve performance when changing block
size.
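
For background (an illustrative sketch, not code from this patch), the
conventional break-before-make sequence and the relaxed flow this
series enables look roughly like:

	/* Classic break-before-make for replacing a valid mapping. */
	WRITE_ONCE(*ptep, 0);		/* break: unmap the old entry */
	dsb(ishst);			/* make the unmap visible */
	/* TLBI for the old entry, then dsb(ish) to wait for completion */
	WRITE_ONCE(*ptep, new);		/* make: install the new entry */

	/*
	 * With FEAT_BBM level 2, a block size change can skip the break
	 * step: write the new entry directly, then invalidate only if
	 * stale TLB entries could still matter.
	 */
	WRITE_ONCE(*ptep, new);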

The Arm Architecture Reference Manual, section D5.10.1, states under
the heading "Support levels for changing block size" that FEAT_BBM
level 2 support means changing the block size does not break coherency,
ordering guarantees, or uniprocessor semantics.
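
For reference, the supported level is advertised in ID_AA64MMFR2_EL1.BBM
(bits [55:52]), reporting 0, 1 or 2 for the corresponding FEAT_BBM
level. A rough sketch of the check backing the capability added in
patch 1/3 (the macro names here are for illustration only):

	u64 mmfr2 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
	unsigned int bbm = cpuid_feature_extract_unsigned_field(mmfr2,
					ID_AA64MMFR2_EL1_BBM_SHIFT);

	/* Level 2: changing block size needs no break phase. */
	bool has_bbm2 = (bbm >= 2);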

The break phase used a compare-and-exchange operation to serialize
access to the PTE. To preserve that serialization when the break phase
is skipped, an analogous compare-and-exchange is introduced in the make
phase, and callers are updated to handle the fact that installing the
new PTE can now fail.
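
For reference, the serialization both phases rely on is a simple
compare-and-exchange on the PTE; a minimal sketch of the idea (the real
helper is stage2_try_set_pte() in pgtable.c, which also handles
non-shared walks):

	/* Succeeds only if nobody changed the entry under our feet. */
	static bool try_set_pte(kvm_pte_t *ptep, kvm_pte_t old, kvm_pte_t new)
	{
		return cmpxchg(ptep, old, new) == old;
	}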

Because the new PTE may have different permissions than the old one,
only the minimum necessary TLB invalidations are performed, as
summarized below.
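
As a summary of the resulting invalidation policy in the make path (see
stage2_try_make_pte() below):

	/*
	 *   old entry was a table             -> flush the whole VMID
	 *   old entry was a valid leaf and
	 *   the permissions changed           -> flush just that IPA/level
	 *   otherwise                         -> no TLB invalidation
	 */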

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 arch/arm64/kvm/hyp/pgtable.c | 58 +++++++++++++++++++++++++++++++-----
 1 file changed, 51 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 8acab89080af9..6778e3df697f7 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -643,6 +643,11 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
 	return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
 }

+static bool stage2_has_bbm_level2(void)
+{
+	return cpus_have_const_cap(ARM64_HAS_STAGE2_BBM2);
+}
+
 #define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))

 static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot prot,
@@ -730,7 +735,7 @@ static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_
  * @ctx: context of the visited pte.
  * @mmu: stage-2 mmu
  *
- * Returns: true if the pte was successfully broken.
+ * Returns: true if the pte was successfully broken or there is no need.
  *
  * If the removed pte was valid, performs the necessary serialization and TLB
  * invalidation for the old value. For counted ptes, drops the reference count
@@ -750,6 +755,10 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
 		return false;
 	}

+	/* There is no need to break the pte. */
+	if (stage2_has_bbm_level2())
+		return true;
+
 	if (!stage2_try_set_pte(ctx, KVM_INVALID_PTE_LOCKED))
 		return false;

@@ -771,16 +780,45 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
 	return true;
 }

-static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
+static bool stage2_pte_perms_equal(kvm_pte_t p1, kvm_pte_t p2)
+{
+	u64 perms1 = p1 & KVM_PGTABLE_PROT_RWX;
+	u64 perms2 = p2 & KVM_PGTABLE_PROT_RWX;
+
+	return perms1 == perms2;
+}
+
+/**
+ * stage2_try_make_pte() - Attempts to install a new pte.
+ *
+ * @ctx: context of the visited pte.
+ * @new: new pte to install
+ *
+ * Returns: true if the pte was successfully installed
+ *
+ * If the old pte had different permissions, perform appropriate TLB
+ * invalidation for the old value. For counted ptes, drops the
+ * reference count on the containing table page.
+ */
+static bool stage2_try_make_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu, kvm_pte_t new)
 {
 	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;

-	WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
+	if (!stage2_has_bbm_level2())
+		WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
+
+	if (!stage2_try_set_pte(ctx, new))
+		return false;
+
+	if (kvm_pte_table(ctx->old, ctx->level))
+		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+	else if (kvm_pte_valid(ctx->old) && !stage2_pte_perms_equal(ctx->old, new))
+		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, mmu, ctx->addr, ctx->level);

 	if (stage2_pte_is_counted(new))
 		mm_ops->get_page(ctx->ptep);

-	smp_store_release(ctx->ptep, new);
+	return true;
 }

 static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
@@ -879,7 +917,8 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
 	    stage2_pte_executable(new))
 		mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule);

-	stage2_make_pte(ctx, new);
+	if (!stage2_try_make_pte(ctx, data->mmu, new))
+		return -EAGAIN;

 	return 0;
 }
@@ -934,7 +973,9 @@ static int stage2_map_walk_leaf(const struct kvm_pgtable_visit_ctx *ctx,
 	 * will be mapped lazily.
 	 */
 	new = kvm_init_table_pte(childp, mm_ops);
-	stage2_make_pte(ctx, new);
+
+	if (!stage2_try_make_pte(ctx, data->mmu, new))
+		return -EAGAIN;

 	return 0;
 }
@@ -1385,7 +1426,10 @@ static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	 * writes the PTE using smp_store_release().
 	 */
 	new = kvm_init_table_pte(childp, mm_ops);
-	stage2_make_pte(ctx, new);
+
+	if (!stage2_try_make_pte(ctx, mmu, new))
+		return -EAGAIN;
+
 	dsb(ishst);
 	return 0;
 }
--
2.41.0.rc0.172.g3f132b7071-goog

Thread overview: 10+ messages
2023-06-02 17:01 [PATCH 0/3] Relax break-before-make use with FEAT_BBM Colton Lewis
2023-06-02 17:01 ` [PATCH 1/3] arm64: Add a capability for FEAT_BBM level 2 Colton Lewis
2023-06-05 15:07   ` Robin Murphy
2023-06-02 17:01 ` [PATCH 2/3] KVM: arm64: Clear possible conflict aborts Colton Lewis
2023-06-09 15:44   ` Oliver Upton
2023-06-02 17:01 ` Colton Lewis [this message]
2023-06-04  8:23   ` [PATCH 3/3] KVM: arm64: Skip break phase when we have FEAT_BBM level 2 Marc Zyngier
2023-06-05 21:36     ` Oliver Upton
2023-06-08 17:21       ` Will Deacon
2023-06-09 14:59         ` Oliver Upton
