From: Ricardo Koller <ricarkol@google.com>
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com,
yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com,
catalin.marinas@arm.com, andrew.jones@linux.dev,
seanjc@google.com, alexandru.elisei@arm.com,
suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com,
reijiw@google.com, rananta@google.com, bgardon@google.com,
ricarkol@gmail.com, Ricardo Koller <ricarkol@google.com>
Subject: [PATCH v3 04/12] KVM: arm64: Add kvm_pgtable_stage2_split()
Date: Wed, 15 Feb 2023 17:40:38 +0000
Message-ID: <20230215174046.2201432-5-ricarkol@google.com>
In-Reply-To: <20230215174046.2201432-1-ricarkol@google.com>
Add a new stage2 function, kvm_pgtable_stage2_split(), for splitting a
range of huge pages. This will be used for eager-splitting huge pages
into PAGE_SIZE pages. The goal is to avoid having to split huge pages
on write-protection faults, and instead use this function to do it
ahead of time for large ranges (e.g., all guest memory in 1G chunks at
a time).
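
As an illustration of that intended usage, a caller could look roughly
like the sketch below (a minimal sketch only; eager_split_range() and the
1G chunking are hypothetical here, the real callers arrive later in this
series, and in practice the memcache would be topped up between chunks):

  /*
   * Hypothetical caller: eagerly split all blocks in [addr, end) in 1G
   * chunks, using a pre-filled memcache of @mc_capacity pages.
   */
  static int eager_split_range(struct kvm_pgtable *pgt, u64 addr, u64 end,
                               void *mc, u64 mc_capacity)
  {
          u64 next;
          int ret;

          for (; addr < end; addr = next) {
                  next = min(ALIGN(addr + 1, SZ_1G), end);

                  /* Best effort: splits as many blocks as @mc allows. */
                  ret = kvm_pgtable_stage2_split(pgt, addr, next - addr,
                                                 mc, mc_capacity);
                  if (ret)
                          return ret;
          }

          return 0;
  }
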
No functional change intended. This new function will be used in a
subsequent commit.
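
For sizing the memcache capacity argument (@mc_capacity below), the
accounting is roughly as follows, assuming 4KB granules: splitting one 1G
block down to PTEs takes one page for the PMD table plus 512 pages for the
PTE tables (513 pages total), while a lone 2M block takes a single page.
A sketch of that calculation (split_nr_page_tables() is a hypothetical
helper, not part of this patch):

  /*
   * Hypothetical sketch: page-table pages needed to split @size bytes of
   * block mappings all the way down to PTEs (4KB granules assumed).
   */
  static u64 split_nr_page_tables(u64 size)
  {
          u64 n = 0;

          if (size >= PUD_SIZE)
                  n += DIV_ROUND_UP(size, PUD_SIZE);      /* PMD tables */
          n += DIV_ROUND_UP(size, PMD_SIZE);              /* PTE tables */

          return n;
  }
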
Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
arch/arm64/include/asm/kvm_pgtable.h | 30 ++++++++
arch/arm64/kvm/hyp/pgtable.c | 105 +++++++++++++++++++++++++++
2 files changed, 135 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 2ea397ad3e63..b28489aa0994 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -658,6 +658,36 @@ bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr);
*/
int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
+/**
+ * kvm_pgtable_stage2_split() - Split a range of huge pages into leaf PTEs pointing
+ * to PAGE_SIZE guest pages.
+ * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @addr: Intermediate physical address from which to split.
+ * @size: Size of the range.
+ * @mc: Cache of pre-allocated and zeroed memory from which to allocate
+ * page-table pages.
+ * @mc_capacity: Number of pages in @mc.
+ *
+ * @addr and the end (@addr + @size) are effectively aligned down and up to
+ * the top-level huge-page block size. The following is an example using 1GB
+ * huge pages and 4KB granules.
+ *
+ * [---input range---]
+ * : :
+ * [--1G block pte--][--1G block pte--][--1G block pte--][--1G block pte--]
+ * : :
+ * [--2MB--][--2MB--][--2MB--][--2MB--]
+ * : :
+ * [ ][ ][:][ ][ ][ ][ ][ ][:][ ][ ][ ]
+ * : :
+ *
+ * Return: 0 on success, negative error code on failure. Note that
+ * kvm_pgtable_stage2_split() is best effort: it tries to break as many
+ * blocks in the input range as allowed by @mc_capacity.
+ */
+int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
+ void *mc, u64 mc_capacity);
+
/**
* kvm_pgtable_walk() - Walk a page-table.
* @pgt: Page-table structure initialised by kvm_pgtable_*_init().
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index fed314f2b320..e2fb78398b3d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1229,6 +1229,111 @@ int kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
return 0;
}
+struct stage2_split_data {
+ struct kvm_s2_mmu *mmu;
+ void *memcache;
+ u64 mc_capacity;
+};
+
+/*
+ * Get the number of page-tables needed to replace a block with a fully
+ * populated tree, up to the PTE level, at a particular level.
+ */
+static inline u32 stage2_block_get_nr_page_tables(u32 level)
+{
+ switch (level) {
+ /* There are no blocks at level 0 */
+ case 1: return 1 + PTRS_PER_PTE;
+ case 2: return 1;
+ case 3: return 0;
+ default:
+ WARN_ON_ONCE(1);
+ return ~0;
+ }
+}
+
+static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
+ enum kvm_pgtable_walk_flags visit)
+{
+ struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+ struct stage2_split_data *data = ctx->arg;
+ kvm_pte_t pte = ctx->old, new, *childp;
+ enum kvm_pgtable_prot prot;
+ void *mc = data->memcache;
+ u32 level = ctx->level;
+ u64 phys, nr_pages;
+ bool force_pte;
+ int ret;
+
+ /* No huge-pages exist at the last level */
+ if (level == KVM_PGTABLE_MAX_LEVELS - 1)
+ return 0;
+
+ /* We only split valid block mappings */
+ if (!kvm_pte_valid(pte))
+ return 0;
+
+ nr_pages = stage2_block_get_nr_page_tables(level);
+ if (data->mc_capacity >= nr_pages) {
+ /* Build a tree mapped down to the PTE granularity. */
+ force_pte = true;
+ } else {
+ /*
+ * Don't force PTEs. This requires a single page of PMDs at the
+ * PUD level, or a single page of PTEs at the PMD level. If we
+ * are at the PUD level, the PTEs will be created recursively.
+ */
+ force_pte = false;
+ nr_pages = 1;
+ }
+
+ if (data->mc_capacity < nr_pages)
+ return -ENOMEM;
+
+ phys = kvm_pte_to_phys(pte);
+ prot = kvm_pgtable_stage2_pte_prot(pte);
+
+ ret = kvm_pgtable_stage2_create_unlinked(data->mmu->pgt, &new, phys,
+ level, prot, mc, force_pte);
+ if (ret)
+ return ret;
+
+ if (!stage2_try_break_pte(ctx, data->mmu)) {
+ childp = kvm_pte_follow(new, mm_ops);
+ kvm_pgtable_stage2_free_unlinked(mm_ops, childp, level);
+ mm_ops->put_page(childp);
+ return -EAGAIN;
+ }
+
+ /*
+ * Note, the contents of the page table are guaranteed to be made
+ * visible before the new PTE is assigned because stage2_make_pte()
+ * writes the PTE using smp_store_release().
+ */
+ stage2_make_pte(ctx, new);
+ dsb(ishst);
+ data->mc_capacity -= nr_pages;
+ return 0;
+}
+
+int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
+ void *mc, u64 mc_capacity)
+{
+ struct stage2_split_data split_data = {
+ .mmu = pgt->mmu,
+ .memcache = mc,
+ .mc_capacity = mc_capacity,
+ };
+
+ struct kvm_pgtable_walker walker = {
+ .cb = stage2_split_walker,
+ .flags = KVM_PGTABLE_WALK_LEAF,
+ .arg = &split_data,
+ };
+
+ return kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
struct kvm_pgtable_mm_ops *mm_ops,
enum kvm_pgtable_stage2_flags flags,
--
2.39.1.637.g21b0678d19-goog