Date: Sun, 12 Mar 2023 11:35:01 +0000
Message-ID: <87a60i5hju.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Ricardo Koller <ricarkol@google.com>
Cc: pbonzini@redhat.com, oupton@google.com, yuzenghui@huawei.com,
	dmatlack@google.com, kvm@vger.kernel.org, kvmarm@lists.linux.dev,
	qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev,
	seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com,
	eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com,
	rananta@google.com, bgardon@google.com, ricarkol@gmail.com,
	Shaoqin Huang <shahuang@redhat.com>
Subject: Re: [PATCH v6 04/12] KVM: arm64: Add kvm_pgtable_stage2_split()
In-Reply-To: <20230307034555.39733-5-ricarkol@google.com>
References: <20230307034555.39733-1-ricarkol@google.com>
	<20230307034555.39733-5-ricarkol@google.com>

On Tue, 07 Mar 2023 03:45:47 +0000,
Ricardo Koller <ricarkol@google.com> wrote:
>
> Add a new stage2 function, kvm_pgtable_stage2_split(), for splitting a
> range of huge pages. This will be used for eager-splitting huge pages
> into PAGE_SIZE pages. The goal is to avoid having to split huge pages
> on write-protection faults, and instead use this function to do it
> ahead of time for large ranges (e.g., all guest memory in 1G chunks at
> a time).
>
> No functional change intended. This new function will be used in a
> subsequent commit.
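To make the "large ranges in chunks" usage concrete, here is a rough,
purely illustrative userspace-style sketch of the kind of loop I'd
expect the eventual caller to implement; split_chunk() stands in for
kvm_pgtable_stage2_split(), and all names and sizes are invented:

#include <stdint.h>
#include <stdio.h>

#define CHUNK_SIZE	(UINT64_C(1) << 30)	/* 1GiB, as in the commit message */

/* Stand-in for kvm_pgtable_stage2_split(); illustration only. */
static int split_chunk(uint64_t addr, uint64_t size)
{
	printf("split IPA range [%#llx, %#llx)\n",
	       (unsigned long long)addr, (unsigned long long)(addr + size));
	return 0;
}

int main(void)
{
	uint64_t start = 0, end = 5 * (CHUNK_SIZE / 2);	/* 2.5GiB of "guest memory" */
	uint64_t addr, size;

	for (addr = start; addr < end; addr += size) {
		/* a real caller would top up its page-table memcache here */
		size = (end - addr) < CHUNK_SIZE ? (end - addr) : CHUNK_SIZE;
		if (split_chunk(addr, size))
			return 1;
	}
	return 0;
}

The real caller (added later in the series, per the last sentence of
the commit message) would of course operate on the VM's stage-2 page
tables with a pre-filled memcache rather than printing.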
Same comment as before about the usefulness of the last sentence of
the commit message.

>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>
> Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h |  30 +++++++
>  arch/arm64/kvm/hyp/pgtable.c         | 113 +++++++++++++++++++++++++++
>  2 files changed, 143 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index b7b3fc0fa7a5..40e323a718fc 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -665,6 +665,36 @@ bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr);
>   */
>  int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
>  
> +/**
> + * kvm_pgtable_stage2_split() - Split a range of huge pages into leaf PTEs pointing
> + *				to PAGE_SIZE guest pages.
> + * @pgt:	 Page-table structure initialised by kvm_pgtable_stage2_init().
> + * @addr:	 Intermediate physical address from which to split.
> + * @size:	 Size of the range.
> + * @mc:		 Cache of pre-allocated and zeroed memory from which to allocate
> + *		 page-table pages.
> + * @mc_capacity: Number of pages in @mc.
> + *
> + * @addr and the end (@addr + @size) are effectively aligned down and up to
> + * the top level huge-page block size. This is an example using 1GB
> + * huge-pages and 4KB granules.
> + *
> + *                          [---input range---]
> + *                          :                 :
> + * [--1G block pte--][--1G block pte--][--1G block pte--][--1G block pte--]
> + *                          :                 :
> + *                   [--2MB--][--2MB--][--2MB--][--2MB--]
> + *                          :                 :
> + *                   [ ][ ][:][ ][ ][ ][ ][ ][:][ ][ ][ ]
> + *                          :                 :

So here, what alignment do we effectively get?

> + *
> + * Return: 0 on success, negative error code on failure. Note that
> + * kvm_pgtable_stage2_split() is best effort: it tries to break as many
> + * blocks in the input range as allowed by @mc_capacity.
> + */
> +int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
> +			     void *mc, u64 mc_capacity);
> +
>  /**
>   * kvm_pgtable_walk() - Walk a page-table.
>   * @pgt:	Page-table structure initialised by kvm_pgtable_*_init().
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 6bdfcb671b32..3149b98d1701 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -1259,6 +1259,119 @@ kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
>  	return pgtable;
>  }
>  
> +struct stage2_split_data {
> +	struct kvm_s2_mmu		*mmu;
> +	void				*memcache;
> +	u64				mc_capacity;

Why isn't this a pointer to a *real* memcache structure?

> +};
> +
> +/*
> + * Get the number of page-tables needed to replace a block with a
> + * fully populated tree, up to the PTE level, at particular level.
> + */
> +static inline int stage2_block_get_nr_page_tables(u32 level)

Please drop the inline. The compiler will figure it out.

> +{
> +	if (WARN_ON_ONCE(level < KVM_PGTABLE_MIN_BLOCK_LEVEL ||
> +			 level >= KVM_PGTABLE_MAX_LEVELS))
> +		return -EINVAL;

Move this check to the 'default' case below.

> +
> +	switch (level) {
> +	case 1:
> +		return PTRS_PER_PTE + 1;
> +	case 2:
> +		return 1;

This is odd. Replacing a block by a table always requires
'PTRS_PER_PTE + 1' pages. Why 1? If this is some special treatment
for level-2 mappings, please spell it out.
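(For reference, back-of-the-envelope arithmetic for the level-1 case,
assuming 4KB granules and therefore PTRS_PER_PTE == 512: fully
splitting one 1GB block down to 4KB PTEs needs

	1 page of PMDs + 512 pages of PTEs = PTRS_PER_PTE + 1 = 513 pages

which is the figure the 'case 1' branch returns.)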
> +	case 3:
> +		return 0;
> +	default:
> +		return -EINVAL;
> +	};
> +}
> +
> +static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
> +			       enum kvm_pgtable_walk_flags visit)
> +{
> +	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
> +	struct stage2_split_data *data = ctx->arg;
> +	kvm_pte_t pte = ctx->old, new, *childp;
> +	enum kvm_pgtable_prot prot;
> +	void *mc = data->memcache;
> +	u32 level = ctx->level;
> +	bool force_pte;
> +	int nr_pages;
> +	u64 phys;
> +
> +	/* No huge-pages exist at the last level */
> +	if (level == KVM_PGTABLE_MAX_LEVELS - 1)
> +		return 0;

Why the check for level 3 in the previous function if we never get
there?

> +
> +	/* We only split valid block mappings */
> +	if (!kvm_pte_valid(pte))
> +		return 0;
> +
> +	nr_pages = stage2_block_get_nr_page_tables(level);
> +	if (nr_pages < 0)
> +		return nr_pages;
> +
> +	if (data->mc_capacity >= nr_pages) {
> +		/* Build a tree mapped down to the PTE granularity. */
> +		force_pte = true;
> +	} else {
> +		/*
> +		 * Don't force PTEs. This requires a single page of PMDs at the
> +		 * PUD level, or a single page of PTEs at the PMD level. If we
> +		 * are at the PUD level, the PTEs will be created recursively.
> +		 */

I don't understand how you reach this 'single page' conclusion. You
need to explain why you get there.

> +		force_pte = false;
> +		nr_pages = 1;
> +	}
> +
> +	if (data->mc_capacity < nr_pages)
> +		return -ENOMEM;
> +
> +	phys = kvm_pte_to_phys(pte);
> +	prot = kvm_pgtable_stage2_pte_prot(pte);
> +
> +	childp = kvm_pgtable_stage2_create_unlinked(data->mmu->pgt, phys,
> +						    level, prot, mc, force_pte);
> +	if (IS_ERR(childp))
> +		return PTR_ERR(childp);
> +
> +	if (!stage2_try_break_pte(ctx, data->mmu)) {
> +		kvm_pgtable_stage2_free_unlinked(mm_ops, childp, level);
> +		mm_ops->put_page(childp);
> +		return -EAGAIN;
> +	}
> +
> +	/*
> +	 * Note, the contents of the page table are guaranteed to be made
> +	 * visible before the new PTE is assigned because stage2_make_pte()
> +	 * writes the PTE using smp_store_release().
> +	 */
> +	new = kvm_init_table_pte(childp, mm_ops);
> +	stage2_make_pte(ctx, new);
> +	dsb(ishst);
> +	data->mc_capacity -= nr_pages;
> +	return 0;
> +}
> +
> +int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
> +			     void *mc, u64 mc_capacity)
> +{
> +	struct stage2_split_data split_data = {
> +		.mmu = pgt->mmu,
> +		.memcache = mc,
> +		.mc_capacity = mc_capacity,
> +	};
> +
> +	struct kvm_pgtable_walker walker = {
> +		.cb = stage2_split_walker,
> +		.flags = KVM_PGTABLE_WALK_LEAF,
> +		.arg = &split_data,
> +	};
> +
> +	return kvm_pgtable_walk(pgt, addr, size, &walker);
> +}
> +
>  int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
>  			      struct kvm_pgtable_mm_ops *mm_ops,
>  			      enum kvm_pgtable_stage2_flags flags,

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.