From: Ricardo Koller <ricarkol@google.com>
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, dmatlack@google.com, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com
Cc: kvmarm@lists.linux.dev, ricarkol@gmail.com, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Subject: [RFC PATCH 02/12] KVM: arm64: Allow visiting block PTEs in post-order
Date: Sat, 12 Nov 2022 08:17:04 +0000
Message-ID: <20221112081714.2169495-3-ricarkol@google.com>
In-Reply-To: <20221112081714.2169495-1-ricarkol@google.com>

The page table walker does not visit block PTEs in post-order. But
there are some cases where doing so would be beneficial, for example:
breaking a 1G block PTE into a full tree in post-order avoids visiting
the new tree.

Allow post-order visits of block PTEs. This will be used in a
subsequent commit for eagerly breaking huge pages.

Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h |  4 ++--
 arch/arm64/kvm/hyp/nvhe/setup.c      |  2 +-
 arch/arm64/kvm/hyp/pgtable.c         | 25 ++++++++++++-------------
 3 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index e2edeed462e8..d2e4a5032146 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -255,7 +255,7 @@ struct kvm_pgtable {
  *                                      entries.
  * @KVM_PGTABLE_WALK_TABLE_PRE:         Visit table entries before their
  *                                      children.
- * @KVM_PGTABLE_WALK_TABLE_POST:        Visit table entries after their
+ * @KVM_PGTABLE_WALK_POST:              Visit leaf or table entries after their
  *                                      children.
  * @KVM_PGTABLE_WALK_SHARED:            Indicates the page-tables may be shared
  *                                      with other software walkers.
@@ -263,7 +263,7 @@ struct kvm_pgtable {
 enum kvm_pgtable_walk_flags {
         KVM_PGTABLE_WALK_LEAF                   = BIT(0),
         KVM_PGTABLE_WALK_TABLE_PRE              = BIT(1),
-        KVM_PGTABLE_WALK_TABLE_POST             = BIT(2),
+        KVM_PGTABLE_WALK_POST                   = BIT(2),
         KVM_PGTABLE_WALK_SHARED                 = BIT(3),
 };

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index b47d969ae4d3..b0c1618d053b 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -265,7 +265,7 @@ static int fix_hyp_pgtable_refcnt(void)
 {
         struct kvm_pgtable_walker walker = {
                 .cb     = fix_hyp_pgtable_refcnt_walker,
-                .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+                .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_POST,
                 .arg    = pkvm_pgtable.mm_ops,
         };

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b16107bf917c..1b371f6dbac2 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -206,16 +206,15 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
         if (!table) {
                 data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
                 data->addr += kvm_granule_size(level);
-                goto out;
+        } else {
+                childp = (kvm_pteref_t)kvm_pte_follow(ctx.old, mm_ops);
+                ret = __kvm_pgtable_walk(data, mm_ops, childp, level + 1);
+                if (ret)
+                        goto out;
         }

-        childp = (kvm_pteref_t)kvm_pte_follow(ctx.old, mm_ops);
-        ret = __kvm_pgtable_walk(data, mm_ops, childp, level + 1);
-        if (ret)
-                goto out;
-
-        if (ctx.flags & KVM_PGTABLE_WALK_TABLE_POST)
-                ret = kvm_pgtable_visitor_cb(data, &ctx, KVM_PGTABLE_WALK_TABLE_POST);
+        if (ctx.flags & KVM_PGTABLE_WALK_POST)
+                ret = kvm_pgtable_visitor_cb(data, &ctx, KVM_PGTABLE_WALK_POST);

 out:
         return ret;
@@ -494,7 +493,7 @@ u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
         struct kvm_pgtable_walker walker = {
                 .cb     = hyp_unmap_walker,
                 .arg    = &unmapped,
-                .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+                .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_POST,
         };

         if (!pgt->mm_ops->page_count)
@@ -542,7 +541,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
 {
         struct kvm_pgtable_walker walker = {
                 .cb     = hyp_free_walker,
-                .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+                .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_POST,
         };

         WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
@@ -1003,7 +1002,7 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
         struct kvm_pgtable_walker walker = {
                 .cb     = stage2_unmap_walker,
                 .arg    = pgt,
-                .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+                .flags  = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_POST,
         };

         return kvm_pgtable_walk(pgt, addr, size, &walker);
@@ -1234,7 +1233,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
         struct kvm_pgtable_walker walker = {
                 .cb     = stage2_free_walker,
                 .flags  = KVM_PGTABLE_WALK_LEAF |
-                          KVM_PGTABLE_WALK_TABLE_POST,
+                          KVM_PGTABLE_WALK_POST,
         };

         WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
@@ -1249,7 +1248,7 @@ void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pg
         struct kvm_pgtable_walker walker = {
                 .cb     = stage2_free_walker,
                 .flags  = KVM_PGTABLE_WALK_LEAF |
-                          KVM_PGTABLE_WALK_TABLE_POST,
+                          KVM_PGTABLE_WALK_POST,
         };
         struct kvm_pgtable_walk_data data = {
                 .walker = &walker,
--
2.38.1.431.g37b22c650d-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm