From: Ben Gardon
Date: Wed, 9 Nov 2022 14:26:36 -0800
Subject: Re: [PATCH v5 11/14] KVM: arm64: Make block->table PTE changes parallel-aware
To: Oliver Upton
Cc: Marc Zyngier, James Morse, Alexandru Elisei,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm@vger.kernel.org, Reiji Watanabe, Ricardo Koller, David Matlack,
    Quentin Perret, Gavin Shan, Peter Xu, Will Deacon, Sean Christopherson,
    kvmarm@lists.linux.dev
In-Reply-To: <20221107215855.1895367-1-oliver.upton@linux.dev>
References: <20221107215644.1895162-1-oliver.upton@linux.dev>
 <20221107215855.1895367-1-oliver.upton@linux.dev>

On Mon, Nov 7, 2022 at 1:59 PM Oliver Upton wrote:
>
> In order to service stage-2 faults in parallel, stage-2 table walkers
> must take exclusive ownership of the PTE being worked on. An additional
> requirement of the architecture is that software must perform a
> 'break-before-make' operation when changing the block size used for
> mapping memory.
>
> Roll these two concepts together into helpers for performing a
> 'break-before-make' sequence. Use a special PTE value to indicate a PTE
> has been locked by a software walker. Additionally, use an atomic
> compare-exchange to 'break' the PTE when the stage-2 page tables are
> possibly shared with another software walker. Elide the DSB + TLBI if
> the evicted PTE was invalid (and thus not subject to break-before-make).
>
> All of the atomics do nothing for now, as the stage-2 walker isn't fully
> ready to perform parallel walks.
>
> Signed-off-by: Oliver Upton
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 80 +++++++++++++++++++++++++++++++++---
>  1 file changed, 75 insertions(+), 5 deletions(-)
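For context on the requirement named in the commit message: the
architecture's break-before-make sequence is, in outline, the following
(a pseudocode sketch only, not the patch's literal code; tlbi_ipa() is
an invented stand-in for the real TLB invalidation plumbing):

    /*
     * Break-before-make, in outline:
     *
     *     WRITE_ONCE(*ptep, INVALID);   // 1. break: unmap the old entry
     *     dsb(ishst);                   // 2. order the PTE write before the TLBI
     *     tlbi_ipa(addr);               // 3. evict stale TLB entries
     *     dsb(ish);                     // 4. wait for the invalidation to complete
     *     WRITE_ONCE(*ptep, new);       // 5. make: install the replacement
     *
     * Steps 2-4 are the "DSB + TLBI" the patch elides when the old entry
     * was already invalid, since the TLB cannot hold an entry for it.
     */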
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index f4dd77c6c97d..b9f0d792b8d9 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -49,6 +49,12 @@
>  #define KVM_INVALID_PTE_OWNER_MASK     GENMASK(9, 2)
>  #define KVM_MAX_OWNER_ID               1
>
> +/*
> + * Used to indicate a pte for which a 'break-before-make' sequence is in
> + * progress.
> + */
> +#define KVM_INVALID_PTE_LOCKED         BIT(10)
> +
>  struct kvm_pgtable_walk_data {
>         struct kvm_pgtable_walker       *walker;
>
> @@ -674,6 +680,11 @@ static bool stage2_pte_is_counted(kvm_pte_t pte)
>         return !!pte;
>  }
>
> +static bool stage2_pte_is_locked(kvm_pte_t pte)
> +{
> +       return !kvm_pte_valid(pte) && (pte & KVM_INVALID_PTE_LOCKED);
> +}
> +
>  static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
>  {
>         if (!kvm_pgtable_walk_shared(ctx)) {
> @@ -684,6 +695,64 @@ static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_
>         return cmpxchg(ctx->ptep, ctx->old, new) == ctx->old;
>  }
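To make the locking scheme concrete: an invalid PTE carrying BIT(10)
acts as a lock marker, and the 'break' is a compare-exchange against the
value the walker last read. Below is a minimal standalone model of that
idea. It is hypothetical C11-atomics code with invented names, where
atomic_compare_exchange_strong() and atomic_store_explicit() stand in
for the kernel's cmpxchg() and smp_store_release():

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t pte_t;

    #define PTE_VALID   ((pte_t)1 << 0)    /* hardware valid bit */
    #define PTE_LOCKED  ((pte_t)1 << 10)   /* software 'locked' marker */

    static bool pte_is_valid(pte_t pte)
    {
            return pte & PTE_VALID;
    }

    /* A locked pte is an invalid pte that carries the software marker. */
    static bool pte_is_locked(pte_t pte)
    {
            return !pte_is_valid(pte) && (pte & PTE_LOCKED);
    }

    /*
     * 'break': atomically swap the value we last read for the locked
     * marker.  Failure means another walker changed the pte first.
     */
    static bool try_break_pte(_Atomic pte_t *ptep, pte_t old)
    {
            return atomic_compare_exchange_strong(ptep, &old, PTE_LOCKED);
    }

    /* 'make': publish the replacement with release semantics. */
    static void make_pte(_Atomic pte_t *ptep, pte_t new_pte)
    {
            atomic_store_explicit(ptep, new_pte, memory_order_release);
    }

    int main(void)
    {
            _Atomic pte_t pte = PTE_VALID;  /* stand-in for a live mapping */

            if (try_break_pte(&pte, PTE_VALID)) {
                    /* The kernel's TLB invalidation would happen here. */
                    printf("pte locked? %d\n", pte_is_locked(pte));
                    make_pte(&pte, PTE_VALID);
            }
            return 0;
    }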
>
> +/**
> + * stage2_try_break_pte() - Invalidates a pte according to the
> + *                          'break-before-make' requirements of the
> + *                          architecture.
> + *
> + * @ctx: context of the visited pte.
> + * @mmu: stage-2 mmu
> + *
> + * Returns: true if the pte was successfully broken.
> + *
> + * If the removed pte was valid, performs the necessary serialization and TLB
> + * invalidation for the old value. For counted ptes, drops the reference count
> + * on the containing table page.
> + */
> +static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
> +                                struct kvm_s2_mmu *mmu)
> +{
> +       struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
> +
> +       if (stage2_pte_is_locked(ctx->old)) {
> +               /*
> +                * Should never occur if this walker has exclusive access to the
> +                * page tables.
> +                */
> +               WARN_ON(!kvm_pgtable_walk_shared(ctx));
> +               return false;
> +       }
> +
> +       if (!stage2_try_set_pte(ctx, KVM_INVALID_PTE_LOCKED))
> +               return false;
> +
> +       /*
> +        * Perform the appropriate TLB invalidation based on the evicted pte
> +        * value (if any).
> +        */
> +       if (kvm_pte_table(ctx->old, ctx->level))
> +               kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
> +       else if (kvm_pte_valid(ctx->old))
> +               kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
> +
> +       if (stage2_pte_is_counted(ctx->old))
> +               mm_ops->put_page(ctx->ptep);
> +
> +       return true;
> +}
> +
> +static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
> +{
> +       struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
> +
> +       WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
> +
> +       if (stage2_pte_is_counted(new))
> +               mm_ops->get_page(ctx->ptep);
> +
> +       smp_store_release(ctx->ptep, new);
> +}
> +
>  static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
>                            struct kvm_pgtable_mm_ops *mm_ops)
>  {
> @@ -812,17 +881,18 @@ static int stage2_map_walk_leaf(const struct kvm_pgtable_visit_ctx *ctx,
>         if (!childp)
>                 return -ENOMEM;
>
> +       if (!stage2_try_break_pte(ctx, data->mmu)) {
> +               mm_ops->put_page(childp);
> +               return -EAGAIN;
> +       }
> +
>         /*
>          * If we've run into an existing block mapping then replace it with
>          * a table. Accesses beyond 'end' that fall within the new table
>          * will be mapped lazily.
>          */
> -       if (stage2_pte_is_counted(ctx->old))
> -               stage2_put_pte(ctx, data->mmu, mm_ops);
> -
>         new = kvm_init_table_pte(childp, mm_ops);

Does it make any sense to move this before the "break" to minimize the
critical section in which the PTE is locked?

> -       mm_ops->get_page(ctx->ptep);
> -       smp_store_release(ctx->ptep, new);
> +       stage2_make_pte(ctx, new);
>
>         return 0;
>  }
> --
> 2.38.1.431.g37b22c650d-goog
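The reordering asked about above would look roughly like this. It is an
untested sketch of the tail of stage2_map_walk_leaf(); since
kvm_init_table_pte() only computes the new table PTE from childp and
never touches the live PTE, nothing obviously prevents hoisting it:

    new = kvm_init_table_pte(childp, mm_ops);  /* hoisted out of the locked window */

    if (!stage2_try_break_pte(ctx, data->mmu)) {
            mm_ops->put_page(childp);
            return -EAGAIN;
    }

    stage2_make_pte(ctx, new);

    return 0;

The PTE would then hold KVM_INVALID_PTE_LOCKED only across the TLB
invalidation and the final make step, shortening the window in which
concurrent walkers bail out with -EAGAIN.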