From: Ben Gardon <bgardon@google.com>
To: Oliver Upton <oliver.upton@linux.dev>
Cc: Marc Zyngier <maz@kernel.org>, James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
	Reiji Watanabe <reijiw@google.com>,
	Ricardo Koller <ricarkol@google.com>,
	David Matlack <dmatlack@google.com>,
	Quentin Perret <qperret@google.com>,
	Gavin Shan <gshan@redhat.com>, Peter Xu <peterx@redhat.com>,
	Will Deacon <will@kernel.org>,
	Sean Christopherson <seanjc@google.com>,
	kvmarm@lists.linux.dev
Subject: Re: [PATCH v5 11/14] KVM: arm64: Make block->table PTE changes parallel-aware
Date: Wed, 9 Nov 2022 14:26:36 -0800	[thread overview]
Message-ID: <CANgfPd9OSUfDGCQG8tHXTCYtrrCDnkgPZM6qPDaQF90bZsVCkA@mail.gmail.com> (raw)
In-Reply-To: <20221107215855.1895367-1-oliver.upton@linux.dev>

On Mon, Nov 7, 2022 at 1:59 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> In order to service stage-2 faults in parallel, stage-2 table walkers
> must take exclusive ownership of the PTE being worked on. An additional
> requirement of the architecture is that software must perform a
> 'break-before-make' operation when changing the block size used for
> mapping memory.
>
> Roll these two concepts together into helpers for performing a
> 'break-before-make' sequence. Use a special PTE value to indicate a PTE
> has been locked by a software walker. Additionally, use an atomic
> compare-exchange to 'break' the PTE when the stage-2 page tables are
> possibly shared with another software walker. Elide the DSB + TLBI if
> the evicted PTE was invalid (and thus not subject to break-before-make).
>
> All of the atomics do nothing for now, as the stage-2 walker isn't fully
> ready to perform parallel walks.
>
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 80 +++++++++++++++++++++++++++++++++---
>  1 file changed, 75 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index f4dd77c6c97d..b9f0d792b8d9 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -49,6 +49,12 @@
>  #define KVM_INVALID_PTE_OWNER_MASK     GENMASK(9, 2)
>  #define KVM_MAX_OWNER_ID               1
>
> +/*
> + * Used to indicate a pte for which a 'break-before-make' sequence is in
> + * progress.
> + */
> +#define KVM_INVALID_PTE_LOCKED         BIT(10)
> +
>  struct kvm_pgtable_walk_data {
>         struct kvm_pgtable_walker       *walker;
>
> @@ -674,6 +680,11 @@ static bool stage2_pte_is_counted(kvm_pte_t pte)
>         return !!pte;
>  }
>
> +static bool stage2_pte_is_locked(kvm_pte_t pte)
> +{
> +       return !kvm_pte_valid(pte) && (pte & KVM_INVALID_PTE_LOCKED);
> +}
> +
>  static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
>  {
>         if (!kvm_pgtable_walk_shared(ctx)) {
> @@ -684,6 +695,64 @@ static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_
>         return cmpxchg(ctx->ptep, ctx->old, new) == ctx->old;
>  }
>
> +/**
> + * stage2_try_break_pte() - Invalidates a pte according to the
> + *                         'break-before-make' requirements of the
> + *                         architecture.
> + *
> + * @ctx: context of the visited pte.
> + * @mmu: stage-2 mmu
> + *
> + * Returns: true if the pte was successfully broken.
> + *
> + * If the removed pte was valid, performs the necessary serialization and TLB
> + * invalidation for the old value. For counted ptes, drops the reference count
> + * on the containing table page.
> + */
> +static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
> +                                struct kvm_s2_mmu *mmu)
> +{
> +       struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
> +
> +       if (stage2_pte_is_locked(ctx->old)) {
> +               /*
> +                * Should never occur if this walker has exclusive access to the
> +                * page tables.
> +                */
> +               WARN_ON(!kvm_pgtable_walk_shared(ctx));
> +               return false;
> +       }
> +
> +       if (!stage2_try_set_pte(ctx, KVM_INVALID_PTE_LOCKED))
> +               return false;
> +
> +       /*
> +        * Perform the appropriate TLB invalidation based on the evicted pte
> +        * value (if any).
> +        */
> +       if (kvm_pte_table(ctx->old, ctx->level))
> +               kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
> +       else if (kvm_pte_valid(ctx->old))
> +               kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
> +
> +       if (stage2_pte_is_counted(ctx->old))
> +               mm_ops->put_page(ctx->ptep);
> +
> +       return true;
> +}
> +
> +static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
> +{
> +       struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
> +
> +       WARN_ON(!stage2_pte_is_locked(*ctx->ptep));
> +
> +       if (stage2_pte_is_counted(new))
> +               mm_ops->get_page(ctx->ptep);
> +
> +       smp_store_release(ctx->ptep, new);
> +}
> +
>  static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
>                            struct kvm_pgtable_mm_ops *mm_ops)
>  {
> @@ -812,17 +881,18 @@ static int stage2_map_walk_leaf(const struct kvm_pgtable_visit_ctx *ctx,
>         if (!childp)
>                 return -ENOMEM;
>
> +       if (!stage2_try_break_pte(ctx, data->mmu)) {
> +               mm_ops->put_page(childp);
> +               return -EAGAIN;
> +       }
> +
>         /*
>          * If we've run into an existing block mapping then replace it with
>          * a table. Accesses beyond 'end' that fall within the new table
>          * will be mapped lazily.
>          */
> -       if (stage2_pte_is_counted(ctx->old))
> -               stage2_put_pte(ctx, data->mmu, mm_ops);
> -
>         new = kvm_init_table_pte(childp, mm_ops);

Does it make any sense to move this before the "break" to minimize the
critical section in which the PTE is locked?


> -       mm_ops->get_page(ctx->ptep);
> -       smp_store_release(ctx->ptep, new);
> +       stage2_make_pte(ctx, new);
>
>         return 0;
>  }
> --
> 2.38.1.431.g37b22c650d-goog
>

Thread overview: 156+ messages

2022-11-07 21:56 [PATCH v5 00/14] KVM: arm64: Parallel stage-2 fault handling Oliver Upton
2022-11-07 21:56 ` [PATCH v5 01/14] KVM: arm64: Combine visitor arguments into a context structure Oliver Upton
2022-11-09 22:23   ` Ben Gardon
2022-11-09 22:48     ` Oliver Upton
2022-11-10  0:23   ` Gavin Shan
2022-11-10  0:42     ` Oliver Upton
2022-11-10  3:40       ` Gavin Shan
2022-11-07 21:56 ` [PATCH v5 02/14] KVM: arm64: Stash observed pte value in visitor context Oliver Upton
2022-11-09 22:23   ` Ben Gardon
2022-11-10  4:55   ` Gavin Shan
2022-11-07 21:56 ` [PATCH v5 03/14] KVM: arm64: Pass mm_ops through the " Oliver Upton
2022-11-09 22:23   ` Ben Gardon
2022-11-10  5:22   ` Gavin Shan
2022-11-10  5:30   ` Gavin Shan
2022-11-07 21:56 ` [PATCH v5 04/14] KVM: arm64: Don't pass kvm_pgtable through kvm_pgtable_walk_data Oliver Upton
2022-11-09 22:23   ` Ben Gardon
2022-11-10  5:30   ` Gavin Shan
2022-11-10  5:38     ` Oliver Upton
2022-11-07 21:56 ` [PATCH v5 05/14] KVM: arm64: Add a helper to tear down unlinked stage-2 subtrees Oliver Upton
2022-11-09 22:23   ` Ben Gardon
2022-11-09 22:54     ` Oliver Upton
2022-11-07 21:56 ` [PATCH v5 06/14] KVM: arm64: Use an opaque type for pteps Oliver Upton
2022-11-09 22:23   ` Ben Gardon
2022-11-07 21:56 ` [PATCH v5 07/14] KVM: arm64: Tear down unlinked stage-2 subtree after break-before-make Oliver Upton
2022-11-09 22:24   ` Ben Gardon
2022-11-07 21:56 ` [PATCH v5 08/14] KVM: arm64: Protect stage-2 traversal with RCU Oliver Upton
2022-11-09 21:53   ` Sean Christopherson
2022-11-09 23:55     ` Oliver Upton
2022-11-15 18:47       ` Ricardo Koller
2022-11-15 18:57         ` Oliver Upton
2022-11-09 22:25   ` Ben Gardon
2022-11-10 13:34     ` Marc Zyngier
     [not found]   ` <CGME20221114142915eucas1p258f3ca2c536bde712c068e96851468fd@eucas1p2.samsung.com>
2022-11-14 14:29     ` Marek Szyprowski
2022-11-14 17:42       ` Oliver Upton
2022-12-05  5:51         ` Mingwei Zhang
2022-12-05  7:47           ` Oliver Upton
2022-11-07 21:56 ` [PATCH v5 09/14] KVM: arm64: Atomically update stage 2 leaf attributes in parallel walks Oliver Upton
2022-11-09 22:26   ` Ben Gardon
2022-11-09 22:42     ` Sean Christopherson
2022-11-09 23:00       ` Ben Gardon
2022-11-10 13:40         ` Marc Zyngier
2022-11-07 21:56 ` [PATCH v5 10/14] KVM: arm64: Split init and set for table PTE Oliver Upton
2022-11-09 22:26   ` Ben Gardon
2022-11-09 23:00     ` Oliver Upton
2022-11-07 21:58 ` [PATCH v5 11/14] KVM: arm64: Make block->table PTE changes parallel-aware Oliver Upton
2022-11-09 22:26   ` Ben Gardon [this message]
2022-11-09 23:03     ` Oliver Upton
2022-11-07 21:59 ` [PATCH v5 12/14] KVM: arm64: Make leaf->leaf " Oliver Upton
2022-11-09 22:26   ` Ben Gardon
2022-11-07 22:00 ` [PATCH v5 13/14] KVM: arm64: Make table->block " Oliver Upton
2022-11-07 22:00 ` [PATCH v5 14/14] KVM: arm64: Handle stage-2 faults in parallel Oliver Upton
2022-11-11 15:47 ` [PATCH v5 00/14] KVM: arm64: Parallel stage-2 fault handling Marc Zyngier
