From: Ben Gardon <bgardon@google.com>
To: Oliver Upton <oupton@google.com>
Cc: kvmarm@lists.cs.columbia.edu, kvm <kvm@vger.kernel.org>,
	 Marc Zyngier <maz@kernel.org>, James Morse <james.morse@arm.com>,
	 Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	 linux-arm-kernel@lists.infradead.org,
	Peter Shier <pshier@google.com>,
	 Ricardo Koller <ricarkol@google.com>,
	Reiji Watanabe <reijiw@google.com>,
	 Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	 David Matlack <dmatlack@google.com>
Subject: Re: [RFC PATCH 10/17] KVM: arm64: Assume a table pte is already owned in post-order traversal
Date: Thu, 21 Apr 2022 09:11:37 -0700	[thread overview]
Message-ID: <CANgfPd-LZf1tkSiFTkJ-rww4Cmaign4bJRsg1KWm5eA2P5=j+A@mail.gmail.com> (raw)
In-Reply-To: <20220415215901.1737897-11-oupton@google.com>

On Fri, Apr 15, 2022 at 2:59 PM Oliver Upton <oupton@google.com> wrote:
>
> For parallel walks that collapse a table into a block, KVM ensures a
> locked invalid pte is visible to all observers in pre-order traversal.
> As such, there is no need to try breaking the pte again.
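
For reference, the "locked invalid pte" in question boils down to
cmpxchg()ing a reserved, invalid sentinel value into the table entry so
that concurrent walkers back off. Here is a minimal sketch of the idea;
the sentinel name and the simplified signature are my assumptions, not
necessarily what this series uses:

/* Assumed name for a software bit marking an invalid pte as owned. */
#define KVM_INVALID_PTE_LOCKED          BIT(10)

static bool stage2_try_break_pte_sketch(kvm_pte_t *ptep, kvm_pte_t old)
{
        /*
         * Whoever wins the race owns the entry; losers observe the
         * locked sentinel and back off (-EAGAIN in the patch below).
         */
        if (cmpxchg(ptep, old, KVM_INVALID_PTE_LOCKED) != old)
                return false;

        /*
         * The owner completes break-before-make: TLB invalidation for
         * the old entry happens before the replacement is installed.
         */
        return true;
}

Since the sentinel is made visible during pre-order traversal, the
post-order visitor already owns the pte, which is what lets the patch
below skip stage2_try_break_pte() when 'locked' is true.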

When you're doing the pre- and post-order traversals, are they
implemented as separate traversals from the root, or as a combined
walk in which non-leaf nodes are visited both on the way down and on
the way up?
I assume either could be made to work, but re-traversal from the root
probably minimizes TLB flushes, whereas a combined pre-and-post-order
walk would be more efficient?
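
For concreteness, the single-pass shape I have in mind would look
something like the following; this is a hypothetical walker with
illustrative names, not the actual pgtable.c code:

static int walk_level_sketch(u64 addr, u64 end, u32 level,
                             kvm_pte_t *ptep, struct walker_sketch *w)
{
        int ret;

        /* Leaf and empty entries get a single LEAF visit. */
        if (!kvm_pte_table(*ptep, level))
                return w->visit(addr, end, level, ptep, WALK_LEAF);

        /* Pre-order: visit the table entry on the way down... */
        ret = w->visit(addr, end, level, ptep, WALK_TABLE_PRE);
        if (ret)
                return ret;

        /*
         * ...recurse: walk_children_sketch() iterates the entries of
         * the child table, calling walk_level_sketch() on each with
         * its sub-range (address arithmetic elided here)...
         */
        ret = walk_children_sketch(addr, end, level + 1,
                                   kvm_pte_follow(*ptep, w->mm_ops), w);
        if (ret)
                return ret;

        /* ...and post-order: revisit the same entry on the way up. */
        return w->visit(addr, end, level, ptep, WALK_TABLE_POST);
}

With a walk shaped like this, the locked pte installed by the pre-order
visitor is naturally still held when the post-order visitor runs, with
no second descent from the root.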

>
> Directly set the pte if it has already been broken.
>
> Signed-off-by: Oliver Upton <oupton@google.com>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 22 ++++++++++++++++------
>  1 file changed, 16 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 146fc44acf31..121818d4c33e 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -924,7 +924,7 @@ static bool stage2_leaf_mapping_allowed(u64 addr, u64 end, u32 level,
>  static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>                                       kvm_pte_t *ptep, kvm_pte_t old,
>                                       struct stage2_map_data *data,
> -                                     bool shared)
> +                                     bool shared, bool locked)
>  {
>         kvm_pte_t new;
>         u64 granule = kvm_granule_size(level), phys = data->phys;
> @@ -948,7 +948,7 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
>         if (!stage2_pte_needs_update(old, new))
>                 return -EAGAIN;
>
> -       if (!stage2_try_break_pte(ptep, old, addr, level, shared, data))
> +       if (!locked && !stage2_try_break_pte(ptep, old, addr, level, shared, data))
>                 return -EAGAIN;
>
>         /* Perform CMOs before installation of the guest stage-2 PTE */
> @@ -987,7 +987,8 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
>  }
>
>  static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> -                               kvm_pte_t *old, struct stage2_map_data *data, bool shared)
> +                               kvm_pte_t *old, struct stage2_map_data *data, bool shared,
> +                               bool locked)
>  {
>         struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
>         kvm_pte_t *childp, pte;
> @@ -998,10 +999,13 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>                 return 0;
>         }
>
> -       ret = stage2_map_walker_try_leaf(addr, end, level, ptep, *old, data, shared);
> +       ret = stage2_map_walker_try_leaf(addr, end, level, ptep, *old, data, shared, locked);
>         if (ret != -E2BIG)
>                 return ret;
>
> +       /* We should never attempt installing a table in post-order */
> +       WARN_ON(locked);
> +
>         if (WARN_ON(level == KVM_PGTABLE_MAX_LEVELS - 1))
>                 return -EINVAL;
>
> @@ -1048,7 +1052,13 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
>                 childp = data->childp;
>                 data->anchor = NULL;
>                 data->childp = NULL;
> -               ret = stage2_map_walk_leaf(addr, end, level, ptep, old, data, shared);
> +
> +               /*
> +                * We are guaranteed exclusive access to the pte in post-order
> +                * traversal since the locked value was made visible to all
> +                * observers in stage2_map_walk_table_pre.
> +                */
> +               ret = stage2_map_walk_leaf(addr, end, level, ptep, old, data, shared, true);
>         } else {
>                 childp = kvm_pte_follow(*old, mm_ops);
>         }
> @@ -1087,7 +1097,7 @@ static int stage2_map_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep, kvm_
>         case KVM_PGTABLE_WALK_TABLE_PRE:
>                 return stage2_map_walk_table_pre(addr, end, level, ptep, old, data, shared);
>         case KVM_PGTABLE_WALK_LEAF:
> -               return stage2_map_walk_leaf(addr, end, level, ptep, old, data, shared);
> +               return stage2_map_walk_leaf(addr, end, level, ptep, old, data, shared, false);
>         case KVM_PGTABLE_WALK_TABLE_POST:
>                 return stage2_map_walk_table_post(addr, end, level, ptep, old, data, shared);
>         }
> --
> 2.36.0.rc0.470.gd361397f0d-goog
>

Thread overview: 55+ messages

2022-04-15 21:58 [RFC PATCH 00/17] KVM: arm64: Parallelize stage 2 fault handling Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 01/17] KVM: arm64: Directly read owner id field in stage2_pte_is_counted() Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 02/17] KVM: arm64: Only read the pte once per visit Oliver Upton
2022-04-21 16:12   ` Ben Gardon
2022-04-15 21:58 ` [RFC PATCH 03/17] KVM: arm64: Return the next table from map callbacks Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 04/17] KVM: arm64: Protect page table traversal with RCU Oliver Upton
2022-04-19  2:55   ` Ricardo Koller
2022-04-19  3:01     ` Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 05/17] KVM: arm64: Take an argument to indicate parallel walk Oliver Upton
2022-04-16 11:30   ` Marc Zyngier
2022-04-16 16:03     ` Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 06/17] KVM: arm64: Implement break-before-make sequence for parallel walks Oliver Upton
2022-04-20 16:55   ` Quentin Perret
2022-04-20 17:06     ` Oliver Upton
2022-04-21 16:57   ` Ben Gardon
2022-04-21 18:52     ` Oliver Upton
2022-04-26 21:32       ` Ben Gardon
2022-04-25 15:13   ` Sean Christopherson
2022-04-25 16:53     ` Oliver Upton
2022-04-25 18:16       ` Sean Christopherson
2022-04-15 21:58 ` [RFC PATCH 07/17] KVM: arm64: Enlighten perm relax path about parallel walks Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 08/17] KVM: arm64: Spin off helper for initializing table pte Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 09/17] KVM: arm64: Tear down unlinked page tables in parallel walk Oliver Upton
2022-04-21 13:21   ` Quentin Perret
2022-04-21 16:40     ` Oliver Upton
2022-04-22 16:00       ` Quentin Perret
2022-04-22 20:41         ` Oliver Upton
2022-05-03 14:17           ` Quentin Perret
2022-05-04  6:03             ` Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 10/17] KVM: arm64: Assume a table pte is already owned in post-order traversal Oliver Upton
2022-04-21 16:11   ` Ben Gardon [this message]
2022-04-21 17:16     ` Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 11/17] KVM: arm64: Move MMU cache init/destroy into helpers Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 12/17] KVM: arm64: Stuff mmu page cache in sub struct Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 13/17] KVM: arm64: Setup cache for stage2 page headers Oliver Upton
2022-04-15 21:58 ` [RFC PATCH 14/17] KVM: arm64: Punt last page reference to rcu callback for parallel walk Oliver Upton
2022-04-19  2:59   ` Ricardo Koller
2022-04-19  3:09     ` Ricardo Koller
2022-04-20  0:53       ` Oliver Upton
2022-09-08  0:52         ` David Matlack
2022-04-21 16:28   ` Ben Gardon
2022-04-15 21:58 ` [RFC PATCH 15/17] KVM: arm64: Allow parallel calls to kvm_pgtable_stage2_map() Oliver Upton
2022-04-15 21:59 ` [RFC PATCH 16/17] KVM: arm64: Enable parallel stage 2 MMU faults Oliver Upton
2022-04-21 16:35   ` Ben Gardon
2022-04-21 16:46     ` Oliver Upton
2022-04-21 17:03       ` Ben Gardon
2022-04-15 21:59 ` [RFC PATCH 17/17] TESTONLY: KVM: arm64: Add super lazy accounting of stage 2 table pages Oliver Upton
2022-04-15 23:35 ` [RFC PATCH 00/17] KVM: arm64: Parallelize stage 2 fault handling David Matlack
2022-04-16  0:04   ` Oliver Upton
2022-04-21 16:43     ` David Matlack
2022-04-16  6:23 ` Oliver Upton
2022-04-19 17:57 ` Ben Gardon
2022-04-19 18:36   ` Oliver Upton
2022-04-21 16:30     ` Ben Gardon
2022-04-21 16:37       ` Paolo Bonzini
