From: Ben Gardon <bgardon@google.com>
To: Sean Christopherson <seanjc@google.com>
Cc: LKML <linux-kernel@vger.kernel.org>, kvm <kvm@vger.kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>, Peter Xu <peterx@redhat.com>,
	Peter Shier <pshier@google.com>,
	Peter Feiner <pfeiner@google.com>,
	Junaid Shahid <junaids@google.com>,
	Jim Mattson <jmattson@google.com>,
	Yulei Zhang <yulei.kernel@gmail.com>,
	Wanpeng Li <kernellwp@gmail.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Xiao Guangrong <xiaoguangrong.eric@gmail.com>
Subject: Re: [PATCH 12/24] kvm: x86/kvm: RCU dereference tdp mmu page table links
Date: Tue, 26 Jan 2021 10:17:40 -0800	[thread overview]
Message-ID: <CANgfPd8kVn8Oho+CXJ_4RY6Jn7MLo8hUh_8JkuMF3NgMQq3cag@mail.gmail.com> (raw)
In-Reply-To: <YAsaIAhbjzoKcQlR@google.com>

On Fri, Jan 22, 2021 at 10:32 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Jan 12, 2021, Ben Gardon wrote:
> > In order to protect TDP MMU PT memory with RCU, ensure that page table
> > links are properly rcu_dereferenced.
> >
> > Reviewed-by: Peter Feiner <pfeiner@google.com>
> >
> > Signed-off-by: Ben Gardon <bgardon@google.com>
> > ---
> >  arch/x86/kvm/mmu/tdp_iter.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> > index 87b7e16911db..82855613ffa0 100644
> > --- a/arch/x86/kvm/mmu/tdp_iter.c
> > +++ b/arch/x86/kvm/mmu/tdp_iter.c
> > @@ -49,6 +49,8 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
> >   */
> >  u64 *spte_to_child_pt(u64 spte, int level)
> >  {
> > +     u64 *child_pt;
> > +
> >       /*
> >        * There's no child entry if this entry isn't present or is a
> >        * last-level entry.
> > @@ -56,7 +58,9 @@ u64 *spte_to_child_pt(u64 spte, int level)
> >       if (!is_shadow_present_pte(spte) || is_last_spte(spte, level))
> >               return NULL;
> >
> > -     return __va(spte_to_pfn(spte) << PAGE_SHIFT);
> > +     child_pt = __va(spte_to_pfn(spte) << PAGE_SHIFT);
> > +
> > +     return rcu_dereference(child_pt);
>
> This is what bugs me the most about the RCU usage.  We're reaping the functional
> benefits of RCU without doing the grunt work to truly RCU-ify the TDP MMU.  The
> above rcu_dereference() barely scratches the surface of what's being protected
> by RCU.  There are already multiple mechanisms that protect the page tables,
> throwing RCU into the mix without fully integrating RCU makes for simple code
> and avoids reinventing the wheel (big thumbs up), but ends up adding complexity
> to an already complex system.  E.g. the lockless walks in the old MMU are
> complex on the surface, but I find them easier to think through because they
> explicitly rely on the same mechanism (remote TLB flush) that is used to protect
> guest usage of the page tables.
>
> Ideally, I think struct kvm_mmu_page's 'u64 *spt' would be annotated with __rcu,
> as that would provide a high level of enforcement and would also highlight where
> we're using other mechanisms to ensure correctness.  E.g. dereferencing root->spt
> in kvm_tdp_mmu_get_vcpu_root_hpa() relies on the root being pinned by
> get_tdp_mmu_vcpu_root(), and _that_ in turn relies on holding the rwlock for write.
> Unfortunately, since kvm_mmu_page is shared with the old MMU, annotating ->spt
> that way doesn't work well.  We could employ a union to make it work, but that'd
> probably do more harm than good.
>
> The middle ground would be to annotate pt_path and sptep in struct tdp_iter.
> That gets a decent chunk of the enforcement and also helps highlight what's
> being protected with RCU.  Assuming we end up going with RCU, I think this
> single rcu_dereference should be replaced with something like the below patch.

Thank you for explaining your thought process here. You make an
excellent point that this results in code that is substantially less
self-documenting than it could be. Your patch below should also
markedly improve the automated checker's ability to validate the
RCU usage. I'll happily include it in the next version of this
series. I appreciate how it makes every reference to a page table
entry an explicit RCU dereference; omitting those dereferences was
certainly an error in the original patch.

>
> diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> index 82855613ffa0..e000642d938d 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.c
> +++ b/arch/x86/kvm/mmu/tdp_iter.c
> @@ -12,7 +12,7 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
>  {
>         iter->sptep = iter->pt_path[iter->level - 1] +
>                 SHADOW_PT_INDEX(iter->gfn << PAGE_SHIFT, iter->level);
> -       iter->old_spte = READ_ONCE(*iter->sptep);
> +       iter->old_spte = READ_ONCE(*rcu_dereference(iter->sptep));
>  }
>
>  static gfn_t round_gfn_for_level(gfn_t gfn, int level)
> @@ -34,7 +34,7 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
>         iter->root_level = root_level;
>         iter->min_level = min_level;
>         iter->level = root_level;
> -       iter->pt_path[iter->level - 1] = root_pt;
> +       iter->pt_path[iter->level - 1] = (tdp_ptep_t)root_pt;
>
>         iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
>         tdp_iter_refresh_sptep(iter);
> @@ -47,9 +47,9 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
>   * address of the child page table referenced by the SPTE. Returns null if
>   * there is no such entry.
>   */
> -u64 *spte_to_child_pt(u64 spte, int level)
> +tdp_ptep_t spte_to_child_pt(u64 spte, int level)
>  {
> -       u64 *child_pt;
> +       tdp_ptep_t child_pt;
>
>         /*
>          * There's no child entry if this entry isn't present or is a
> @@ -58,9 +58,9 @@ u64 *spte_to_child_pt(u64 spte, int level)
>         if (!is_shadow_present_pte(spte) || is_last_spte(spte, level))
>                 return NULL;
>
> -       child_pt = __va(spte_to_pfn(spte) << PAGE_SHIFT);
> +       child_pt = (tdp_ptep_t)__va(spte_to_pfn(spte) << PAGE_SHIFT);
>
> -       return rcu_dereference(child_pt);
> +       return child_pt;
>  }
>
>  /*
> @@ -69,7 +69,7 @@ u64 *spte_to_child_pt(u64 spte, int level)
>   */
>  static bool try_step_down(struct tdp_iter *iter)
>  {
> -       u64 *child_pt;
> +       tdp_ptep_t child_pt;
>
>         if (iter->level == iter->min_level)
>                 return false;
> @@ -78,7 +78,7 @@ static bool try_step_down(struct tdp_iter *iter)
>          * Reread the SPTE before stepping down to avoid traversing into page
>          * tables that are no longer linked from this entry.
>          */
> -       iter->old_spte = READ_ONCE(*iter->sptep);
> +       iter->old_spte = READ_ONCE(*rcu_dereference(iter->sptep));
>
>         child_pt = spte_to_child_pt(iter->old_spte, iter->level);
>         if (!child_pt)
> @@ -112,7 +112,7 @@ static bool try_step_side(struct tdp_iter *iter)
>         iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
>         iter->goal_gfn = iter->gfn;
>         iter->sptep++;
> -       iter->old_spte = READ_ONCE(*iter->sptep);
> +       iter->old_spte = READ_ONCE(*rcu_dereference(iter->sptep));
>
>         return true;
>  }
> @@ -175,11 +175,11 @@ void tdp_iter_refresh_walk(struct tdp_iter *iter)
>         if (iter->gfn > goal_gfn)
>                 goal_gfn = iter->gfn;
>
> -       tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
> +       tdp_iter_start(iter, rcu_dereference(iter->pt_path[iter->root_level - 1]),
>                        iter->root_level, iter->min_level, goal_gfn);
>  }
>
> -u64 *tdp_iter_root_pt(struct tdp_iter *iter)
> +tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter)
>  {
>         return iter->pt_path[iter->root_level - 1];
>  }
> diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
> index 47170d0dc98e..bf882dab8ec5 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.h
> +++ b/arch/x86/kvm/mmu/tdp_iter.h
> @@ -7,6 +7,8 @@
>
>  #include "mmu.h"
>
> +typedef u64 __rcu *tdp_ptep_t;
> +
>  /*
>   * A TDP iterator performs a pre-order walk over a TDP paging structure.
>   */
> @@ -17,9 +19,9 @@ struct tdp_iter {
>          */
>         gfn_t goal_gfn;
>         /* Pointers to the page tables traversed to reach the current SPTE */
> -       u64 *pt_path[PT64_ROOT_MAX_LEVEL];
> +       tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL];
>         /* A pointer to the current SPTE */
> -       u64 *sptep;
> +       tdp_ptep_t sptep;
>         /* The lowest GFN mapped by the current SPTE */
>         gfn_t gfn;
>         /* The level of the root page given to the iterator */
> @@ -49,12 +51,12 @@ struct tdp_iter {
>  #define for_each_tdp_pte(iter, root, root_level, start, end) \
>         for_each_tdp_pte_min_level(iter, root, root_level, PG_LEVEL_4K, start, end)
>
> -u64 *spte_to_child_pt(u64 pte, int level);
> +tdp_ptep_t spte_to_child_pt(u64 pte, int level);
>
>  void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
>                     int min_level, gfn_t goal_gfn);
>  void tdp_iter_next(struct tdp_iter *iter);
>  void tdp_iter_refresh_walk(struct tdp_iter *iter);
> -u64 *tdp_iter_root_pt(struct tdp_iter *iter);
> +tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter);
>
>  #endif /* __KVM_X86_MMU_TDP_ITER_H */
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 45160ff84e91..27b850904230 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -509,7 +509,7 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
>                                            struct tdp_iter *iter,
>                                            u64 new_spte)
>  {
> -       u64 *root_pt = tdp_iter_root_pt(iter);
> +       tdp_ptep_t root_pt = tdp_iter_root_pt(iter);
>         struct kvm_mmu_page *root = sptep_to_sp(root_pt);
>         int as_id = kvm_mmu_page_as_id(root);
>
>

