From: Ben Gardon <bgardon@google.com>
To: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: LKML <linux-kernel@vger.kernel.org>, kvm <kvm@vger.kernel.org>,
	Cannon Matthews <cannonmatthews@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>, Peter Xu <peterx@redhat.com>,
	Peter Shier <pshier@google.com>,
	Peter Feiner <pfeiner@google.com>,
	Junaid Shahid <junaids@google.com>,
	Jim Mattson <jmattson@google.com>,
	Yulei Zhang <yulei.kernel@gmail.com>,
	Wanpeng Li <kernellwp@gmail.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Xiao Guangrong <xiaoguangrong.eric@gmail.com>
Subject: Re: [PATCH 13/22] kvm: mmu: Support invalidate range MMU notifier for TDP MMU
Date: Wed, 30 Sep 2020 16:15:17 -0700	[thread overview]
Message-ID: <CANgfPd8mH7XpNzCbObD-XO_Pzc0TK6oNQpTw9rgSdqBV-4trFw@mail.gmail.com> (raw)
In-Reply-To: <20200930170354.GF32672@linux.intel.com>

On Wed, Sep 30, 2020 at 10:04 AM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Fri, Sep 25, 2020 at 02:22:53PM -0700, Ben Gardon wrote:
> > In order to interoperate correctly with the rest of KVM and other Linux
> > subsystems, the TDP MMU must correctly handle various MMU notifiers. Add
> > hooks to handle the invalidate range family of MMU notifiers.
> >
> > Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
> > machine. This series introduced no new failures.
> >
> > This series can be viewed in Gerrit at:
> >       https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538
> >
> > Signed-off-by: Ben Gardon <bgardon@google.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c     |  9 ++++-
> >  arch/x86/kvm/mmu/tdp_mmu.c | 80 +++++++++++++++++++++++++++++++++++---
> >  arch/x86/kvm/mmu/tdp_mmu.h |  3 ++
> >  3 files changed, 86 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 52d661a758585..0ddfdab942554 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -1884,7 +1884,14 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
> >  int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end,
> >                       unsigned flags)
> >  {
> > -     return kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp);
> > +     int r;
> > +
> > +     r = kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp);
> > +
> > +     if (kvm->arch.tdp_mmu_enabled)
> > +             r |= kvm_tdp_mmu_zap_hva_range(kvm, start, end);
>
> Similar to an earlier question, is this intentionally additive, or can this
> instead be:
>
>         if (kvm->arch.tdp_mmu_enabled)
>                 r = kvm_tdp_mmu_zap_hva_range(kvm, start, end);
>         else
>                 r = kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp);
>

It is intentionally additive so the legacy/shadow MMU can still handle
any nested (shadow-paged) mappings for the range.
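
As a rough sketch of the intent (just mirroring the hunk above, with
comments added for illustration; not a new proposal):

	/* The shadow/legacy MMU path always runs; it may hold nested mappings. */
	r = kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp);

	/* The TDP MMU maps L1; OR in whether it also requires a TLB flush. */
	if (kvm->arch.tdp_mmu_enabled)
		r |= kvm_tdp_mmu_zap_hva_range(kvm, start, end);

	return r;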

> > +
> > +     return r;
> >  }
> >
> >  int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 557e780bdf9f9..1cea58db78a13 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -60,7 +60,7 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
> >  }
> >
> >  static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
> > -                       gfn_t start, gfn_t end);
> > +                       gfn_t start, gfn_t end, bool can_yield);
> >
> >  static void free_tdp_mmu_root(struct kvm *kvm, struct kvm_mmu_page *root)
> >  {
> > @@ -73,7 +73,7 @@ static void free_tdp_mmu_root(struct kvm *kvm, struct kvm_mmu_page *root)
> >
> >       list_del(&root->link);
> >
> > -     zap_gfn_range(kvm, root, 0, max_gfn);
> > +     zap_gfn_range(kvm, root, 0, max_gfn, false);
> >
> >       free_page((unsigned long)root->spt);
> >       kmem_cache_free(mmu_page_header_cache, root);
> > @@ -361,9 +361,14 @@ static bool tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
> >   * non-root pages mapping GFNs strictly within that range. Returns true if
> >   * SPTEs have been cleared and a TLB flush is needed before releasing the
> >   * MMU lock.
> > + * If can_yield is true, will release the MMU lock and reschedule if the
> > + * scheduler needs the CPU or there is contention on the MMU lock. If this
> > + * function cannot yield, it will not release the MMU lock or reschedule and
> > + * the caller must ensure it does not supply too large a GFN range, or the
> > + * operation can cause a soft lockup.
> >   */
> >  static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
> > -                       gfn_t start, gfn_t end)
> > +                       gfn_t start, gfn_t end, bool can_yield)
> >  {
> >       struct tdp_iter iter;
> >       bool flush_needed = false;
> > @@ -387,7 +392,10 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
> >               handle_changed_spte(kvm, as_id, iter.gfn, iter.old_spte, 0,
> >                                   iter.level);
> >
> > -             flush_needed = !tdp_mmu_iter_cond_resched(kvm, &iter);
> > +             if (can_yield)
> > +                     flush_needed = !tdp_mmu_iter_cond_resched(kvm, &iter);
>
>                 flush_needed = !can_yield || !tdp_mmu_iter_cond_resched(kvm, &iter);
>
> > +             else
> > +                     flush_needed = true;
> >       }
> >       return flush_needed;
> >  }
> > @@ -410,7 +418,7 @@ bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
> >                */
> >               get_tdp_mmu_root(kvm, root);
> >
> > -             flush = zap_gfn_range(kvm, root, start, end) || flush;
> > +             flush = zap_gfn_range(kvm, root, start, end, true) || flush;
> >
> >               put_tdp_mmu_root(kvm, root);
> >       }
> > @@ -551,3 +559,65 @@ int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu, int write, int map_writable,
> >
> >       return ret;
> >  }
> > +
> > +static int kvm_tdp_mmu_handle_hva_range(struct kvm *kvm, unsigned long start,
> > +             unsigned long end, unsigned long data,
> > +             int (*handler)(struct kvm *kvm, struct kvm_memory_slot *slot,
> > +                            struct kvm_mmu_page *root, gfn_t start,
> > +                            gfn_t end, unsigned long data))
> > +{
> > +     struct kvm_memslots *slots;
> > +     struct kvm_memory_slot *memslot;
> > +     struct kvm_mmu_page *root;
> > +     int ret = 0;
> > +     int as_id;
> > +
> > +     for_each_tdp_mmu_root(kvm, root) {
> > +             /*
> > +              * Take a reference on the root so that it cannot be freed if
> > +              * this thread releases the MMU lock and yields in this loop.
> > +              */
> > +             get_tdp_mmu_root(kvm, root);
> > +
> > +             as_id = kvm_mmu_page_as_id(root);
> > +             slots = __kvm_memslots(kvm, as_id);
> > +             kvm_for_each_memslot(memslot, slots) {
> > +                     unsigned long hva_start, hva_end;
> > +                     gfn_t gfn_start, gfn_end;
> > +
> > +                     hva_start = max(start, memslot->userspace_addr);
> > +                     hva_end = min(end, memslot->userspace_addr +
> > +                                   (memslot->npages << PAGE_SHIFT));
> > +                     if (hva_start >= hva_end)
> > +                             continue;
> > +                     /*
> > +                      * {gfn(page) | page intersects with [hva_start, hva_end)} =
> > +                      * {gfn_start, gfn_start+1, ..., gfn_end-1}.
> > +                      */
> > +                     gfn_start = hva_to_gfn_memslot(hva_start, memslot);
> > +                     gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot);
> > +
> > +                     ret |= handler(kvm, memslot, root, gfn_start,
> > +                                    gfn_end, data);
>
> Eh, I'd say let this one poke out, the above hva_to_gfn_memslot() already
> overruns 80 chars.  IMO it's more readable without the wraps.

Will do.

>
> > +             }
> > +
> > +             put_tdp_mmu_root(kvm, root);
> > +     }
> > +
> > +     return ret;
> > +}
> > +
> > +static int zap_gfn_range_hva_wrapper(struct kvm *kvm,
> > +                                  struct kvm_memory_slot *slot,
> > +                                  struct kvm_mmu_page *root, gfn_t start,
> > +                                  gfn_t end, unsigned long unused)
> > +{
> > +     return zap_gfn_range(kvm, root, start, end, false);
> > +}
> > +
> > +int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
> > +                           unsigned long end)
> > +{
> > +     return kvm_tdp_mmu_handle_hva_range(kvm, start, end, 0,
> > +                                         zap_gfn_range_hva_wrapper);
> > +}
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> > index abf23dc0ab7ad..ce804a97bfa1d 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.h
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> > @@ -18,4 +18,7 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm);
> >  int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu, int write, int map_writable,
> >                          int level, gpa_t gpa, kvm_pfn_t pfn, bool prefault,
> >                          bool lpage_disallowed);
> > +
> > +int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
> > +                           unsigned long end);
> >  #endif /* __KVM_X86_MMU_TDP_MMU_H */
> > --
> > 2.28.0.709.gb0816b6eb0-goog
> >
