From: David Matlack <dmatlack@google.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	kvm list <kvm@vger.kernel.org>, Ben Gardon <bgardon@google.com>,
	Joerg Roedel <joro@8bytes.org>, Jim Mattson <jmattson@google.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Janis Schoetterl-Glausch <scgl@linux.vnet.ibm.com>,
	Junaid Shahid <junaids@google.com>,
	Oliver Upton <oupton@google.com>,
	Harish Barathvajasankar <hbarath@google.com>,
	Peter Xu <peterx@redhat.com>, Peter Shier <pshier@google.com>,
	"Nikunj A . Dadhania" <nikunj@amd.com>
Subject: Re: [PATCH v1 04/13] KVM: x86/mmu: Factor out logic to atomically install a new page table
Date: Fri, 7 Jan 2022 10:24:36 -0800	[thread overview]
Message-ID: <CALzav=coNhq-+Q1c3+H5xyFMYVLNgE=w=hgSWFeUQyNANOLxFQ@mail.gmail.com> (raw)
In-Reply-To: <CALzav=cW9jB49gdKa6xYVy-7Jh1PK8NLChwPMNwK_bK-55a=3w@mail.gmail.com>

On Thu, Jan 6, 2022 at 2:56 PM David Matlack <dmatlack@google.com> wrote:
>
> On Thu, Jan 6, 2022 at 12:12 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Mon, Dec 13, 2021, David Matlack wrote:
> > > Factor out the logic to atomically replace an SPTE with an SPTE that
> > > points to a new page table. This will be used in a follow-up commit to
> > > split a large page SPTE into one level lower.
> > >
> > > Opportunistically drop the kvm_mmu_get_page tracepoint in
> > > kvm_tdp_mmu_map() since it is redundant with the identical tracepoint in
> > > alloc_tdp_mmu_page().
> > >
> > > Signed-off-by: David Matlack <dmatlack@google.com>
> > > ---
> > >  arch/x86/kvm/mmu/tdp_mmu.c | 48 +++++++++++++++++++++++++++-----------
> > >  1 file changed, 34 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > > index 656ebf5b20dc..dbd07c10d11a 100644
> > > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > > @@ -950,6 +950,36 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
> > >       return ret;
> > >  }
> > >
> > > +/*
> > > + * tdp_mmu_install_sp_atomic - Atomically replace the given spte with an
> > > + * spte pointing to the provided page table.
> > > + *
> > > + * @kvm: kvm instance
> > > + * @iter: a tdp_iter instance currently on the SPTE that should be set
> > > + * @sp: The new TDP page table to install.
> > > + * @account_nx: True if this page table is being installed to split a
> > > + *              non-executable huge page.
> > > + *
> > > + * Returns: True if the new page table was installed. False if spte being
> > > + *          replaced changed, causing the atomic compare-exchange to fail.
> >
> > I'd prefer to return an int with 0/-EBUSY on success/fail.  Ditto for the existing
> > tdp_mmu_set_spte_atomic().  Actually, if you add a prep patch to make that happen,
> > then this can be:
> >
> >         u64 spte = make_nonleaf_spte(sp->spt, !shadow_accessed_mask);
> >         int ret;
> >
> >         ret = tdp_mmu_set_spte_atomic(kvm, iter, spte);
> >         if (ret)
> >                 return ret;
> >
> >         tdp_mmu_link_page(kvm, sp, account_nx);
> >         return 0;
>
> Will do.
>
> >
> >
> >
> > > + *          If this function returns false the sp will be freed before
> > > + *          returning.
> >
> > Uh, no it's not?  The call to tdp_mmu_free_sp() is still done by kvm_tdp_mmu_map().
>
> Correct. I missed cleaning up this comment after I pulled the
> tdp_mmu_free_sp() call up a level from where it was in the RFC.
>
> >
> > > + */
> > > +static bool tdp_mmu_install_sp_atomic(struct kvm *kvm,
> >
> > Hmm, so this helper is the only user of tdp_mmu_link_page(), and _that_ helper
> > is rather tiny.  And this would also be a good opportunity to clean up the
> > "(un)link_page" verbiage, as the bare "page" doesn't communicate to the reader
> > that it's for linking shadow pages, e.g. not struct page.
> >
> > So, what about folding in tdp_mmu_link_page(), naming this helper either
> > tdp_mmu_link_sp_atomic() or tdp_mmu_link_shadow_page_atomic(), and then renaming
> > tdp_mmu_unlink_page() accordingly?  And for bonus points, add a blurb in the
> > function comment like:
> >
> >         * Note the lack of a non-atomic variant!  The TDP MMU always builds its
> >         * page tables while holding mmu_lock for read.
>
> Sure, I'll include that cleanup as part of the next version of this series.

While I'm here, how do you feel about renaming alloc_tdp_mmu_page() to
tdp_mmu_alloc_sp()? First, for consistency in using "tdp_mmu" as the
prefix before the verb, and second, to clarify that we are allocating a
shadow page.
>
> >
> > > +                                   struct tdp_iter *iter,
> > > +                                   struct kvm_mmu_page *sp,
> > > +                                   bool account_nx)
> > > +{
> > > +     u64 spte = make_nonleaf_spte(sp->spt, !shadow_accessed_mask);
> > > +
> > > +     if (!tdp_mmu_set_spte_atomic(kvm, iter, spte))
> > > +             return false;
> > > +
> > > +     tdp_mmu_link_page(kvm, sp, account_nx);
> > > +
> > > +     return true;
> > > +}
> > > +
> > >  /*
> > >   * Handle a TDP page fault (NPT/EPT violation/misconfiguration) by installing
> > >   * page tables and SPTEs to translate the faulting guest physical address.
> > > @@ -959,8 +989,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > >       struct kvm_mmu *mmu = vcpu->arch.mmu;
> > >       struct tdp_iter iter;
> > >       struct kvm_mmu_page *sp;
> > > -     u64 *child_pt;
> > > -     u64 new_spte;
> > >       int ret;
> > >
> > >       kvm_mmu_hugepage_adjust(vcpu, fault);
> > > @@ -996,6 +1024,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > >               }
> > >
> > >               if (!is_shadow_present_pte(iter.old_spte)) {
> > > +                     bool account_nx = fault->huge_page_disallowed &&
> > > +                                       fault->req_level >= iter.level;
> > > +
> > >                       /*
> > >                        * If SPTE has been frozen by another thread, just
> > >                        * give up and retry, avoiding unnecessary page table
> > > @@ -1005,18 +1036,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > >                               break;
> > >
> > >                       sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level - 1);
> > > -                     child_pt = sp->spt;
> > > -
> > > -                     new_spte = make_nonleaf_spte(child_pt,
> > > -                                                  !shadow_accessed_mask);
> > > -
> > > -                     if (tdp_mmu_set_spte_atomic(vcpu->kvm, &iter, new_spte)) {
> > > -                             tdp_mmu_link_page(vcpu->kvm, sp,
> > > -                                               fault->huge_page_disallowed &&
> > > -                                               fault->req_level >= iter.level);
> > > -
> > > -                             trace_kvm_mmu_get_page(sp, true);
> > > -                     } else {
> > > +                     if (!tdp_mmu_install_sp_atomic(vcpu->kvm, &iter, sp, account_nx)) {
> > >                               tdp_mmu_free_sp(sp);
> > >                               break;
> > >                       }
> > > --
> > > 2.34.1.173.g76aa8bc2d0-goog
> > >

