From: David Matlack <dmatlack@google.com>
To: Ben Gardon <bgardon@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Marc Zyngier <maz@kernel.org>,
	Huacai Chen <chenhuacai@kernel.org>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Sean Christopherson <seanjc@google.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Peter Xu <peterx@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Peter Feiner <pfeiner@google.com>,
	Andrew Jones <drjones@redhat.com>,
	"Maciej S . Szmigiero" <maciej.szmigiero@oracle.com>,
	kvm <kvm@vger.kernel.org>
Subject: Re: [PATCH 17/23] KVM: x86/mmu: Pass bool flush parameter to drop_large_spte()
Date: Thu, 3 Mar 2022 11:52:46 -0800
Message-ID: <CALzav=fuLgXJ3Krr8JYXA0Bd1KdPeh+thJnLyvMMZtqsNeSu3w@mail.gmail.com>
In-Reply-To: <CANgfPd90UA2_RRRWzwE6D_FtKiExSkbqktKiPpcYV0MmJxagWQ@mail.gmail.com>

On Mon, Feb 28, 2022 at 12:47 PM Ben Gardon <bgardon@google.com> wrote:
>
> On Wed, Feb 2, 2022 at 5:02 PM David Matlack <dmatlack@google.com> wrote:
> >
> > drop_large_spte() drops a large SPTE if it exists and then flushes TLBs.
> > Its helper function, __drop_large_spte(), does the drop without the
> > flush. This difference is not obvious from the name.
> >
> > To make the code more readable, pass an explicit flush parameter. Also
> > replace the vCPU pointer with a KVM pointer so we can get rid of the
> > double-underscore helper function.
> >
> > This is also in preparation for a future commit that will conditionally
> > flush after dropping a large SPTE.
> >
> > No functional change intended.
> >
> > Signed-off-by: David Matlack <dmatlack@google.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c         | 25 +++++++++++--------------
> >  arch/x86/kvm/mmu/paging_tmpl.h |  4 ++--
> >  2 files changed, 13 insertions(+), 16 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 99ad7cc8683f..2d47a54e62a5 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -1162,23 +1162,20 @@ static void drop_spte(struct kvm *kvm, u64 *sptep)
> >  }
> >
> >
> > -static bool __drop_large_spte(struct kvm *kvm, u64 *sptep)
> > +static void drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush)
>
> Since there are no callers of __drop_large_spte, I'd be inclined to
> hold off on adding the flush parameter in this commit and just add it
> when it's needed,

The same argument about waiting until there's a user could be made
about "KVM: x86/mmu: Pass access information to
make_huge_page_split_spte()". I agree with that advice when the future
user is entirely theoretical or lives in some future series. But when
the future user is literally the next commit in the series, I think
it's fine to do things this way, since it distributes the net diff more
evenly across patches, which makes reviewing easier.

But you've got me thinking, and I want to change this commit slightly:
I'll keep __drop_large_spte(), push the entire implementation into it,
and add a bool flush parameter there. That way we don't have to change
all the call sites of drop_large_spte() in this commit; its
implementation will just be __drop_large_spte(..., true), and the next
commit can call __drop_large_spte(..., false) with a comment.
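
Roughly something like the following (just a sketch of the shape I have
in mind, not the final diff; I'm assuming the drop_large_spte() wrapper
keeps its current vcpu-based signature so the existing call sites stay
untouched):

static void __drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush)
{
	struct kvm_mmu_page *sp;

	if (!is_large_pte(*sptep))
		return;

	sp = sptep_to_sp(sptep);
	WARN_ON(sp->role.level == PG_LEVEL_4K);

	drop_spte(kvm, sptep);

	if (flush)
		kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
				KVM_PAGES_PER_HPAGE(sp->role.level));
}

static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
{
	/* All existing callers want the TLB flush. */
	__drop_large_spte(vcpu->kvm, sptep, true);
}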

> or better yet after you add the new user with the
> conditional flush so that there's a commit explaining why it's safe to
> not always flush in that case.
>
> >  {
> > -       if (is_large_pte(*sptep)) {
> > -               WARN_ON(sptep_to_sp(sptep)->role.level == PG_LEVEL_4K);
> > -               drop_spte(kvm, sptep);
> > -               return true;
> > -       }
> > +       struct kvm_mmu_page *sp;
> >
> > -       return false;
> > -}
> > +       if (!is_large_pte(*sptep))
> > +               return;
> >
> > -static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
> > -{
> > -       if (__drop_large_spte(vcpu->kvm, sptep)) {
> > -               struct kvm_mmu_page *sp = sptep_to_sp(sptep);
> > +       sp = sptep_to_sp(sptep);
> > +       WARN_ON(sp->role.level == PG_LEVEL_4K);
> > +
> > +       drop_spte(kvm, sptep);
> >
> > -               kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
> > +       if (flush) {
> > +               kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
> >                         KVM_PAGES_PER_HPAGE(sp->role.level));
> >         }
> >  }
> > @@ -3051,7 +3048,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >                 if (it.level == fault->goal_level)
> >                         break;
> >
> > -               drop_large_spte(vcpu, it.sptep);
> > +               drop_large_spte(vcpu->kvm, it.sptep, true);
> >                 if (is_shadow_present_pte(*it.sptep))
> >                         continue;
> >
> > diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> > index 703dfb062bf0..ba61de29f2e5 100644
> > --- a/arch/x86/kvm/mmu/paging_tmpl.h
> > +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> > @@ -677,7 +677,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> >                 gfn_t table_gfn;
> >
> >                 clear_sp_write_flooding_count(it.sptep);
> > -               drop_large_spte(vcpu, it.sptep);
> > +               drop_large_spte(vcpu->kvm, it.sptep, true);
> >
> >                 sp = NULL;
> >                 if (!is_shadow_present_pte(*it.sptep)) {
> > @@ -739,7 +739,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
> >
> >                 validate_direct_spte(vcpu, it.sptep, direct_access);
> >
> > -               drop_large_spte(vcpu, it.sptep);
> > +               drop_large_spte(vcpu->kvm, it.sptep, true);
> >
> >                 if (!is_shadow_present_pte(*it.sptep)) {
> >                         sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn,
> > --
> > 2.35.0.rc2.247.g8bbb082509-goog
> >

Thread overview: 65+ messages
2022-02-03  1:00 [PATCH 00/23] Extend Eager Page Splitting to the shadow MMU David Matlack
2022-02-03  1:00 ` [PATCH 01/23] KVM: x86/mmu: Optimize MMU page cache lookup for all direct SPs David Matlack
2022-02-19  0:57   ` Sean Christopherson
2022-02-03  1:00 ` [PATCH 02/23] KVM: x86/mmu: Derive shadow MMU page role from parent David Matlack
2022-02-19  1:14   ` Sean Christopherson
2022-02-24 18:45     ` David Matlack
2022-03-04  0:22     ` David Matlack
2022-02-03  1:00 ` [PATCH 03/23] KVM: x86/mmu: Decompose kvm_mmu_get_page() into separate functions David Matlack
2022-02-19  1:25   ` Sean Christopherson
2022-02-24 18:54     ` David Matlack
2022-02-03  1:00 ` [PATCH 04/23] KVM: x86/mmu: Rename shadow MMU functions that deal with shadow pages David Matlack
2022-02-03  1:00 ` [PATCH 05/23] KVM: x86/mmu: Pass memslot to kvm_mmu_create_sp() David Matlack
2022-02-03  1:00 ` [PATCH 06/23] KVM: x86/mmu: Separate shadow MMU sp allocation from initialization David Matlack
2022-02-16 19:37   ` Ben Gardon
2022-02-16 21:42     ` David Matlack
2022-02-03  1:00 ` [PATCH 07/23] KVM: x86/mmu: Move huge page split sp allocation code to mmu.c David Matlack
2022-02-03  1:00 ` [PATCH 08/23] KVM: x86/mmu: Use common code to free kvm_mmu_page structs David Matlack
2022-02-03  1:00 ` [PATCH 09/23] KVM: x86/mmu: Use common code to allocate kvm_mmu_page structs from vCPU caches David Matlack
2022-02-03  1:00 ` [PATCH 10/23] KVM: x86/mmu: Pass const memslot to rmap_add() David Matlack
2022-02-23 23:25   ` Ben Gardon
2022-02-03  1:00 ` [PATCH 11/23] KVM: x86/mmu: Pass const memslot to kvm_mmu_init_sp() and descendants David Matlack
2022-02-23 23:27   ` Ben Gardon
2022-02-03  1:00 ` [PATCH 12/23] KVM: x86/mmu: Decouple rmap_add() and link_shadow_page() from kvm_vcpu David Matlack
2022-02-23 23:30   ` Ben Gardon
2022-02-03  1:00 ` [PATCH 13/23] KVM: x86/mmu: Update page stats in __rmap_add() David Matlack
2022-02-23 23:32   ` Ben Gardon
2022-02-23 23:35     ` Ben Gardon
2022-02-03  1:00 ` [PATCH 14/23] KVM: x86/mmu: Cache the access bits of shadowed translations David Matlack
2022-02-28 20:30   ` Ben Gardon
2022-02-03  1:00 ` [PATCH 15/23] KVM: x86/mmu: Pass access information to make_huge_page_split_spte() David Matlack
2022-02-28 20:32   ` Ben Gardon
2022-02-03  1:00 ` [PATCH 16/23] KVM: x86/mmu: Zap collapsible SPTEs at all levels in the shadow MMU David Matlack
2022-02-28 20:39   ` Ben Gardon
2022-03-03 19:42     ` David Matlack
2022-02-03  1:00 ` [PATCH 17/23] KVM: x86/mmu: Pass bool flush parameter to drop_large_spte() David Matlack
2022-02-28 20:47   ` Ben Gardon
2022-03-03 19:52     ` David Matlack [this message]
2022-02-03  1:00 ` [PATCH 18/23] KVM: x86/mmu: Extend Eager Page Splitting to the shadow MMU David Matlack
2022-02-28 21:09   ` Ben Gardon
2022-02-28 23:29     ` David Matlack
2022-02-03  1:00 ` [PATCH 19/23] KVM: Allow for different capacities in kvm_mmu_memory_cache structs David Matlack
2022-02-24 11:28   ` Marc Zyngier
2022-02-24 19:20     ` David Matlack
2022-03-04 21:59       ` David Matlack
2022-03-04 22:24         ` David Matlack
2022-03-05 16:55         ` Marc Zyngier
2022-03-07 23:49           ` David Matlack
2022-03-08  7:42             ` Marc Zyngier
2022-03-09 21:49             ` David Matlack
2022-03-10  8:30               ` Marc Zyngier
2022-02-03  1:00 ` [PATCH 20/23] KVM: Allow GFP flags to be passed when topping up MMU caches David Matlack
2022-02-28 21:12   ` Ben Gardon
2022-02-03  1:00 ` [PATCH 21/23] KVM: x86/mmu: Fully split huge pages that require extra pte_list_desc structs David Matlack
2022-02-28 21:22   ` Ben Gardon
2022-02-28 23:41     ` David Matlack
2022-03-01  0:37       ` Ben Gardon
2022-03-03 19:59         ` David Matlack
2022-02-03  1:00 ` [PATCH 22/23] KVM: x86/mmu: Split huge pages aliased by multiple SPTEs David Matlack
2022-02-03  1:00 ` [PATCH 23/23] KVM: selftests: Map x86_64 guest virtual memory with huge pages David Matlack
2022-03-07  5:21 ` [PATCH 00/23] Extend Eager Page Splitting to the shadow MMU Peter Xu
2022-03-07 23:39   ` David Matlack
2022-03-09  7:31     ` Peter Xu
2022-03-09 23:39       ` David Matlack
2022-03-10  7:03         ` Peter Xu
2022-03-10 19:26           ` David Matlack
