From: David Matlack <dmatlack@google.com>
To: Sean Christopherson <seanjc@google.com>
Cc: kvm@vger.kernel.org, Ben Gardon <bgardon@google.com>,
	Joerg Roedel <joro@8bytes.org>, Jim Mattson <jmattson@google.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Junaid Shahid <junaids@google.com>,
	Andrew Jones <drjones@redhat.com>
Subject: Re: [PATCH 1/8] KVM: x86/mmu: Refactor is_tdp_mmu_root()
Date: Mon, 14 Jun 2021 21:23:15 +0000	[thread overview]
Message-ID: <YMfIw3Evf6JK78JR@google.com> (raw)
In-Reply-To: <YMepDK40DLkD4DSy@google.com>

On Mon, Jun 14, 2021 at 07:07:56PM +0000, Sean Christopherson wrote:
> On Fri, Jun 11, 2021, David Matlack wrote:
> > Refactor is_tdp_mmu_root() into is_vcpu_using_tdp_mmu() to reduce
> > duplicated code at call sites and make the code more readable.
> > 
> > Signed-off-by: David Matlack <dmatlack@google.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c     | 10 +++++-----
> >  arch/x86/kvm/mmu/tdp_mmu.c |  2 +-
> >  arch/x86/kvm/mmu/tdp_mmu.h |  8 +++++---
> >  3 files changed, 11 insertions(+), 9 deletions(-)
> > 
> 
> ...

I'm not sure how to interpret this.

> 
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> > index 5fdf63090451..c8cf12809fcf 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.h
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> > @@ -91,16 +91,18 @@ static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; }
> >  static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
> >  #endif
> >  
> > -static inline bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
> > +static inline bool is_vcpu_using_tdp_mmu(struct kvm_vcpu *vcpu)
> 
> I normally like code reduction, but I think in this case we should stay with
> something closer to the existing code.
> 
> If interpreted literally as "is the vCPU using the TDP MMU", the helper is flat
> out broken if arch.mmu points at guest_mmu, in which case the function will
> evaluate "false" even if the _vCPU_ is using the TDP MMU for the non-nested MMU.

Good point. is_vcpu_using_tdp_mmu() is ambiguous: it's not clear whether it
checks that the vCPU is using the TDP MMU "right now" or "at all".

> 
> We could dance around that by doing something like:
> 
> 	if (is_current_mmu_tdp_mmu(vcpu))
> 		....
> 
> but IMO that's a net negative versus:
> 
> 	if (is_tdp_mmu(vcpu, vcpu->arch.mmu))
> 		...
> 
> because it's specifically the MMU that is of interest.

I agree this is more clear.

> 
> >  {
> > +	struct kvm *kvm = vcpu->kvm;
> >  	struct kvm_mmu_page *sp;
> > +	hpa_t root_hpa = vcpu->arch.mmu->root_hpa;
> >  
> >  	if (!is_tdp_mmu_enabled(kvm))
> 
> If we want to eliminate the second param from the prototype, I think we'd be
> better off evaluating whether or not this check buys us anything.  Or even better,
> eliminate redundant calls to minimize the benefits so that the "fast" check is
> unnecessary.
> 
> The extra check in kvm_tdp_mmu_map() can simply be dropped; its sole caller
> literally wraps it with an is_tdp_mmu... check.
> 
> >  		return false;
> > -	if (WARN_ON(!VALID_PAGE(hpa)))
> > +	if (WARN_ON(!VALID_PAGE(root_hpa)))
> 
> And hoist this check out of is_tdp_mmu().  I think it's entirely reasonable to
> expect the caller to first test the validity of the root before caring about
> whether it's a TDP MMU or legacy MMU.
> 
> The VALID_PAGE() checks here and in __direct_map() and FNAME(page_fault) are
> also mostly worthless, but they're at least a marginally not-awful sanity
> check and not fully redundant.
> 
> E.g. achieve this over 2-4 patches:

Thanks for the suggestions, I'll take a look at cleaning that up. I'm thinking
of making that a separate patch series (and dropping this patch from this
series), since the interaction between the two is entirely superficial. Let me
know if that makes sense.

> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index f96fc3cd5c18..527305c8cede 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2899,9 +2899,6 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>         gfn_t gfn = gpa >> PAGE_SHIFT;
>         gfn_t base_gfn = gfn;
> 
> -       if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
> -               return RET_PF_RETRY;
> -
>         level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn,
>                                         huge_page_disallowed, &req_level);
> 
> @@ -3635,7 +3632,7 @@ static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
>                 return reserved;
>         }
> 
> -       if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> +       if (is_tdp_mmu(vcpu->arch.mmu))
>                 leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes, &root);
>         else
>                 leaf = get_walk(vcpu, addr, sptes, &root);
> @@ -3801,6 +3798,7 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
>  static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>                              bool prefault, int max_level, bool is_tdp)
>  {
> +       bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
>         bool write = error_code & PFERR_WRITE_MASK;
>         bool map_writable;
> 
> @@ -3810,10 +3808,13 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>         hva_t hva;
>         int r;
> 
> +       if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
> +               return RET_PF_RETRY;
> +
>         if (page_fault_handle_page_track(vcpu, error_code, gfn))
>                 return RET_PF_EMULATE;
> 
> -       if (!is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa)) {
> +       if (!is_tdp_mmu_fault) {
>                 r = fast_page_fault(vcpu, gpa, error_code);
>                 if (r != RET_PF_INVALID)
>                         return r;
> @@ -3835,7 +3836,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
> 
>         r = RET_PF_RETRY;
> 
> -       if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> +       if (is_tdp_mmu_fault)
>                 read_lock(&vcpu->kvm->mmu_lock);
>         else
>                 write_lock(&vcpu->kvm->mmu_lock);
> @@ -3846,7 +3847,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>         if (r)
>                 goto out_unlock;
> 
> -       if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> +       if (is_tdp_mmu_fault)
>                 r = kvm_tdp_mmu_map(vcpu, gpa, error_code, map_writable, max_level,
>                                     pfn, prefault);
>         else
> @@ -3854,7 +3855,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>                                  prefault, is_tdp);
> 
>  out_unlock:
> -       if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
> +       if (is_tdp_mmu_fault)
>                 read_unlock(&vcpu->kvm->mmu_lock);
>         else
>                 write_unlock(&vcpu->kvm->mmu_lock);
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index cc13e001f3de..7ce90bc50774 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -979,11 +979,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>         int level;
>         int req_level;
> 
> -       if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)))
> -               return RET_PF_RETRY;
> -       if (WARN_ON(!is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa)))
> -               return RET_PF_RETRY;
> -
>         level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn,
>                                         huge_page_disallowed, &req_level);
> 
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> index f7a7990da11d..6ebe2331a641 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.h
> +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> @@ -92,16 +92,11 @@ static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; }
>  static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
>  #endif
> 
> -static inline bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
> +static inline bool is_tdp_mmu(struct kvm_mmu *mmu)
>  {
>         struct kvm_mmu_page *sp;
> 
> -       if (!is_tdp_mmu_enabled(kvm))
> -               return false;
> -       if (WARN_ON(!VALID_PAGE(hpa)))
> -               return false;
> -
> -       sp = to_shadow_page(hpa);
> +       sp = to_shadow_page(mmu->root_hpa);
>         if (WARN_ON(!sp))
>                 return false;
> 


Thread overview: 32+ messages
2021-06-11 23:56 [PATCH 0/8] KVM: x86/mmu: Fast page fault support for the TDP MMU David Matlack
2021-06-11 23:56 ` [PATCH 1/8] KVM: x86/mmu: Refactor is_tdp_mmu_root() David Matlack
2021-06-14 17:56   ` Ben Gardon
2021-06-14 19:07   ` Sean Christopherson
2021-06-14 21:23     ` David Matlack [this message]
2021-06-14 21:39       ` Sean Christopherson
2021-06-14 22:01         ` David Matlack
2021-06-11 23:56 ` [PATCH 2/8] KVM: x86/mmu: Rename cr2_or_gpa to gpa in fast_page_fault David Matlack
2021-06-14 17:56   ` Ben Gardon
2021-06-11 23:56 ` [PATCH 3/8] KVM: x86/mmu: Fix use of enums in trace_fast_page_fault David Matlack
2021-06-11 23:56 ` [PATCH 4/8] KVM: x86/mmu: Common API for lockless shadow page walks David Matlack
2021-06-14 17:56   ` Ben Gardon
2021-06-11 23:56 ` [PATCH 5/8] KVM: x86/mmu: Also record spteps in shadow_page_walk David Matlack
2021-06-14 17:56   ` Ben Gardon
2021-06-14 22:27   ` David Matlack
2021-06-14 22:59   ` Sean Christopherson
2021-06-14 23:39     ` David Matlack
2021-06-15  0:22       ` Sean Christopherson
2021-06-11 23:56 ` [PATCH 6/8] KVM: x86/mmu: fast_page_fault support for the TDP MMU David Matlack
2021-06-11 23:59   ` David Matlack
2021-06-14 17:56     ` Ben Gardon
2021-06-14 22:34       ` David Matlack
2021-06-11 23:57 ` [PATCH 7/8] KVM: selftests: Fix missing break in dirty_log_perf_test arg parsing David Matlack
2021-06-14 17:56   ` Ben Gardon
2021-06-11 23:57 ` [PATCH 8/8] KVM: selftests: Introduce access_tracking_perf_test David Matlack
2021-06-14 17:56   ` Ben Gardon
2021-06-14 21:47     ` David Matlack
2021-06-14  9:54 ` [PATCH 0/8] KVM: x86/mmu: Fast page fault support for the TDP MMU Paolo Bonzini
2021-06-14 21:08   ` David Matlack
2021-06-15  7:16     ` Paolo Bonzini
2021-06-16 19:27       ` David Matlack
2021-06-16 19:31         ` Paolo Bonzini
