From: Jon Cargille <jcargill@google.com>
To: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Peter Feiner <pfeiner@google.com>
Subject: Re: [PATCH 2/2] KVM: x86/mmu: Optimize MMU page cache lookup for fully direct MMUs
Date: Tue, 23 Jun 2020 13:28:18 -0700	[thread overview]
Message-ID: <CANxmaygUwYDT38zde=hoMw+xE2PgVE+eG-dDYguneX=-i=ML+Q@mail.gmail.com> (raw)
In-Reply-To: <20200623194027.23135-3-sean.j.christopherson@intel.com>

LGTM.

Reviewed-by: Jon Cargille <jcargill@google.com>


On Tue, Jun 23, 2020 at 12:40 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Skip the unsync checks and the write flooding clearing for fully direct
> MMUs, which are guaranteed not to have unsync'd or indirect pages (write
> flooding detection only applies to indirect pages).  For TDP, this
> avoids unnecessary memory reads and writes, and skipping the write
> flooding count update additionally avoids dirtying a cache line
> (unsync_child_bitmap itself consumes a full cache line, i.e.
> write_flooding_count is guaranteed to be in a different cache line than
> parent_ptes).
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 67f8f82e9783..c568a5c55276 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2475,6 +2475,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>                                              int direct,
>                                              unsigned int access)
>  {
> +       bool direct_mmu = vcpu->arch.mmu->direct_map;
>         union kvm_mmu_page_role role;
>         struct hlist_head *sp_list;
>         unsigned quadrant;
> @@ -2490,8 +2491,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>         if (role.direct)
>                 role.gpte_is_8_bytes = true;
>         role.access = access;
> -       if (!vcpu->arch.mmu->direct_map
> -           && vcpu->arch.mmu->root_level <= PT32_ROOT_LEVEL) {
> +       if (!direct_mmu && vcpu->arch.mmu->root_level <= PT32_ROOT_LEVEL) {
>                 quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
>                 quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
>                 role.quadrant = quadrant;
> @@ -2510,6 +2510,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>                 if (sp->role.word != role.word)
>                         continue;
>
> +               if (direct_mmu)
> +                       goto trace_get_page;
> +
>                 if (sp->unsync) {
>                         /* The page is good, but __kvm_sync_page might still end
>                          * up zapping it.  If so, break in order to rebuild it.
> @@ -2525,6 +2528,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>                         kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
>
>                 __clear_sp_write_flooding_count(sp);
> +
> +trace_get_page:
>                 trace_kvm_mmu_get_page(sp, false);
>                 goto out;
>         }
> --
> 2.26.0
>
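
To make the commit message's cache-line argument concrete, here is a
minimal userspace sketch.  The struct name kvm_mmu_page_sketch is
hypothetical; the field names and their order follow the kernel's
struct kvm_mmu_page around this patch, but the types are simplified
stand-ins and all other fields are omitted.  Assuming 64-byte cache
lines, the 512-bit unsync_child_bitmap alone spans a full line, so
write_flooding_count can never share a line with parent_ptes:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Simplified stand-in for the tail of struct kvm_mmu_page; not the
 * kernel's actual definition.  parent_ptes is 8 bytes here (the real
 * field is a struct kvm_rmap_head holding one unsigned long), and the
 * bitmap is 512 bits = 64 bytes.
 */
struct kvm_mmu_page_sketch {
	uint64_t parent_ptes;
	unsigned long unsync_child_bitmap[512 / (8 * sizeof(unsigned long))];
	int write_flooding_count;	/* stand-in for atomic_t */
};

int main(void)
{
	/*
	 * parent_ptes (8 bytes) + unsync_child_bitmap (64 bytes) put
	 * write_flooding_count 72 bytes past parent_ptes.  Bytes that
	 * far apart can never land in the same 64-byte cache line,
	 * regardless of the struct's base alignment.
	 */
	size_t gap = offsetof(struct kvm_mmu_page_sketch, write_flooding_count) -
		     offsetof(struct kvm_mmu_page_sketch, parent_ptes);

	printf("bitmap size: %zu bytes\n",
	       sizeof(((struct kvm_mmu_page_sketch *)0)->unsync_child_bitmap));
	printf("offset gap: %zu bytes\n", gap);
	return 0;
}

So on the direct-MMU fast path, skipping __clear_sp_write_flooding_count()
avoids writing, and therefore dirtying, that second cache line on every
kvm_mmu_get_page() lookup hit.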

Thread overview: 6+ messages
2020-06-23 19:40 [PATCH 0/2] KVM: x86/mmu: Optimizations for kvm_get_mmu_page() Sean Christopherson
2020-06-23 19:40 ` [PATCH 1/2] KVM: x86/mmu: Avoid multiple hash lookups in kvm_get_mmu_page() Sean Christopherson
2020-06-23 20:27   ` Jon Cargille
2020-06-23 19:40 ` [PATCH 2/2] KVM: x86/mmu: Optimize MMU page cache lookup for fully direct MMUs Sean Christopherson
2020-06-23 20:28   ` Jon Cargille [this message]
2020-07-03 17:17 ` [PATCH 0/2] KVM: x86/mmu: Optimizations for kvm_get_mmu_page() Paolo Bonzini
