From: Jon Cargille <jcargill@google.com>
To: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Peter Feiner <pfeiner@google.com>
Subject: Re: [PATCH 1/2] KVM: x86/mmu: Avoid multiple hash lookups in kvm_get_mmu_page()
Date: Tue, 23 Jun 2020 13:27:58 -0700
Message-ID: <CANxmayj_08OsLst_oSczhYphQ3t4m+inf5-4k0_qfKUbzWU3fQ@mail.gmail.com>
In-Reply-To: <20200623194027.23135-2-sean.j.christopherson@intel.com>

LGTM.

Reviewed-by: Jon Cargille <jcargill@google.com>


On Tue, Jun 23, 2020 at 12:40 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Refactor for_each_valid_sp() to take the list of shadow pages instead of
> retrieving it from a gfn to avoid doing the gfn->list hash and lookup
> multiple times during kvm_get_mmu_page().
>
> Cc: Peter Feiner <pfeiner@google.com>
> Cc: Jon Cargille <jcargill@google.com>
> Cc: Jim Mattson <jmattson@google.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 17 +++++++++--------
>  1 file changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 3dd0af7e7515..67f8f82e9783 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2258,15 +2258,14 @@ static bool kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
>  static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>                                     struct list_head *invalid_list);
>
> -
> -#define for_each_valid_sp(_kvm, _sp, _gfn)                             \
> -       hlist_for_each_entry(_sp,                                       \
> -         &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \
> +#define for_each_valid_sp(_kvm, _sp, _list)                            \
> +       hlist_for_each_entry(_sp, _list, hash_link)                     \
>                 if (is_obsolete_sp((_kvm), (_sp))) {                    \
>                 } else
>
>  #define for_each_gfn_indirect_valid_sp(_kvm, _sp, _gfn)                        \
> -       for_each_valid_sp(_kvm, _sp, _gfn)                              \
> +       for_each_valid_sp(_kvm, _sp,                                    \
> +         &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])     \
>                 if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
>
>  static inline bool is_ept_sp(struct kvm_mmu_page *sp)
> @@ -2477,6 +2476,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>                                              unsigned int access)
>  {
>         union kvm_mmu_page_role role;
> +       struct hlist_head *sp_list;
>         unsigned quadrant;
>         struct kvm_mmu_page *sp;
>         bool need_sync = false;
> @@ -2496,7 +2496,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>                 quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
>                 role.quadrant = quadrant;
>         }
> -       for_each_valid_sp(vcpu->kvm, sp, gfn) {
> +
> +       sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
> +       for_each_valid_sp(vcpu->kvm, sp, sp_list) {
>                 if (sp->gfn != gfn) {
>                         collisions++;
>                         continue;
> @@ -2533,8 +2535,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>
>         sp->gfn = gfn;
>         sp->role = role;
> -       hlist_add_head(&sp->hash_link,
> -               &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)]);
> +       hlist_add_head(&sp->hash_link, sp_list);
>         if (!direct) {
>                 /*
>                  * we should do write protection before syncing pages
> --
> 2.26.0
>
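
For anyone skimming the diff: the change hoists the
&kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)] computation out of
the iteration macro, so kvm_mmu_get_page() hashes the gfn once and
reuses the resulting bucket for both the lookup loop and the
hlist_add_head() insertion; previously the gfn was hashed once for the
lookup and again for the insertion. (The empty "{ } else" body inside
for_each_valid_sp() is the usual idiom for filtering entries in a
for-each macro without unbalancing if/else at the call site.)

A minimal, self-contained sketch of the same compute-the-bucket-once
pattern, in plain userspace C with hypothetical names (get_node(),
hashfn(), NBUCKETS are illustrations, not KVM code):

#include <stdio.h>
#include <stdlib.h>

#define NBUCKETS 16u

struct node {
	unsigned long key;
	struct node *next;		/* bucket chain, newest first */
};

static struct node *buckets[NBUCKETS];

static unsigned int hashfn(unsigned long key)
{
	return key % NBUCKETS;
}

/*
 * Get-or-create: hash the key exactly once and reuse the bucket for
 * both the search and the insertion, mirroring the sp_list change in
 * kvm_mmu_get_page().
 */
static struct node *get_node(unsigned long key)
{
	struct node **bucket = &buckets[hashfn(key)];	/* hashed once */
	struct node *n;

	for (n = *bucket; n; n = n->next)
		if (n->key == key)
			return n;			/* existing entry */

	n = calloc(1, sizeof(*n));
	if (!n)
		return NULL;
	n->key = key;
	n->next = *bucket;				/* reuse the same bucket */
	*bucket = n;
	return n;
}

int main(void)
{
	struct node *a = get_node(42);
	struct node *b = get_node(42);

	printf("same node: %s\n", a && a == b ? "yes" : "no");
	return 0;
}

The per-call saving is modest, but it removes a redundant hash and
pointer chase from the path this series is explicitly optimizing.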

Thread overview: 6 messages
2020-06-23 19:40 [PATCH 0/2] KVM: x86/mmu: Optimizations for kvm_get_mmu_page() Sean Christopherson
2020-06-23 19:40 ` [PATCH 1/2] KVM: x86/mmu: Avoid multiple hash lookups in kvm_get_mmu_page() Sean Christopherson
2020-06-23 20:27   ` Jon Cargille [this message]
2020-06-23 19:40 ` [PATCH 2/2] KVM: x86/mmu: Optimize MMU page cache lookup for fully direct MMUs Sean Christopherson
2020-06-23 20:28   ` Jon Cargille
2020-07-03 17:17 ` [PATCH 0/2] KVM: x86/mmu: Optimizations for kvm_get_mmu_page() Paolo Bonzini
