From: Ben Gardon <bgardon@google.com>
To: Sean Christopherson <seanjc@google.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Paolo Bonzini <pbonzini@redhat.com>, Peter Xu <peterx@redhat.com>,
	Peter Shier <pshier@google.com>,
	David Matlack <dmatlack@google.com>,
	Mingwei Zhang <mizhang@google.com>,
	Yulei Zhang <yulei.kernel@gmail.com>,
	Wanpeng Li <kernellwp@gmail.com>,
	Xiao Guangrong <xiaoguangrong.eric@gmail.com>,
	Kai Huang <kai.huang@intel.com>,
	Keqian Zhu <zhukeqian1@huawei.com>,
	David Hildenbrand <david@redhat.com>
Subject: Re: [RFC 07/19] KVM: x86/mmu: Factor wrprot for nested PML out of make_spte
Date: Thu, 18 Nov 2021 09:43:46 -0800
Message-ID: <CANgfPd-f+VXQJnz-LPuiy+rTDkSdw3zjUfozaqzgb8n0rv9STA@mail.gmail.com>
In-Reply-To: <YZW2i7GnORD+X5NT@google.com>

On Wed, Nov 17, 2021 at 6:12 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Wed, Nov 10, 2021, Ben Gardon wrote:
> > When running a nested VM, KVM write protects SPTEs in the EPT/NPT02
> > instead of using PML for dirty tracking. This avoids expensive
> > translation later, when emptying the Page Modification Log. In service
> > of removing the vCPU pointer from make_spte, factor the check for nested
> > PML out of the function.
>
> Aha!  The dependency on @vcpu can be avoided without having to take a flag from
> the caller.  The shadow page has everything we need.  The check is really "is this
> a page for L2 EPT".  The kvm_x86_ops.cpu_dirty_log_size gets us the EPT part, and
> kvm_mmu_page.guest_mode gets us the L2 part.

Haha, that's way cleaner than what I was doing! It seems like an obvious
solution in retrospect. I'll include this in the next version of the
series I send out, unless Paolo beats me to it and just merges it
directly. Happy to give this my reviewed-by.
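
Just to spell the transformation out for anyone reading the archive
later: the check effectively goes from the vCPU-based form to the
role-based one. This just restates the hunk in your diff below, so it's
equally compile-tested only:

	/* Old: needs a vCPU to recognize a page built for L2. */
	return vcpu->arch.mmu == &vcpu->arch.guest_mmu &&
	       kvm_x86_ops.cpu_dirty_log_size;

	/* New: the shadow page role already records that it maps L2. */
	return kvm_x86_ops.cpu_dirty_log_size && sp->role.guest_mode;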

>
> Compile tested only...
>
> From 773414e4fd7010c38ac89221d16089f3dcc57467 Mon Sep 17 00:00:00 2001
> From: Sean Christopherson <seanjc@google.com>
> Date: Wed, 17 Nov 2021 18:08:42 -0800
> Subject: [PATCH] KVM: x86/mmu: Use shadow page role to detect PML-unfriendly
>  pages for L2
>
> Rework make_spte() to query the shadow page's role, specifically whether
> or not it's a guest_mode page, a.k.a. a page for L2, when determining if
> the SPTE is compatible with PML.  This eliminates a dependency on @vcpu,
> with a future goal of being able to create SPTEs without a specific vCPU.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/kvm/mmu/mmu_internal.h | 7 +++----
>  arch/x86/kvm/mmu/spte.c         | 2 +-
>  2 files changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 8ede43a826af..03882b2624c8 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -109,7 +109,7 @@ static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
>         return kvm_mmu_role_as_id(sp->role);
>  }
>
> -static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
> +static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp)
>  {
>         /*
>          * When using the EPT page-modification log, the GPAs in the CPU dirty
> @@ -117,10 +117,9 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
>          * on write protection to record dirty pages, which bypasses PML, since
>          * writes now result in a vmexit.  Note, the check on CPU dirty logging
>          * being enabled is mandatory as the bits used to denote WP-only SPTEs
> -        * are reserved for NPT w/ PAE (32-bit KVM).
> +        * are reserved for PAE paging (32-bit KVM).
>          */
> -       return vcpu->arch.mmu == &vcpu->arch.guest_mmu &&
> -              kvm_x86_ops.cpu_dirty_log_size;
> +       return kvm_x86_ops.cpu_dirty_log_size && sp->role.guest_mode;
>  }
>
>  int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 0c76c45fdb68..84e64dbdd89e 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -101,7 +101,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
>
>         if (sp->role.ad_disabled)
>                 spte |= SPTE_TDP_AD_DISABLED_MASK;
> -       else if (kvm_vcpu_ad_need_write_protect(vcpu))
> +       else if (kvm_mmu_page_ad_need_write_protect(sp))
>                 spte |= SPTE_TDP_AD_WRPROT_ONLY_MASK;
>
>         /*
> --
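
One more note for the archive on why I want the vCPU out of this path:
with the check keyed off the shadow page role, a caller that only has
the kvm_mmu_page (e.g. the in-place promotion at the end of the series)
can pick the AD/WP bits on its own. Very roughly something like the
sketch below; the helper name is made up and this isn't part of the
patch above, it just reuses the same logic make_spte() ends up with:

	/*
	 * Hypothetical sketch, not from this patch or the series as posted:
	 * compute the TDP AD/WP bits for an SPTE using only the shadow page,
	 * mirroring the make_spte() logic after the change above.
	 */
	static u64 sp_ad_bits(struct kvm_mmu_page *sp)
	{
		u64 spte = 0;

		if (sp->role.ad_disabled)
			spte |= SPTE_TDP_AD_DISABLED_MASK;
		else if (kvm_mmu_page_ad_need_write_protect(sp))
			spte |= SPTE_TDP_AD_WRPROT_ONLY_MASK;

		return spte;
	}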


Thread overview: 42+ messages
2021-11-10 22:29 [RFC 00/19] KVM: x86/mmu: Optimize disabling dirty logging Ben Gardon
2021-11-10 22:29 ` [RFC 01/19] KVM: x86/mmu: Fix TLB flush range when handling disconnected pt Ben Gardon
2021-11-11 17:44   ` David Matlack
2021-11-10 22:29 ` [RFC 02/19] KVM: x86/mmu: Batch TLB flushes for a single zap Ben Gardon
2021-11-11 18:06   ` David Matlack
2021-11-12 23:53   ` Sean Christopherson
2021-11-10 22:29 ` [RFC 03/19] KVM: x86/mmu: Factor flush and free up when zapping under MMU write lock Ben Gardon
2021-11-11 18:31   ` David Matlack
2021-11-10 22:29 ` [RFC 04/19] KVM: x86/mmu: Yield while processing disconnected_sps Ben Gardon
2021-11-11 18:50   ` David Matlack
2021-11-10 22:29 ` [RFC 05/19] KVM: x86/mmu: Remove redundant flushes when disabling dirty logging Ben Gardon
2021-11-11 18:55   ` David Matlack
2021-11-10 22:29 ` [RFC 06/19] KVM: x86/mmu: Introduce vcpu_make_spte Ben Gardon
2021-11-10 22:29 ` [RFC 07/19] KVM: x86/mmu: Factor wrprot for nested PML out of make_spte Ben Gardon
2021-11-18  2:12   ` Sean Christopherson
2021-11-18 17:43     ` Ben Gardon [this message]
2021-11-18 18:04       ` Paolo Bonzini
2021-11-10 22:29 ` [RFC 08/19] KVM: x86/mmu: Factor mt_mask " Ben Gardon
2021-11-10 22:30 ` [RFC 09/19] KVM: x86/mmu: Remove need for a vcpu from kvm_slot_page_track_is_active Ben Gardon
2021-11-10 22:30 ` [RFC 10/19] KVM: x86/mmu: Remove need for a vcpu from mmu_try_to_unsync_pages Ben Gardon
2021-11-10 22:30 ` [RFC 11/19] KVM: x86/mmu: Factor shadow_zero_check out of make_spte Ben Gardon
2021-11-10 22:44   ` Paolo Bonzini
2021-11-10 23:49     ` Ben Gardon
2021-11-11  1:18       ` Sean Christopherson
2021-11-11  1:44         ` Sean Christopherson
2021-11-11  7:06         ` Paolo Bonzini
2021-11-18  2:05   ` Sean Christopherson
2021-11-18  3:29     ` Sean Christopherson
2021-11-18 16:37       ` Sean Christopherson
2021-11-18 17:19         ` Paolo Bonzini
2021-11-18 18:02           ` Sean Christopherson
2021-11-18 18:07             ` Paolo Bonzini
2021-11-18 18:14               ` Sean Christopherson
2021-11-10 22:30 ` [RFC 12/19] KVM: x86/mmu: Replace vcpu argument with kvm pointer in make_spte Ben Gardon
2021-11-10 22:30 ` [RFC 13/19] KVM: x86/mmu: Factor out the meat of reset_tdp_shadow_zero_bits_mask Ben Gardon
2021-11-10 22:30 ` [RFC 14/19] KVM: x86/mmu: Propagate memslot const qualifier Ben Gardon
2021-11-10 22:30 ` [RFC 15/19] KVM: x86/MMU: Refactor vmx_get_mt_mask Ben Gardon
2021-11-10 22:30 ` [RFC 16/19] KVM: x86/mmu: Factor out part of vmx_get_mt_mask which does not depend on vcpu Ben Gardon
2021-11-10 22:30 ` [RFC 17/19] KVM: x86/mmu: Add try_get_mt_mask to x86_ops Ben Gardon
2021-11-10 22:30 ` [RFC 18/19] KVM: x86/mmu: Make kvm_is_mmio_pfn usable outside of spte.c Ben Gardon
2021-11-10 22:30 ` [RFC 19/19] KVM: x86/mmu: Promote pages in-place when disabling dirty logging Ben Gardon
2021-11-15 21:24 ` [RFC 00/19] KVM: x86/mmu: Optimize " Ben Gardon
