From: Ben Gardon <bgardon@google.com>
To: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: LKML <linux-kernel@vger.kernel.org>, kvm <kvm@vger.kernel.org>,
	Cannon Matthews <cannonmatthews@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>, Peter Xu <peterx@redhat.com>,
	Peter Shier <pshier@google.com>,
	Peter Feiner <pfeiner@google.com>,
	Junaid Shahid <junaids@google.com>,
	Jim Mattson <jmattson@google.com>,
	Yulei Zhang <yulei.kernel@gmail.com>,
	Wanpeng Li <kernellwp@gmail.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Xiao Guangrong <xiaoguangrong.eric@gmail.com>
Subject: Re: [PATCH 14/22] kvm: mmu: Add access tracking for tdp_mmu
Date: Tue, 6 Oct 2020 16:38:21 -0700	[thread overview]
Message-ID: <CANgfPd8u5-Lzj0Mb58cU8so4ZeHCmTG8DCAvkL2uPWeK6rDBfA@mail.gmail.com> (raw)
In-Reply-To: <20200930174858.GG32672@linux.intel.com>

On Wed, Sep 30, 2020 at 10:49 AM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Fri, Sep 25, 2020 at 02:22:54PM -0700, Ben Gardon wrote:
> > @@ -1945,12 +1944,24 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
> >
> >  int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
> >  {
> > -     return kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);
> > +     int young = false;
> > +
> > +     young = kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);
> > +     if (kvm->arch.tdp_mmu_enabled)
>
> If we end up with a per-VM flag, would it make sense to add a static key
> wrapper similar to the in-kernel lapic?  I assume once this lands the vast
> majority of VMs will use the TDP MMU.
>
> > +             young |= kvm_tdp_mmu_age_hva_range(kvm, start, end);
> > +
> > +     return young;
> >  }
>
> ...
>
> > +
> > +/*
> > + * Mark the SPTEs in the range of GFNs [start, end) unaccessed and return
> > + * non-zero if any of the GFNs in the range have been accessed.
> > + */
> > +static int age_gfn_range(struct kvm *kvm, struct kvm_memory_slot *slot,
> > +                      struct kvm_mmu_page *root, gfn_t start, gfn_t end,
> > +                      unsigned long unused)
> > +{
> > +     struct tdp_iter iter;
> > +     int young = 0;
> > +     u64 new_spte = 0;
> > +     int as_id = kvm_mmu_page_as_id(root);
> > +
> > +     for_each_tdp_pte_root(iter, root, start, end) {
>
> Ah, I think we should follow the existing shadow iterators by naming this
>
>         for_each_tdp_pte_using_root()
>
> My first reaction was that this was iterating over TDP roots, which was a bit
> confusing.  I suspect others will make the same mistake unless they look at the
> implementation of for_each_tdp_pte_root().
>
> Similar comments on the _vcpu() variant.  For that one I think it'd be
> preferable to take the struct kvm_mmu, i.e. have for_each_tdp_pte_using_mmu(),
> as both kvm_tdp_mmu_page_fault() and kvm_tdp_mmu_get_walk() explicitly
> reference vcpu->arch.mmu in the surrounding code.
>
> E.g. I find this more intuitive
>
>         struct kvm_mmu *mmu = vcpu->arch.mmu;
>         int leaf = mmu->shadow_root_level;
>
>         for_each_tdp_pte_using_mmu(iter, mmu, gfn, gfn + 1) {
>                 leaf = iter.level;
>                 sptes[leaf - 1] = iter.old_spte;
>         }
>
>         return leaf;
>
> versus this, which makes me want to look at the implementation of for_each().
>
>
>         int leaf = vcpu->arch.mmu->shadow_root_level;
>
>         for_each_tdp_pte_vcpu(iter, vcpu, gfn, gfn + 1) {
>                 ...
>         }

I will change these macros as you suggested. I agree adding _using_
makes them clearer.

>
> > +             if (!is_shadow_present_pte(iter.old_spte) ||
> > +                 !is_last_spte(iter.old_spte, iter.level))
> > +                     continue;
> > +
> > +             /*
> > +              * If we have a non-accessed entry we don't need to change the
> > +              * pte.
> > +              */
> > +             if (!is_accessed_spte(iter.old_spte))
> > +                     continue;
> > +
> > +             new_spte = iter.old_spte;
> > +
> > +             if (spte_ad_enabled(new_spte)) {
> > +                     clear_bit((ffs(shadow_accessed_mask) - 1),
> > +                               (unsigned long *)&new_spte);
> > +             } else {
> > +                     /*
> > +                      * Capture the dirty status of the page, so that it doesn't get
> > +                      * lost when the SPTE is marked for access tracking.
> > +                      */
> > +                     if (is_writable_pte(new_spte))
> > +                             kvm_set_pfn_dirty(spte_to_pfn(new_spte));
> > +
> > +                     new_spte = mark_spte_for_access_track(new_spte);
> > +             }
> > +
> > +             *iter.sptep = new_spte;
> > +             __handle_changed_spte(kvm, as_id, iter.gfn, iter.old_spte,
> > +                                   new_spte, iter.level);
> > +             young = true;
>
> young is an int, not a bool.  Not really your fault as KVM has a really bad
> habit of using ints instead of bools.

Yeah, I saw that too. In mmu.c, young ends up being set to true as
well, just through a function return, so it's less obvious. Do you
think it would be preferable to set young to 1 or convert it to a bool?
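For reference, the bool version still reads naturally with |=: it
doesn't short-circuit, but it does keep the accumulator a clean
true/false. A quick user-space sketch of the pattern (ACCESSED_BIT and
is_accessed_spte() below are stand-ins, not KVM's real definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in accessed bit; the real mask is shadow_accessed_mask. */
#define ACCESSED_BIT (1ull << 5)

static bool is_accessed_spte(unsigned long long spte)
{
	return spte & ACCESSED_BIT;
}

/*
 * Accumulating walk: with young declared bool, young |= ... stays a
 * plain true/false rather than an int that happens to be 0 or 1.
 */
static bool any_spte_accessed(const unsigned long long *sptes, int n)
{
	bool young = false;
	int i;

	for (i = 0; i < n; i++)
		young |= is_accessed_spte(sptes[i]);

	return young;
}
```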

>
> > +     }
> > +
> > +     return young;
> > +}
> > +
> > +int kvm_tdp_mmu_age_hva_range(struct kvm *kvm, unsigned long start,
> > +                           unsigned long end)
> > +{
> > +     return kvm_tdp_mmu_handle_hva_range(kvm, start, end, 0,
> > +                                         age_gfn_range);
> > +}
> > +
> > +static int test_age_gfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> > +                     struct kvm_mmu_page *root, gfn_t gfn, gfn_t unused,
> > +                     unsigned long unused2)
> > +{
> > +     struct tdp_iter iter;
> > +     int young = 0;
> > +
> > +     for_each_tdp_pte_root(iter, root, gfn, gfn + 1) {
> > +             if (!is_shadow_present_pte(iter.old_spte) ||
> > +                 !is_last_spte(iter.old_spte, iter.level))
> > +                     continue;
> > +
> > +             if (is_accessed_spte(iter.old_spte))
> > +                     young = true;
>
> Same bool vs. int weirdness here.  Also, |= doesn't short circuit for ints
> or bools, so this can be
>
>                 young |= is_accessed_spte(...)
>
> Actually, can't we just return true immediately?

Great point, I'll do that.
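Concretely, I'll restructure the walk to return as soon as one accessed
SPTE turns up, rather than accumulating into young. A user-space sketch
of the shape (PRESENT_BIT/ACCESSED_BIT and the helpers below are
stand-ins for the real SPTE accessors; only the control flow matters):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the real SPTE bits and helpers. */
#define PRESENT_BIT  (1ull << 0)
#define ACCESSED_BIT (1ull << 5)

static bool is_shadow_present(unsigned long long spte)
{
	return spte & PRESENT_BIT;
}

static bool is_accessed(unsigned long long spte)
{
	return spte & ACCESSED_BIT;
}

/*
 * Early-return version of the test_age walk: stop at the first
 * present, accessed SPTE instead of carrying a 'young' flag to the end.
 */
static bool test_age_range(const unsigned long long *sptes, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (!is_shadow_present(sptes[i]))
			continue;
		if (is_accessed(sptes[i]))
			return true;
	}

	return false;
}
```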

>
> > +     }
> > +
> > +     return young;
> > +}
> > +
> > +int kvm_tdp_mmu_test_age_hva(struct kvm *kvm, unsigned long hva)
> > +{
> > +     return kvm_tdp_mmu_handle_hva_range(kvm, hva, hva + 1, 0,
> > +                                         test_age_gfn);
> > +}
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> > index ce804a97bfa1d..f316773b7b5a8 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.h
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> > @@ -21,4 +21,8 @@ int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu, int write, int map_writable,
> >
> >  int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
> >                             unsigned long end);
> > +
> > +int kvm_tdp_mmu_age_hva_range(struct kvm *kvm, unsigned long start,
> > +                           unsigned long end);
> > +int kvm_tdp_mmu_test_age_hva(struct kvm *kvm, unsigned long hva);
> >  #endif /* __KVM_X86_MMU_TDP_MMU_H */
> > --
> > 2.28.0.709.gb0816b6eb0-goog
> >
