From: David Matlack <dmatlack@google.com>
To: Ben Gardon <bgardon@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	kvm@vger.kernel.org, Joerg Roedel <joro@8bytes.org>,
	Jim Mattson <jmattson@google.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Janis Schoetterl-Glausch <scgl@linux.vnet.ibm.com>,
	Junaid Shahid <junaids@google.com>,
	Oliver Upton <oupton@google.com>,
	Harish Barathvajasankar <hbarath@google.com>,
	Peter Xu <peterx@redhat.com>, Peter Shier <pshier@google.com>
Subject: Re: [RFC PATCH 12/15] KVM: x86/mmu: Split large pages when dirty logging is enabled
Date: Tue, 30 Nov 2021 15:44:39 -0800	[thread overview]
Message-ID: <CALzav=cz+G_3r8T14_LbhVgZYMY4tNnC8LOSvqm2ib0MPb7Q_A@mail.gmail.com> (raw)
In-Reply-To: <CANgfPd9yTZiSOqBXjhFDeB-3rc1+XG204LZZf97Odr2r65Fwwg@mail.gmail.com>

On Mon, Nov 22, 2021 at 11:31 AM Ben Gardon <bgardon@google.com> wrote:
>
> On Fri, Nov 19, 2021 at 3:58 PM David Matlack <dmatlack@google.com> wrote:
> >
> > When dirty logging is enabled without initially-all-set, attempt to
> > split all large pages in the memslot down to 4KB pages so that vCPUs
> > do not have to take expensive write-protection faults to split large
> > pages.
> >
> > Large page splitting is best-effort only. This commit only adds support
> > for the TDP MMU, and even there splitting may fail due to out-of-memory
> > conditions. Failure to split a large page is fine from a correctness
> > standpoint because we still always follow it up by write-protecting any
> > remaining large pages.
> >
> > Signed-off-by: David Matlack <dmatlack@google.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |   6 ++
> >  arch/x86/kvm/mmu/mmu.c          |  83 +++++++++++++++++++++
> >  arch/x86/kvm/mmu/mmu_internal.h |   3 +
> >  arch/x86/kvm/mmu/spte.c         |  46 ++++++++++++
> >  arch/x86/kvm/mmu/spte.h         |   1 +
> >  arch/x86/kvm/mmu/tdp_mmu.c      | 123 ++++++++++++++++++++++++++++++++
> >  arch/x86/kvm/mmu/tdp_mmu.h      |   5 ++
> >  arch/x86/kvm/x86.c              |   6 ++
> >  8 files changed, 273 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 2a7564703ea6..432a4df817ec 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1232,6 +1232,9 @@ struct kvm_arch {
> >         hpa_t   hv_root_tdp;
> >         spinlock_t hv_root_tdp_lock;
> >  #endif
> > +
> > +       /* MMU caches used when splitting large pages during VM-ioctls. */
> > +       struct kvm_mmu_memory_caches split_caches;
> >  };
> >
> >  struct kvm_vm_stat {
> > @@ -1588,6 +1591,9 @@ void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
> >  void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
> >                                       const struct kvm_memory_slot *memslot,
> >                                       int start_level);
> > +void kvm_mmu_slot_try_split_large_pages(struct kvm *kvm,
> > +                                       const struct kvm_memory_slot *memslot,
> > +                                       int target_level);
> >  void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
> >                                    const struct kvm_memory_slot *memslot);
> >  void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 54f0d2228135..6768ef9c0891 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -738,6 +738,66 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
> >                                           PT64_ROOT_MAX_LEVEL);
> >  }
> >
> > +static inline void assert_split_caches_invariants(struct kvm *kvm)
> > +{
> > +       /*
> > +        * The split caches must only be modified while holding the slots_lock,
> > +        * since it is only used during memslot VM-ioctls.
> > +        */
> > +       lockdep_assert_held(&kvm->slots_lock);
> > +
> > +       /*
> > +        * Only the TDP MMU supports large page splitting using
> > +        * kvm->arch.split_caches, which is why we only have to allocate
> > +        * page_header_cache and shadow_page_cache. Assert that the TDP
> > +        * MMU is at least enabled when the split cache is allocated.
> > +        */
> > +       BUG_ON(!is_tdp_mmu_enabled(kvm));
> > +}
> > +
> > +int mmu_topup_split_caches(struct kvm *kvm)
> > +{
> > +       struct kvm_mmu_memory_caches *split_caches = &kvm->arch.split_caches;
> > +       int r;
> > +
> > +       assert_split_caches_invariants(kvm);
> > +
> > +       r = kvm_mmu_topup_memory_cache(&split_caches->page_header_cache, 1);
> > +       if (r)
> > +               goto out;
> > +
> > +       r = kvm_mmu_topup_memory_cache(&split_caches->shadow_page_cache, 1);
> > +       if (r)
> > +               goto out;
> > +
> > +       return 0;
> > +
> > +out:
> > +       pr_warn("Failed to top-up split caches. Will not split large pages.\n");
> > +       return r;
> > +}
> > +
> > +static void mmu_free_split_caches(struct kvm *kvm)
> > +{
> > +       assert_split_caches_invariants(kvm);
> > +
> > +       kvm_mmu_free_memory_cache(&kvm->arch.split_caches.page_header_cache);
> > +       kvm_mmu_free_memory_cache(&kvm->arch.split_caches.shadow_page_cache);
> > +}
> > +
> > +bool mmu_split_caches_need_topup(struct kvm *kvm)
> > +{
> > +       assert_split_caches_invariants(kvm);
> > +
> > +       if (kvm->arch.split_caches.page_header_cache.nobjs == 0)
> > +               return true;
> > +
> > +       if (kvm->arch.split_caches.shadow_page_cache.nobjs == 0)
> > +               return true;
> > +
> > +       return false;
> > +}
> > +
> >  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
> >  {
> >         struct kvm_mmu_memory_caches *mmu_caches;
> > @@ -5696,6 +5756,7 @@ void kvm_mmu_init_vm(struct kvm *kvm)
> >
> >         spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
> >
> > +       mmu_init_memory_caches(&kvm->arch.split_caches);
> >         kvm_mmu_init_tdp_mmu(kvm);
> >
> >         node->track_write = kvm_mmu_pte_write;
> > @@ -5819,6 +5880,28 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
> >                 kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
> >  }
> >
> > +void kvm_mmu_slot_try_split_large_pages(struct kvm *kvm,
> > +                                       const struct kvm_memory_slot *memslot,
> > +                                       int target_level)
> > +{
> > +       u64 start, end;
> > +
> > +       if (!is_tdp_mmu_enabled(kvm))
> > +               return;
> > +
> > +       if (mmu_topup_split_caches(kvm))
> > +               return;
> > +
> > +       start = memslot->base_gfn;
> > +       end = start + memslot->npages;
> > +
> > +       read_lock(&kvm->mmu_lock);
> > +       kvm_tdp_mmu_try_split_large_pages(kvm, memslot, start, end, target_level);
> > +       read_unlock(&kvm->mmu_lock);
> > +
> > +       mmu_free_split_caches(kvm);
> > +}
> > +
> >  static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
> >                                          struct kvm_rmap_head *rmap_head,
> >                                          const struct kvm_memory_slot *slot)
> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > index 52c6527b1a06..89b9b907c567 100644
> > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > @@ -161,4 +161,7 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> >  void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> >  void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
> >
> > +int mmu_topup_split_caches(struct kvm *kvm);
> > +bool mmu_split_caches_need_topup(struct kvm *kvm);
> > +
> >  #endif /* __KVM_X86_MMU_INTERNAL_H */
> > diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> > index df2cdb8bcf77..6bb9b597a854 100644
> > --- a/arch/x86/kvm/mmu/spte.c
> > +++ b/arch/x86/kvm/mmu/spte.c
> > @@ -191,6 +191,52 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> >         return wrprot;
> >  }
> >
> > +static u64 mark_spte_executable(u64 spte)
> > +{
> > +       bool is_access_track = is_access_track_spte(spte);
> > +
> > +       if (is_access_track)
> > +               spte = restore_acc_track_spte(spte);
> > +
> > +       spte &= ~shadow_nx_mask;
> > +       spte |= shadow_x_mask;
> > +
> > +       if (is_access_track)
> > +               spte = mark_spte_for_access_track(spte);
> > +
> > +       return spte;
> > +}
> > +
> > +/*
> > + * Construct an SPTE that maps a sub-page of the given large SPTE. This is
> > + * used during large page splitting, to build the SPTEs that make up the new
> > + * page table.
> > + */
> > +u64 make_large_page_split_spte(u64 large_spte, int level, int index, unsigned int access)
>
> Just because this always trips me up reading code, I'd suggest naming
> the argument large_spte_level or something.
> Avoiding a variable called "level" in this function makes it much more explicit.

Will do.
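
Something like this is what I have in mind (just a sketch of the rename,
the body stays the same):

u64 make_large_page_split_spte(u64 large_spte, int large_spte_level,
                               int index, unsigned int access)
{
        u64 child_spte = large_spte;
        int child_level = large_spte_level - 1;
        ...
}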

>
> > +{
> > +       u64 child_spte;
> > +       int child_level;
> > +
> > +       BUG_ON(is_mmio_spte(large_spte));
> > +       BUG_ON(!is_large_present_pte(large_spte));
>
> In the interest of not crashing the host, I think it would be safe to
> WARN and return 0 here.
> BUG is fine too if that's preferred.

Ack. I'll take a look and see if I can avoid the BUG_ONs. They're
optional sanity checks anyway.
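
If I do end up converting them to warnings, I imagine it would look
roughly like this (untested sketch, returning a non-present SPTE as you
suggested):

        if (WARN_ON_ONCE(is_mmio_spte(large_spte)))
                return 0;

        if (WARN_ON_ONCE(!is_large_present_pte(large_spte)))
                return 0;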

>
> > +
> > +       child_spte = large_spte;
> > +       child_level = level - 1;
> > +
> > +       child_spte += (index * KVM_PAGES_PER_HPAGE(child_level)) << PAGE_SHIFT;
>
> This += makes me nervous. It at least merits a comment explaining
> what's going on.
> I'd find a |= more readable to make it more explicit and since sptes
> aren't numbers.
> You could probably also be really explicit about extracting the PFN
> and adding to it, clearing the PFN bits and then putting it back in
> and I bet the compiler would optimize out the extra bit fiddling.

I can change it to |= and add a comment. I'd prefer not to extract the
PFN and replace it, though, since there's really no reason to. One of
the nice things about this function in general is that we don't have to
construct the child SPTE from scratch; we just have to slightly adjust
the parent SPTE. The address of the large page is already in the SPTE,
so we just need to add in the offset to the lower-level page.
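
Concretely, something like this (sketch):

        /*
         * The child SPTE already contains the base address of the large
         * page being split, so we just have to OR in the offset of the
         * page at the next lower level for the given index.
         */
        child_spte |= (index * KVM_PAGES_PER_HPAGE(child_level)) << PAGE_SHIFT;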

>
> > +
> > +       if (child_level == PG_LEVEL_4K) {
> > +               child_spte &= ~PT_PAGE_SIZE_MASK;
> > +
> > +               /* Allow execution for 4K pages if it was disabled for NX HugePages. */
> > +               if (is_nx_huge_page_enabled() && access & ACC_EXEC_MASK)
> > +                       child_spte = mark_spte_executable(child_spte);
> > +       }
> > +
> > +       return child_spte;
> > +}
> > +
> > +
> >  u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled)
> >  {
> >         u64 spte = SPTE_MMU_PRESENT_MASK;
> > diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> > index 3e4943ee5a01..4efb4837e38d 100644
> > --- a/arch/x86/kvm/mmu/spte.h
> > +++ b/arch/x86/kvm/mmu/spte.h
> > @@ -339,6 +339,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> >                unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
> >                u64 old_spte, bool prefetch, bool can_unsync,
> >                bool host_writable, u64 *new_spte);
> > +u64 make_large_page_split_spte(u64 large_spte, int level, int index, unsigned int access);
> >  u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
> >  u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access);
> >  u64 mark_spte_for_access_track(u64 spte);
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 5ca0fa659245..366857b9fb3b 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -695,6 +695,39 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
> >         return false;
> >  }
> >
> > +static inline bool
> > +tdp_mmu_need_split_caches_topup_or_resched(struct kvm *kvm, struct tdp_iter *iter)
> > +{
> > +       if (mmu_split_caches_need_topup(kvm))
> > +               return true;
> > +
> > +       return tdp_mmu_iter_need_resched(kvm, iter);
> > +}
> > +
> > +static inline int
> > +tdp_mmu_topup_split_caches_resched(struct kvm *kvm, struct tdp_iter *iter, bool flush)
>
> This functionality could be shoe-horned into
> tdp_mmu_iter_cond_resched, reducing code duplication.
> I don't know if the extra parameters / complexity on that function
> would be worth it, but I'm slightly inclined in that direction.

Ok I'll take a look and see if I can combine them in a nice way.
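
The rough shape I'm imagining is an extra flag on
tdp_mmu_iter_cond_resched (very rough sketch, naming TBD, and I still
need to figure out how to plumb a top-up failure back to the caller so
the split loop can bail out):

static inline bool
tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter,
                          bool flush, bool shared, bool topup_split_caches)
{
        bool need_topup = topup_split_caches &&
                          mmu_split_caches_need_topup(kvm);

        if (!need_topup && !tdp_mmu_iter_need_resched(kvm, iter))
                return false;

        /* ... existing unlock, flush, and cond_resched() logic ... */

        /* Top up while the MMU lock is dropped, since allocation may sleep. */
        if (topup_split_caches)
                mmu_topup_split_caches(kvm);

        /* ... existing relock and tdp_iter_restart() logic ... */

        return true;
}

If that ends up too ugly I'll keep the split-specific helpers as-is.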

>
> > +{
> > +       int r;
> > +
> > +       rcu_read_unlock();
> > +
> > +       if (flush)
> > +               kvm_flush_remote_tlbs(kvm);
> > +
> > +       read_unlock(&kvm->mmu_lock);
> > +
> > +       cond_resched();
> > +       r = mmu_topup_split_caches(kvm);
>
> Ah, right. I was confused by this for a second, but it's safe because
> the caches are protected by the slots lock.
>
> > +
> > +       read_lock(&kvm->mmu_lock);
> > +
> > +       rcu_read_lock();
> > +       WARN_ON(iter->gfn > iter->next_last_level_gfn);
> > +       tdp_iter_restart(iter);
> > +
> > +       return r;
> > +}
> > +
> >  /*
> >   * Tears down the mappings for the range of gfns, [start, end), and frees the
> >   * non-root pages mapping GFNs strictly within that range. Returns true if
> > @@ -1241,6 +1274,96 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
> >         return spte_set;
> >  }
> >
> > +static bool tdp_mmu_split_large_page_atomic(struct kvm *kvm, struct tdp_iter *iter)
> > +{
> > +       const u64 large_spte = iter->old_spte;
> > +       const int level = iter->level;
> > +       struct kvm_mmu_page *child_sp;
> > +       u64 child_spte;
> > +       int i;
> > +
> > +       BUG_ON(mmu_split_caches_need_topup(kvm));
>
> I think it would be safe to just WARN and return here as well.
>
> > +
> > +       child_sp = alloc_child_tdp_mmu_page(&kvm->arch.split_caches, iter);
> > +
> > +       for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
> > +               child_spte = make_large_page_split_spte(large_spte, level, i, ACC_ALL);
>
> Relating to my other comment above on make_large_page_split_spte, you
> could also iterate through the range of PFNs here and pass that as an
> argument to the helper function.
>
> > +
> > +               /*
> > +                * No need for atomics since child_sp has not been installed
> > +                * in the table yet and thus is not reachable by any other
> > +                * thread.
> > +                */
> > +               child_sp->spt[i] = child_spte;
> > +       }
> > +
> > +       return tdp_mmu_install_sp_atomic(kvm, iter, child_sp, false);
> > +}
> > +
> > +static void tdp_mmu_split_large_pages_root(struct kvm *kvm, struct kvm_mmu_page *root,
> > +                                          gfn_t start, gfn_t end, int target_level)
> > +{
> > +       struct tdp_iter iter;
> > +       bool flush = false;
> > +       int r;
> > +
> > +       rcu_read_lock();
> > +
> > +       /*
> > +        * Traverse the page table splitting all large pages above the target
> > +        * level into one lower level. For example, if we encounter a 1GB page
> > +        * we split it into 512 2MB pages.
> > +        *
> > +        * Since the TDP iterator uses a pre-order traversal, we are guaranteed
> > +        * to visit an SPTE before ever visiting its children, which means we
> > +        * will correctly recursively split large pages that are more than one
> > +        * level above the target level (e.g. splitting 1GB to 2MB to 4KB).
> > +        */
> > +       for_each_tdp_pte_min_level(iter, root, target_level + 1, start, end) {
> > +retry:
> > +               if (tdp_mmu_need_split_caches_topup_or_resched(kvm, &iter)) {
> > +                       r = tdp_mmu_topup_split_caches_resched(kvm, &iter, flush);
> > +                       flush = false;
> > +
> > +                       /*
> > +                        * If topping up the split caches failed, we can't split
> > +                        * any more pages. Bail out of the loop.
> > +                        */
> > +                       if (r)
> > +                               break;
> > +
> > +                       continue;
> > +               }
> > +
> > +               if (!is_large_present_pte(iter.old_spte))
> > +                       continue;
> > +
> > +               if (!tdp_mmu_split_large_page_atomic(kvm, &iter))
> > +                       goto retry;
> > +
> > +               flush = true;
> > +       }
> > +
> > +       rcu_read_unlock();
> > +
> > +       if (flush)
> > +               kvm_flush_remote_tlbs(kvm);
> > +}
> > +
> > +void kvm_tdp_mmu_try_split_large_pages(struct kvm *kvm,
> > +                                      const struct kvm_memory_slot *slot,
> > +                                      gfn_t start, gfn_t end,
> > +                                      int target_level)
> > +{
> > +       struct kvm_mmu_page *root;
> > +
> > +       lockdep_assert_held_read(&kvm->mmu_lock);
> > +
> > +       for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
> > +               tdp_mmu_split_large_pages_root(kvm, root, start, end, target_level);
> > +
> > +}
> > +
> >  /*
> >   * Clear the dirty status of all the SPTEs mapping GFNs in the memslot. If
> >   * AD bits are enabled, this will involve clearing the dirty bit on each SPTE.
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> > index 476b133544dd..7812087836b2 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.h
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> > @@ -72,6 +72,11 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
> >                                    struct kvm_memory_slot *slot, gfn_t gfn,
> >                                    int min_level);
> >
> > +void kvm_tdp_mmu_try_split_large_pages(struct kvm *kvm,
> > +                                      const struct kvm_memory_slot *slot,
> > +                                      gfn_t start, gfn_t end,
> > +                                      int target_level);
> > +
> >  static inline void kvm_tdp_mmu_walk_lockless_begin(void)
> >  {
> >         rcu_read_lock();
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 04e8dabc187d..4702ebfd394b 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -11735,6 +11735,12 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
> >                 if (kvm_dirty_log_manual_protect_and_init_set(kvm))
> >                         return;
> >
> > +               /*
> > +                * Attempt to split all large pages into 4K pages so that vCPUs
> > +                * do not have to take write-protection faults.
> > +                */
> > +               kvm_mmu_slot_try_split_large_pages(kvm, new, PG_LEVEL_4K);
>
> Thank you for parameterizing the target level here. I'm working on a
> proof of concept for 2M dirty tracking right now (still in exploratory
> phase) and this parameter will help future-proof the splitting
> algorithm if we ever decide we don't want to split everything to 4k
> for dirty logging.

Exactly my thinking as well! :)
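
And with that in place, a hypothetical 2M-granularity mode would just be
a one-line change at this call site, e.g.:

                kvm_mmu_slot_try_split_large_pages(kvm, new, PG_LEVEL_2M);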

>
> > +
> >                 if (kvm_x86_ops.cpu_dirty_log_size) {
> >                         kvm_mmu_slot_leaf_clear_dirty(kvm, new);
> >                         kvm_mmu_slot_remove_write_access(kvm, new, PG_LEVEL_2M);
> > --
> > 2.34.0.rc2.393.gf8c9666880-goog
> >
