From: Paolo Bonzini <pbonzini@redhat.com>
To: Ben Gardon <bgardon@google.com>,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Peter Xu <peterx@redhat.com>,
Sean Christopherson <seanjc@google.com>,
Peter Shier <pshier@google.com>,
Peter Feiner <pfeiner@google.com>,
Junaid Shahid <junaids@google.com>,
Jim Mattson <jmattson@google.com>,
Yulei Zhang <yulei.kernel@gmail.com>,
Wanpeng Li <kernellwp@gmail.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Xiao Guangrong <xiaoguangrong.eric@gmail.com>
Subject: Re: [PATCH v2 26/28] KVM: x86/mmu: Allow enabling / disabling dirty logging under MMU read lock
Date: Wed, 3 Feb 2021 12:38:47 +0100
Message-ID: <b0829378-6991-4f59-273d-db58057d7cb8@redhat.com>
In-Reply-To: <20210202185734.1680553-27-bgardon@google.com>
On 02/02/21 19:57, Ben Gardon wrote:
> To reduce lock contention and interference with page fault handlers,
> allow the TDP MMU functions which enable and disable dirty logging
> to operate under the MMU read lock.
>
> Extend the dirty logging enable/disable functions to be safe to run
> under the MMU read lock.
>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 14 +++---
> arch/x86/kvm/mmu/tdp_mmu.c | 93 ++++++++++++++++++++++++++++++--------
> arch/x86/kvm/mmu/tdp_mmu.h | 2 +-
> 3 files changed, 84 insertions(+), 25 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index e3cf868be6bd..6ba2a72d4330 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5638,9 +5638,10 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
>
> write_lock(&kvm->mmu_lock);
> flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false);
> + write_unlock(&kvm->mmu_lock);
> +
> if (kvm->arch.tdp_mmu_enabled)
> flush |= kvm_tdp_mmu_clear_dirty_slot(kvm, memslot);
> - write_unlock(&kvm->mmu_lock);
>
> /*
> * It's also safe to flush TLBs out of mmu lock here as currently this
> @@ -5661,9 +5662,10 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
> write_lock(&kvm->mmu_lock);
> flush = slot_handle_large_level(kvm, memslot, slot_rmap_write_protect,
> false);
> + write_unlock(&kvm->mmu_lock);
> +
> if (kvm->arch.tdp_mmu_enabled)
> flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, PG_LEVEL_2M);
> - write_unlock(&kvm->mmu_lock);
>
> if (flush)
> kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
> @@ -5677,12 +5679,12 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
>
> write_lock(&kvm->mmu_lock);
> flush = slot_handle_all_level(kvm, memslot, __rmap_set_dirty, false);
> - if (kvm->arch.tdp_mmu_enabled)
> - flush |= kvm_tdp_mmu_slot_set_dirty(kvm, memslot);
> - write_unlock(&kvm->mmu_lock);
> -
> if (flush)
> kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
> + write_unlock(&kvm->mmu_lock);
> +
> + if (kvm->arch.tdp_mmu_enabled)
> + kvm_tdp_mmu_slot_set_dirty(kvm, memslot);
> }
> EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty);
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index cfe66b8d39fa..6093926a6bc5 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -553,18 +553,22 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
> }
>
> /*
> - * tdp_mmu_set_spte_atomic - Set a TDP MMU SPTE atomically and handle the
> + * __tdp_mmu_set_spte_atomic - Set a TDP MMU SPTE atomically and handle the
> * associated bookkeeping
> *
> * @kvm: kvm instance
> * @iter: a tdp_iter instance currently on the SPTE that should be set
> * @new_spte: The value the SPTE should be set to
> + * @record_dirty_log: Record the page as dirty in the dirty bitmap if
> + * appropriate for the change being made. Should be set
> + * unless performing certain dirty logging operations.
> + * Leaving record_dirty_log unset in that case prevents page
> + * writes from being double counted.
> * Returns: true if the SPTE was set, false if it was not. If false is returned,
> * this function will have no side-effects.
> */
> -static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
> - struct tdp_iter *iter,
> - u64 new_spte)
> +static inline bool __tdp_mmu_set_spte_atomic(struct kvm *kvm,
> + struct tdp_iter *iter, u64 new_spte, bool record_dirty_log)
Instead of adding the bool argument, just name this
tdp_mmu_set_spte_atomic_no_dirty_log...
> {
> u64 *root_pt = tdp_iter_root_pt(iter);
> struct kvm_mmu_page *root = sptep_to_sp(root_pt);
> @@ -583,12 +587,31 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
> new_spte) != iter->old_spte)
> return false;
>
> - handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte,
> - iter->level, true);
> + __handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte,
> + iter->level, true);
> + handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);
> + if (record_dirty_log)
> + handle_changed_spte_dirty_log(kvm, as_id, iter->gfn,
> + iter->old_spte, new_spte,
> + iter->level);
... and tdp_mmu_set_spte_atomic becomes:

        if (!tdp_mmu_set_spte_atomic_no_dirty_log(kvm, iter, new_spte))
                return false;

        handle_changed_spte_dirty_log(kvm, as_id, iter->gfn,
                                      iter->old_spte, new_spte,
                                      iter->level);
        return true;
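
Spelled out, the suggestion is roughly the sketch below (untested; the
bodies are reconstructed from the hunks quoted above, with the elided
declarations of root and as_id filled in from context and other details
omitted):

static inline bool tdp_mmu_set_spte_atomic_no_dirty_log(struct kvm *kvm,
                                                        struct tdp_iter *iter,
                                                        u64 new_spte)
{
        u64 *root_pt = tdp_iter_root_pt(iter);
        struct kvm_mmu_page *root = sptep_to_sp(root_pt);
        int as_id = kvm_mmu_page_as_id(root);

        /* Set the SPTE only if it has not changed under us. */
        if (cmpxchg64(rcu_dereference(iter->sptep), iter->old_spte,
                      new_spte) != iter->old_spte)
                return false;

        /* Bookkeeping for everything except the dirty log. */
        __handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte,
                              new_spte, iter->level, true);
        handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level);

        return true;
}

static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
                                           struct tdp_iter *iter,
                                           u64 new_spte)
{
        int as_id = kvm_mmu_page_as_id(sptep_to_sp(tdp_iter_root_pt(iter)));

        if (!tdp_mmu_set_spte_atomic_no_dirty_log(kvm, iter, new_spte))
                return false;

        /* The common case also records the page in the dirty bitmap. */
        handle_changed_spte_dirty_log(kvm, as_id, iter->gfn, iter->old_spte,
                                      new_spte, iter->level);
        return true;
}

That way the dirty-logging paths call the _no_dirty_log variant directly
and no bool argument is needed.
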
> @@ -1301,7 +1344,8 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm, struct kvm_memory_slot *slot)
> int root_as_id;
> bool spte_set = false;
>
> - for_each_tdp_mmu_root_yield_safe(kvm, root, false) {
> + read_lock(&kvm->mmu_lock);
> + for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
> root_as_id = kvm_mmu_page_as_id(root);
> if (root_as_id != slot->as_id)
> continue;
> @@ -1309,6 +1353,7 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm, struct kvm_memory_slot *slot)
> spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
> slot->base_gfn + slot->npages);
> }
> + read_unlock(&kvm->mmu_lock);
Same remark as before.
> return spte_set;
> }
> @@ -1397,7 +1442,8 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
> rcu_read_lock();
>
> tdp_root_for_each_pte(iter, root, start, end) {
> - if (tdp_mmu_iter_cond_resched(kvm, &iter, false, false))
> +retry:
> + if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
> continue;
>
> if (!is_shadow_present_pte(iter.old_spte) ||
> @@ -1406,7 +1452,14 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
>
> new_spte = iter.old_spte | shadow_dirty_mask;
>
> - tdp_mmu_set_spte(kvm, &iter, new_spte);
> + if (!tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) {
> + /*
> + * The iter must explicitly re-read the SPTE because
> + * the atomic cmpxchg failed.
> + */
> + iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep));
> + goto retry;
> + }
> spte_set = true;
Yep, looks like that spte_set assignment should not have been removed. :)
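
For reference, with the retry and the spte_set assignment both in place,
the loop in set_dirty_gfn_range reads roughly as follows (a sketch
assembled from the hunk above; the already-dirty check is taken from the
existing function):

        tdp_root_for_each_pte(iter, root, start, end) {
retry:
                if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
                        continue;

                /* Skip SPTEs that are not present or already dirty. */
                if (!is_shadow_present_pte(iter.old_spte) ||
                    iter.old_spte & shadow_dirty_mask)
                        continue;

                new_spte = iter.old_spte | shadow_dirty_mask;

                if (!tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) {
                        /* The cmpxchg failed; re-read the SPTE and retry. */
                        iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep));
                        goto retry;
                }
                spte_set = true;
        }
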
> }
>
> @@ -1417,15 +1470,15 @@ static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
> /*
> * Set the dirty status of all the SPTEs mapping GFNs in the memslot. This is
> * only used for PML, and so will involve setting the dirty bit on each SPTE.
> - * Returns true if an SPTE has been changed and the TLBs need to be flushed.
> */
> -bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot)
> +void kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot)
> {
> struct kvm_mmu_page *root;
> int root_as_id;
> bool spte_set = false;
>
> - for_each_tdp_mmu_root_yield_safe(kvm, root, false) {
> + read_lock(&kvm->mmu_lock);
And again here.
Paolo
> + for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
> root_as_id = kvm_mmu_page_as_id(root);
> if (root_as_id != slot->as_id)
> continue;
> @@ -1433,7 +1486,11 @@ bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot)
> spte_set |= set_dirty_gfn_range(kvm, root, slot->base_gfn,
> slot->base_gfn + slot->npages);
> }
> - return spte_set;
> +
> + if (spte_set)
> + kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
> +
> + read_unlock(&kvm->mmu_lock);
> }
>
> /*
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
> index 10ada884270b..848b41b20985 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.h
> +++ b/arch/x86/kvm/mmu/tdp_mmu.h
> @@ -38,7 +38,7 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
> struct kvm_memory_slot *slot,
> gfn_t gfn, unsigned long mask,
> bool wrprot);
> -bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot);
> +void kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot);
> void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
> const struct kvm_memory_slot *slot);
>
>