From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH 1/2] KVM: Block memslot updates across range_start() and range_end()
Date: Mon, 2 Aug 2021 18:30:13 +0000 [thread overview]
Message-ID: <YQg5tQslPv83TTQW@google.com> (raw)
In-Reply-To: <20210727171808.1645060-2-pbonzini@redhat.com>
On Tue, Jul 27, 2021, Paolo Bonzini wrote:
> @@ -764,8 +769,9 @@ static inline struct kvm_memslots *__kvm_memslots(struct kvm *kvm, int as_id)
> {
> as_id = array_index_nospec(as_id, KVM_ADDRESS_SPACE_NUM);
> return srcu_dereference_check(kvm->memslots[as_id], &kvm->srcu,
> - lockdep_is_held(&kvm->slots_lock) ||
> - !refcount_read(&kvm->users_count));
> + lockdep_is_held(&kvm->slots_lock) ||
> + READ_ONCE(kvm->mn_active_invalidate_count) ||
Hmm, I'm not sure we should add mn_active_invalidate_count as an exception to
holding kvm->srcu.  It made sense in the original (flawed) approach because the
exception was a lockdep_is_held() check, i.e. it was verifying that the current
task holds the lock.  With mn_active_invalidate_count, this only verifies that
there's an invalidation in-progress; it doesn't verify that this task/CPU is the
one doing the invalidation.
Since __kvm_handle_hva_range() takes SRCU for read, maybe it's best to omit this?
> + !refcount_read(&kvm->users_count));
> }
>
> static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
...
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 5cc79373827f..c64a7de60846 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -605,10 +605,8 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
>
> /*
> * .change_pte() must be surrounded by .invalidate_range_{start,end}(),
Nit, the comma can be switched to a period.  The next patch starts a new sentence,
so it would be correct even in the long term.
> - * and so always runs with an elevated notifier count. This obviates
> - * the need to bump the sequence count.
> */
> - WARN_ON_ONCE(!kvm->mmu_notifier_count);
> + WARN_ON_ONCE(!READ_ONCE(kvm->mn_active_invalidate_count));
>
> kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
> }
Nits aside,
Reviewed-by: Sean Christopherson <seanjc@google.com>