From: Paolo Bonzini <pbonzini@redhat.com>
To: Maxim Levitsky <mlevitsk@redhat.com>, kvm@vger.kernel.org
Cc: Wanpeng Li <wanpengli@tencent.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Joerg Roedel <joro@8bytes.org>, Borislav Petkov <bp@alien8.de>,
	Sean Christopherson <seanjc@google.com>,
	Jim Mattson <jmattson@google.com>,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
	<x86@kernel.org>,
	"open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)" 
	<linux-kernel@vger.kernel.org>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v3 02/12] KVM: x86/mmu: bump mmu notifier count in kvm_zap_gfn_range
Date: Tue, 3 Aug 2021 11:00:23 +0200
Message-ID: <8cae345b-767d-69fe-b7dc-7be559c18e2a@redhat.com>
In-Reply-To: <20210802183329.2309921-3-mlevitsk@redhat.com>

On 02/08/21 20:33, Maxim Levitsky wrote:
> This, together with the previous patch, ensures that
> kvm_zap_gfn_range doesn't race with a page fault running
> on another vCPU, and instead makes the page fault code
> retry.
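
For readers following the series, the consumer side of this is the check that
the page fault path already performs under mmu_lock, after having sampled
mmu_notifier_seq before resolving the pfn.  Roughly (an illustrative sketch
with a made-up function name; the real check lives in mmu_notifier_retry_hva()
in include/linux/kvm_host.h):

	static bool should_retry_fault(struct kvm *kvm, unsigned long mmu_seq,
				       unsigned long addr)
	{
		/* An invalidation of a range containing addr is in flight. */
		if (unlikely(kvm->mmu_notifier_count) &&
		    kvm->mmu_notifier_range_start <= addr &&
		    addr < kvm->mmu_notifier_range_end)
			return true;

		/* An invalidation completed after mmu_seq was sampled. */
		return kvm->mmu_notifier_seq != mmu_seq;
	}

Bumping the count and widening the recorded range here, under mmu_lock, is
what turns a concurrent fault in the zapped range into a retry instead of
letting it install a SPTE for a page that is about to be dropped.
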
> 
> This is based on a patch suggested by Sean Christopherson:
> https://lkml.org/lkml/2021/7/22/1025
> 
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>   arch/x86/kvm/mmu/mmu.c   | 4 ++++
>   include/linux/kvm_host.h | 5 +++++
>   virt/kvm/kvm_main.c      | 7 +++++--
>   3 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 9d78cb1c0f35..9da635e383c2 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5640,6 +5640,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
>   
>   	write_lock(&kvm->mmu_lock);
>   
> +	kvm_inc_notifier_count(kvm, gfn_start, gfn_end);
> +
>   	if (kvm_memslots_have_rmaps(kvm)) {
>   		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
>   			slots = __kvm_memslots(kvm, i);
> @@ -5671,6 +5673,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
>   	if (flush)
>   		kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
>   
> +	kvm_dec_notifier_count(kvm, gfn_start, gfn_end);
> +
>   	write_unlock(&kvm->mmu_lock);
>   }
>   
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 9d6b4ad407b8..962e11a73e8e 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -985,6 +985,11 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
>   void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>   #endif
>   
> +void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
> +				   unsigned long end);
> +void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
> +				   unsigned long end);
> +
>   long kvm_arch_dev_ioctl(struct file *filp,
>   			unsigned int ioctl, unsigned long arg);
>   long kvm_arch_vcpu_ioctl(struct file *filp,
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index a96cbe24c688..71042cd807b3 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -608,7 +608,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
>   	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
>   }
>   
> -static void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
> +void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
>   				   unsigned long end)
>   {
>   	/*
> @@ -636,6 +636,7 @@ static void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
>   			max(kvm->mmu_notifier_range_end, end);
>   	}
>   }
> +EXPORT_SYMBOL_GPL(kvm_inc_notifier_count);
>   
>   static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>   					const struct mmu_notifier_range *range)
> @@ -670,7 +671,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>   	return 0;
>   }
>   
> -static void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
> +void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
>   				   unsigned long end)
>   {
>   	/*
> @@ -687,6 +688,8 @@ static void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
>   	 */
>   	kvm->mmu_notifier_count--;
>   }
> +EXPORT_SYMBOL_GPL(kvm_dec_notifier_count);
> +
>   
>   static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
>   					const struct mmu_notifier_range *range)
> 
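
One remark for any future caller of the now-exported pair outside kvm_main.c
(kvm_zap_gfn_range above already gets this right): the two helpers have to
stay strictly balanced and be called with mmu_lock held, bracketing both the
zap and the remote TLB flush, i.e. the same pattern as the mmu.c hunk above:

	write_lock(&kvm->mmu_lock);
	kvm_inc_notifier_count(kvm, start, end);
	/* ... zap SPTEs in [start, end) and flush remote TLBs ... */
	kvm_dec_notifier_count(kvm, start, end);
	write_unlock(&kvm->mmu_lock);

An unbalanced increment would leave mmu_notifier_count elevated forever and
make every page fault in the recorded range retry indefinitely.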

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

