From: Paolo Bonzini <pbonzini@redhat.com>
To: David Woodhouse <dwmw2@infradead.org>, kvm <kvm@vger.kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Joao Martins <joao.m.martins@oracle.com>,
	jmattson@google.com, wanpengli@tencent.com,
	seanjc@google.com, vkuznets@redhat.com,
	mtosatti@redhat.com, joro@8bytes.org,
	karahmed@amazon.com, Marc Zyngier <maz@kernel.org>,
	James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Huacai Chen <chenhuacai@kernel.org>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Anup Patel <anup.patel@wdc.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel <linux-arm-kernel@lists.infradead.org>,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	butt3rflyh4ck <butterflyhuangxx@gmail.com>
Subject: Re: [PATCH v5 08/12] KVM: Reinstate gfn_to_pfn_cache with invalidation support
Date: Thu, 9 Dec 2021 19:34:08 +0100	[thread overview]
Message-ID: <b1bacc6f-be56-4108-6e52-4315a021184b@redhat.com> (raw)
In-Reply-To: <20211121125451.9489-9-dwmw2@infradead.org>

Sorry for the late review...

On 11/21/21 13:54, David Woodhouse wrote:
> +EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_check);
> +
> +static void __release_gpc(struct kvm *kvm, kvm_pfn_t pfn, void *khva,
> +			  gpa_t gpa, bool dirty)
> +{
> +	/* Unmap the old page if it was mapped before, and release it */
> +	if (!is_error_noslot_pfn(pfn)) {
> +		if (khva) {
> +			if (pfn_valid(pfn))
> +				kunmap(pfn_to_page(pfn));
> +#ifdef CONFIG_HAS_IOMEM
> +			else
> +				memunmap(khva);
> +#endif
> +		}

Considering that the khva is passed directly to memunmap, perhaps it's
cleaner to ensure it's page-aligned:

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 13cae72d39e9..267477bd2972 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -147,7 +147,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
  
  	old_gpa = gpc->gpa;
  	old_pfn = gpc->pfn;
-	old_khva = gpc->khva;
+	old_khva = (void *)((unsigned long)gpc->khva & PAGE_MASK);
  	old_uhva = gpc->uhva;
  	old_valid = gpc->valid;
  	old_dirty = gpc->dirty;
@@ -209,7 +209,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
  
  		if (gpc->kernel_map) {
  			if (new_pfn == old_pfn) {
-				new_khva = (void *)((unsigned long)old_khva - page_offset);
+				new_khva = old_khva;
  				old_pfn = KVM_PFN_ERR_FAULT;
  				old_khva = NULL;
  			} else if (pfn_valid(new_pfn)) {
@@ -265,7 +265,7 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
  
  	gpc->valid = false;
  
-	old_khva = gpc->khva;
+	old_khva = (void *)((unsigned long)gpc->khva & PAGE_MASK);
  	old_dirty = gpc->dirty;
  	old_gpa = gpc->gpa;
  	old_pfn = gpc->pfn;


> 
> +	retry_map:
> +		mmu_seq = kvm->mmu_notifier_seq;
> +		smp_rmb();
> +
> +		/* We always request a writeable mapping */
> +		new_pfn = hva_to_pfn(uhva, false, NULL, true, NULL);
> +		if (is_error_noslot_pfn(new_pfn)) {
> +			ret = -EFAULT;
> +			goto map_done;
> +		}
> +
> +		KVM_MMU_READ_LOCK(kvm);
> +		retry = mmu_notifier_retry_hva(kvm, mmu_seq, uhva);
> +		KVM_MMU_READ_UNLOCK(kvm);
> +		if (retry) {
> +			cond_resched();
> +			goto retry_map;
> +		}
> +

This should also be a separate function, like

static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, unsigned long uhva)
{
         kvm_pfn_t new_pfn;
         unsigned long mmu_seq;
         bool retry;

retry_map:
         mmu_seq = kvm->mmu_notifier_seq;
         smp_rmb();

         /* We always request a writeable mapping */
         new_pfn = hva_to_pfn(uhva, false, NULL, true, NULL);
         if (is_error_noslot_pfn(new_pfn))
                 return new_pfn;

         KVM_MMU_READ_LOCK(kvm);
         retry = mmu_notifier_retry_hva(kvm, mmu_seq, uhva);
         KVM_MMU_READ_UNLOCK(kvm);
         if (retry) {
                 cond_resched();
                 goto retry_map;
         }
         return new_pfn;
}

> 
> +		write_lock_irq(&gpc->lock);
> +		if (ret) {
> +			gpc->valid = false;
> +			gpc->pfn = KVM_PFN_ERR_FAULT;
> +			gpc->khva = NULL;
> +		} else {
> +			/* At this point, gpc->valid may already have been cleared */
> +			gpc->pfn = new_pfn;
> +			gpc->khva = new_khva + page_offset;
> +		}

This should set gpc->khva only if new_khva != NULL (i.e. only if
gpc->kernel_map is true).

Paolo

  reply	other threads:[~2021-12-09 18:34 UTC|newest]

Thread overview: 110+ messages
2021-11-21 12:54 [PATCH v5 00/12] KVM: x86/xen: Add in-kernel Xen event channel delivery David Woodhouse
2021-11-21 12:54 ` [PATCH v5 01/12] KVM: Introduce CONFIG_HAVE_KVM_DIRTY_RING David Woodhouse
2021-11-21 12:54 ` [PATCH v5 02/12] KVM: Add Makefile.kvm for common files, use it for x86 David Woodhouse
2021-11-21 12:54 ` [PATCH v5 03/12] KVM: s390: Use Makefile.kvm for common files David Woodhouse
2021-12-09 17:58   ` Paolo Bonzini
2021-11-21 12:54 ` [PATCH v5 04/12] KVM: mips: " David Woodhouse
2021-11-21 12:54 ` [PATCH v5 05/12] KVM: RISC-V: " David Woodhouse
2021-11-23  9:12   ` Anup Patel
2021-11-29 16:44     ` David Woodhouse
2021-11-21 12:54 ` [PATCH v5 06/12] KVM: powerpc: " David Woodhouse
2021-12-09 18:05   ` Paolo Bonzini
2021-11-21 12:54 ` [PATCH v5 07/12] KVM: arm64: " David Woodhouse
2021-11-21 12:54 ` [PATCH v5 08/12] KVM: Reinstate gfn_to_pfn_cache with invalidation support David Woodhouse
2021-12-09 18:34   ` Paolo Bonzini [this message]
2021-12-09 20:40     ` David Woodhouse
2021-12-09 22:34       ` Paolo Bonzini
2021-12-09 22:38         ` David Woodhouse
2021-12-10 12:25         ` David Woodhouse
2021-12-10 14:53           ` Paolo Bonzini
2021-12-10 14:57             ` David Woodhouse
2021-11-21 12:54 ` [PATCH v5 09/12] KVM: x86/xen: Maintain valid mapping of Xen shared_info page David Woodhouse
2021-11-21 12:54 ` [PATCH v5 10/12] KVM: x86/xen: Add KVM_IRQ_ROUTING_XEN_EVTCHN and event channel delivery David Woodhouse
2021-11-21 12:54 ` [PATCH v5 11/12] KVM: x86: Fix wall clock writes in Xen shared_info not to mark page dirty David Woodhouse
2021-11-21 12:54 ` [PATCH v5 12/12] KVM: x86: First attempt at converting nested virtual APIC page to gpc David Woodhouse
2021-11-26 20:14   ` kernel test robot
2021-12-09 18:34 ` [PATCH v5 00/12] KVM: x86/xen: Add in-kernel Xen event channel delivery Paolo Bonzini
2021-12-09 18:47   ` David Woodhouse
2021-12-09 18:55     ` Paolo Bonzini