kvmarm.lists.cs.columbia.edu archive mirror
* [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages
@ 2020-07-23 11:02 Will Deacon
  2020-07-23 11:02 ` [PATCH 1/3] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap() Will Deacon
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Will Deacon @ 2020-07-23 11:02 UTC (permalink / raw)
  To: kvmarm; +Cc: Will Deacon, maz, kernel-team, linux-arm-kernel

Hi all,

Here are some small cleanups I made to the memcache logic while hacking on the
page-table code. The ioremap() behaviour looks like a bug to me, although it's
just a performance thing so nothing urgent.

Cheers,

Will

--->8

Will Deacon (3):
  KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap()
  KVM: arm64: Simplify mmu_topup_memory_cache()
  KVM: arm64: Remove mmu_free_memory_cache()

 arch/arm64/kvm/mmu.c | 34 ++++++++++++++--------------------
 1 file changed, 14 insertions(+), 20 deletions(-)

-- 
2.28.0.rc0.105.gf9edc3c819-goog


* [PATCH 1/3] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap()
  2020-07-23 11:02 [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages Will Deacon
@ 2020-07-23 11:02 ` Will Deacon
  2020-07-23 11:02 ` [PATCH 2/3] KVM: arm64: Simplify mmu_topup_memory_cache() Will Deacon
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Will Deacon @ 2020-07-23 11:02 UTC (permalink / raw)
  To: kvmarm; +Cc: Will Deacon, maz, kernel-team, linux-arm-kernel

kvm_phys_addr_ioremap() unconditionally empties out the memcache pages
for the current vCPU on return. This causes subsequent topups to allocate
fresh pages and is at odds with the behaviour when mapping memory in
user_mem_abort().

Remove the call to mmu_free_memory_cache() from kvm_phys_addr_ioremap(),
allowing the cached pages to be used by a later mapping.
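
To make the behaviour concrete, here is a minimal userspace sketch of the
top-up pattern (purely illustrative: the names, the malloc()/free()
stand-ins for page allocation and the minimum of 2 are mine, not the
kernel's):

	#include <stdio.h>
	#include <stdlib.h>

	#define NR_OBJS	8			/* stand-in for KVM_NR_MEM_OBJS */

	struct memcache {
		int nobjs;
		void *objects[NR_OBJS];
	};

	/* Fill the cache to capacity; pages left over from earlier calls count. */
	static int cache_topup(struct memcache *mc, int min)
	{
		if (mc->nobjs >= min)
			return 0;

		while (mc->nobjs < NR_OBJS) {
			void *page = malloc(4096);	/* ~ __get_free_page() */
			if (!page)
				return -1;
			mc->objects[mc->nobjs++] = page;
		}

		return 0;
	}

	/* Hand out one pre-allocated page, as the table walker would consume. */
	static void *cache_get(struct memcache *mc)
	{
		return mc->nobjs ? mc->objects[--mc->nobjs] : NULL;
	}

	int main(void)
	{
		struct memcache mc = { 0 };
		int leftover;

		/* First mapping: top up, then consume a couple of pages. */
		cache_topup(&mc, 2);
		cache_get(&mc);
		cache_get(&mc);

		/*
		 * Second mapping: the cache was *not* emptied in between, so
		 * the leftovers satisfy the minimum and no fresh allocation
		 * happens at all.
		 */
		leftover = mc.nobjs;
		if (cache_topup(&mc, 2) == 0 && mc.nobjs == leftover)
			printf("reused %d cached pages\n", leftover);

		/* Free everything once, at teardown. */
		while (mc.nobjs)
			free(mc.objects[--mc.nobjs]);

		return 0;
	}

Emptying the cache on every kvm_phys_addr_ioremap() return, as the 'out:'
label below does today, throws those leftover pages away and forces the
next top-up to go back to the allocator.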

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/mmu.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 31058e6e7c2a..9102373a9744 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1484,19 +1484,17 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 					     kvm_mmu_cache_min_pages(kvm),
 					     KVM_NR_MEM_OBJS);
 		if (ret)
-			goto out;
+			break;
 		spin_lock(&kvm->mmu_lock);
 		ret = stage2_set_pte(kvm, &cache, addr, &pte,
 						KVM_S2PTE_FLAG_IS_IOMAP);
 		spin_unlock(&kvm->mmu_lock);
 		if (ret)
-			goto out;
+			break;
 
 		pfn++;
 	}
 
-out:
-	mmu_free_memory_cache(&cache);
 	return ret;
 }
 
-- 
2.28.0.rc0.105.gf9edc3c819-goog


* [PATCH 2/3] KVM: arm64: Simplify mmu_topup_memory_cache()
  2020-07-23 11:02 [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages Will Deacon
  2020-07-23 11:02 ` [PATCH 1/3] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap() Will Deacon
@ 2020-07-23 11:02 ` Will Deacon
  2020-07-23 11:02 ` [PATCH 3/3] KVM: arm64: Remove mmu_free_memory_cache() Will Deacon
  2020-07-27  8:45 ` [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages Marc Zyngier
  3 siblings, 0 replies; 6+ messages in thread
From: Will Deacon @ 2020-07-23 11:02 UTC (permalink / raw)
  To: kvmarm; +Cc: Will Deacon, maz, kernel-team, linux-arm-kernel

All callers of mmu_topup_memory_cache() pass the same min/max limits.
Simplify the code by just passing the 'struct kvm' instead.
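
Concretely (this just restates what the diff below does), the prototype
changes from

	static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
					  int min, int max);

to

	static int mmu_topup_memory_cache(struct kvm *kvm,
					  struct kvm_mmu_memory_cache *cache);

with the minimum taken from kvm_mmu_cache_min_pages(kvm) inside the
function and the maximum fixed at KVM_NR_MEM_OBJS, which also lets the
BUG_ON(max > KVM_NR_MEM_OBJS) sanity check disappear. Both call sites
then become one-liners:

	ret = mmu_topup_memory_cache(kvm, memcache);	/* user_mem_abort() */
	ret = mmu_topup_memory_cache(kvm, &cache);	/* kvm_phys_addr_ioremap() */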

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/mmu.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9102373a9744..e55a28178164 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -124,20 +124,22 @@ static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t *pudp)
 	put_page(virt_to_page(pudp));
 }
 
-static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
-				  int min, int max)
+static int mmu_topup_memory_cache(struct kvm *kvm,
+				  struct kvm_mmu_memory_cache *cache)
 {
 	void *page;
 
-	BUG_ON(max > KVM_NR_MEM_OBJS);
-	if (cache->nobjs >= min)
+	if (cache->nobjs >= kvm_mmu_cache_min_pages(kvm))
 		return 0;
-	while (cache->nobjs < max) {
+
+	while (cache->nobjs < KVM_NR_MEM_OBJS) {
 		page = (void *)__get_free_page(GFP_PGTABLE_USER);
 		if (!page)
 			return -ENOMEM;
+
 		cache->objects[cache->nobjs++] = page;
 	}
+
 	return 0;
 }
 
@@ -1480,9 +1482,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 		if (writable)
 			pte = kvm_s2pte_mkwrite(pte);
 
-		ret = mmu_topup_memory_cache(&cache,
-					     kvm_mmu_cache_min_pages(kvm),
-					     KVM_NR_MEM_OBJS);
+		ret = mmu_topup_memory_cache(kvm, &cache);
 		if (ret)
 			break;
 		spin_lock(&kvm->mmu_lock);
@@ -1880,8 +1880,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmap_read_unlock(current->mm);
 
 	/* We need minimum second+third level pages */
-	ret = mmu_topup_memory_cache(memcache, kvm_mmu_cache_min_pages(kvm),
-				     KVM_NR_MEM_OBJS);
+	ret = mmu_topup_memory_cache(kvm, memcache);
 	if (ret)
 		return ret;
 
-- 
2.28.0.rc0.105.gf9edc3c819-goog


* [PATCH 3/3] KVM: arm64: Remove mmu_free_memory_cache()
  2020-07-23 11:02 [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages Will Deacon
  2020-07-23 11:02 ` [PATCH 1/3] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap() Will Deacon
  2020-07-23 11:02 ` [PATCH 2/3] KVM: arm64: Simplify mmu_topup_memory_cache() Will Deacon
@ 2020-07-23 11:02 ` Will Deacon
  2020-07-27  8:45 ` [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages Marc Zyngier
  3 siblings, 0 replies; 6+ messages in thread
From: Will Deacon @ 2020-07-23 11:02 UTC (permalink / raw)
  To: kvmarm; +Cc: Will Deacon, maz, kernel-team, linux-arm-kernel

mmu_free_memory_cache() is only called by kvm_mmu_free_memory_caches(),
so inline the implementation and get rid of the extra function.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/mmu.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e55a28178164..df2a8025ec8a 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -143,8 +143,10 @@ static int mmu_topup_memory_cache(struct kvm *kvm,
 	return 0;
 }
 
-static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
+void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu)
 {
+	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_page_cache;
+
 	while (mc->nobjs)
 		free_page((unsigned long)mc->objects[--mc->nobjs]);
 }
@@ -2302,11 +2304,6 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
 				 kvm_test_age_hva_handler, NULL);
 }
 
-void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu)
-{
-	mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
-}
-
 phys_addr_t kvm_mmu_get_httbr(void)
 {
 	if (__kvm_cpu_uses_extended_idmap())
-- 
2.28.0.rc0.105.gf9edc3c819-goog


* Re: [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages
  2020-07-23 11:02 [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages Will Deacon
                   ` (2 preceding siblings ...)
  2020-07-23 11:02 ` [PATCH 3/3] KVM: arm64: Remove mmu_free_memory_cache() Will Deacon
@ 2020-07-27  8:45 ` Marc Zyngier
  2020-07-27 10:38   ` Will Deacon
  3 siblings, 1 reply; 6+ messages in thread
From: Marc Zyngier @ 2020-07-27  8:45 UTC (permalink / raw)
  To: Will Deacon; +Cc: kernel-team, kvmarm, linux-arm-kernel

Hi Will,

On 2020-07-23 12:02, Will Deacon wrote:
> Hi all,
> 
> Here are some small cleanups I made to the memcache logic while hacking
> on the page-table code. The ioremap() behaviour looks like a bug to me,
> although it's just a performance thing so nothing urgent.
> 
> Cheers,
> 
> Will
> 
> --->8
> 
> Will Deacon (3):
>   KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap()
>   KVM: arm64: Simplify mmu_topup_memory_cache()
>   KVM: arm64: Remove mmu_free_memory_cache()
> 
>  arch/arm64/kvm/mmu.c | 34 ++++++++++++++--------------------
>  1 file changed, 14 insertions(+), 20 deletions(-)

Although I'm OK with this series, it conflicts with the changes
Sean did on the MMU memory cache in the core code, which also
affects arm64.

I guess I'll queue patches 1 and 3 as fixes post -rc1. Patch 2 doesn't
seem to make sense anymore in that context.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...

* Re: [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages
  2020-07-27  8:45 ` [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages Marc Zyngier
@ 2020-07-27 10:38   ` Will Deacon
  0 siblings, 0 replies; 6+ messages in thread
From: Will Deacon @ 2020-07-27 10:38 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kernel-team, kvmarm, linux-arm-kernel

On Mon, Jul 27, 2020 at 09:45:39AM +0100, Marc Zyngier wrote:
> On 2020-07-23 12:02, Will Deacon wrote:
> > Here are some small cleanups I made to the memcache logic while hacking
> > on the page-table code. The ioremap() behaviour looks like a bug to me,
> > although it's just a performance thing so nothing urgent.
> > 
> > Cheers,
> > 
> > Will
> > 
> > --->8
> > 
> > Will Deacon (3):
> >   KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap()
> >   KVM: arm64: Simplify mmu_topup_memory_cache()
> >   KVM: arm64: Remove mmu_free_memory_cache()
> > 
> >  arch/arm64/kvm/mmu.c | 34 ++++++++++++++--------------------
> >  1 file changed, 14 insertions(+), 20 deletions(-)
> 
> Although I'm OK with this series, it conflicts with the changes
> Sean did on the MMU memory cache in the core code, which also
> affects arm64.
> 
> I guess I'll queue patch 1 and 3 as fixes post -rc1. Patch 2 doesn't
> seem to make sense anymore in that context.

Cheers, that sounds good to me. None of this is urgent.

Will
