kvmarm.lists.cs.columbia.edu archive mirror
From: Will Deacon <will@kernel.org>
To: kvmarm@lists.cs.columbia.edu
Cc: Will Deacon <will@kernel.org>,
	maz@kernel.org, kernel-team@android.com,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/3] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap()
Date: Thu, 23 Jul 2020 12:02:25 +0100
Message-ID: <20200723110227.16001-2-will@kernel.org>
In-Reply-To: <20200723110227.16001-1-will@kernel.org>

kvm_phys_addr_ioremap() unconditionally empties out the memcache pages
for the current vCPU on return. This causes subsequent topups to allocate
fresh pages and is at odds with the behaviour when mapping memory in
user_mem_abort().

Remove the call to mmu_free_memory_cache() from kvm_phys_addr_ioremap(),
allowing the cached pages to be used by a later mapping.
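
For illustration, here is a minimal userspace C sketch of the allocation
pattern in question. The struct, the sketch_*() helpers and the use of
calloc()/printf() are simplified stand-ins invented for this example, not
the actual arm64 KVM code: top up a small cache of pages before taking the
lock, consume pages from the cache while installing the mapping, and keep
any surplus cached for the next mapping rather than freeing it on every
return.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define SKETCH_PAGE_SIZE	4096
#define SKETCH_CACHE_CAPACITY	16

/* Simplified stand-in for a per-vCPU page-table page cache. */
struct sketch_memcache {
	int nobjs;
	void *objects[SKETCH_CACHE_CAPACITY];
};

/* Refill the cache up to 'min' pages; done outside any lock. */
static int sketch_topup(struct sketch_memcache *mc, int min)
{
	while (mc->nobjs < min) {
		void *page = calloc(1, SKETCH_PAGE_SIZE);

		if (!page)
			return -ENOMEM;
		mc->objects[mc->nobjs++] = page;
	}
	return 0;
}

/* Hand out a cached page; the caller must have topped up beforehand. */
static void *sketch_alloc(struct sketch_memcache *mc)
{
	return mc->objects[--mc->nobjs];
}

/* Map one page of IPA space, consuming pages from the cache. */
static int sketch_map_one(struct sketch_memcache *mc, unsigned long ipa)
{
	int ret;

	ret = sketch_topup(mc, 2);	/* before taking the (elided) lock */
	if (ret)
		return ret;

	/* Pretend this page is installed as a stage-2 table for 'ipa'. */
	void *page = sketch_alloc(mc);
	printf("ipa %#lx: used page %p, %d left cached\n",
	       ipa, page, mc->nobjs);
	free(page);	/* freed only to keep the userspace sketch leak-free */

	/*
	 * Do NOT drain the cache here: keeping the surplus lets the next
	 * mapping reuse it instead of allocating fresh pages.
	 */
	return 0;
}

int main(void)
{
	struct sketch_memcache mc = { 0 };

	/* The second call can be satisfied from the leftover cached page. */
	sketch_map_one(&mc, 0x80000000UL);
	sketch_map_one(&mc, 0x80001000UL);

	/* In the sketch, cached pages are freed once, at teardown. */
	while (mc.nobjs)
		free(mc.objects[--mc.nobjs]);
	return 0;
}

The behavioural point is only the last one: with this patch,
kvm_phys_addr_ioremap() no longer throws away its leftover cached pages
after every call, matching how user_mem_abort() treats the cache when
mapping memory.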

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/mmu.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 31058e6e7c2a..9102373a9744 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1484,19 +1484,17 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 					     kvm_mmu_cache_min_pages(kvm),
 					     KVM_NR_MEM_OBJS);
 		if (ret)
-			goto out;
+			break;
 		spin_lock(&kvm->mmu_lock);
 		ret = stage2_set_pte(kvm, &cache, addr, &pte,
 						KVM_S2PTE_FLAG_IS_IOMAP);
 		spin_unlock(&kvm->mmu_lock);
 		if (ret)
-			goto out;
+			break;
 
 		pfn++;
 	}
 
-out:
-	mmu_free_memory_cache(&cache);
 	return ret;
 }
 
-- 
2.28.0.rc0.105.gf9edc3c819-goog

Thread overview: 6+ messages
2020-07-23 11:02 [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages Will Deacon
2020-07-23 11:02 ` [PATCH 1/3] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap() Will Deacon [this message]
2020-07-23 11:02 ` [PATCH 2/3] KVM: arm64: Simplify mmu_topup_memory_cache() Will Deacon
2020-07-23 11:02 ` [PATCH 3/3] KVM: arm64: Remove mmu_free_memory_cache() Will Deacon
2020-07-27  8:45 ` [PATCH 0/3] KVM: arm64: Clean up memcache usage for page-table pages Marc Zyngier
2020-07-27 10:38   ` Will Deacon
