From: Paul Durrant <paul@xen.org>
To: David Woodhouse <dwmw2@infradead.org>,
	Paul Durrant <paul@xen.org>,
	Sean Christopherson <seanjc@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v8 07/15] KVM: pfncache: include page offset in uhva and use it consistently
Date: Tue, 21 Nov 2023 18:02:15 +0000
Message-ID: <20231121180223.12484-8-paul@xen.org>
In-Reply-To: <20231121180223.12484-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Currently the pfncache page offset is sometimes determined using the gpa
and sometimes the khva, whilst the uhva is always page-aligned. After a
subsequent patch is applied, the gpa will not always be valid, so adjust
the code to include the page offset in the uhva and use it consistently
as the source of truth.

Also, where a page-aligned address is required, use PAGE_ALIGN_DOWN()
for clarity.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>

v8:
 - New in this version.
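
For illustration only (not part of the patch): a minimal sketch of the
before/after relationship between the cached addresses, using the field
and helper names that appear in the diff below.

	/* Before: offset taken from the gpa; uhva stays page-aligned. */
	gpc->khva = new_khva + offset_in_page(gpc->gpa);

	/*
	 * After: uhva carries the page offset and is the single source
	 * of truth, so the kernel mapping is derived from it instead.
	 */
	gpc->uhva = PAGE_ALIGN_DOWN(gpc->uhva) + offset_in_page(gpa);
	gpc->khva = new_khva + offset_in_page(gpc->uhva);

	/*
	 * Page-aligned addresses are now computed with PAGE_ALIGN_DOWN()
	 * rather than by subtracting offset_in_page().
	 */
	old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);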
---
 virt/kvm/pfncache.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 0eeb034d0674..c545f6246501 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -48,10 +48,10 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len)
 	if (!gpc->active)
 		return false;
 
-	if (offset_in_page(gpc->gpa) + len > PAGE_SIZE)
+	if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva))
 		return false;
 
-	if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva))
+	if (offset_in_page(gpc->uhva) + len > PAGE_SIZE)
 		return false;
 
 	if (!gpc->valid)
@@ -119,7 +119,7 @@ static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_s
 static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 {
 	/* Note, the new page offset may be different than the old! */
-	void *old_khva = gpc->khva - offset_in_page(gpc->khva);
+	void *old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);
 	kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
 	void *new_khva = NULL;
 	unsigned long mmu_seq;
@@ -192,7 +192,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 
 	gpc->valid = true;
 	gpc->pfn = new_pfn;
-	gpc->khva = new_khva + offset_in_page(gpc->gpa);
+	gpc->khva = new_khva + offset_in_page(gpc->uhva);
 
 	/*
 	 * Put the reference to the _new_ pfn.  The pfn is now tracked by the
@@ -215,8 +215,8 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 	struct kvm_memslots *slots = kvm_memslots(gpc->kvm);
 	unsigned long page_offset = offset_in_page(gpa);
 	bool unmap_old = false;
-	unsigned long old_uhva;
 	kvm_pfn_t old_pfn;
+	bool hva_change = false;
 	void *old_khva;
 	int ret;
 
@@ -242,8 +242,7 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 	}
 
 	old_pfn = gpc->pfn;
-	old_khva = gpc->khva - offset_in_page(gpc->khva);
-	old_uhva = gpc->uhva;
+	old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);
 
 	/* If the userspace HVA is invalid, refresh that first */
 	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
@@ -259,13 +258,25 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 			ret = -EFAULT;
 			goto out;
 		}
+
+		hva_change = true;
+	} else {
+		/*
+		 * No need to do any re-mapping if the only thing that has
+		 * changed is the page offset. Just page align it to allow the
+		 * new offset to be added in.
+		 */
+		gpc->uhva = PAGE_ALIGN_DOWN(gpc->uhva);
 	}
 
+	/* Note: the offset must be correct before calling hva_to_pfn_retry() */
+	gpc->uhva += page_offset;
+
 	/*
 	 * If the userspace HVA changed or the PFN was already invalid,
 	 * drop the lock and do the HVA to PFN lookup again.
 	 */
-	if (!gpc->valid || old_uhva != gpc->uhva) {
+	if (!gpc->valid || hva_change) {
 		ret = hva_to_pfn_retry(gpc);
 	} else {
 		/*
-- 
2.39.2


Thread overview: 36+ messages
2023-11-21 18:02 [PATCH v8 00/15] KVM: xen: update shared_info and vcpu_info handling Paul Durrant
2023-11-21 18:02 ` [PATCH v8 01/15] KVM: pfncache: Add a map helper function Paul Durrant
2023-11-21 18:02 ` [PATCH v8 02/15] KVM: pfncache: remove unnecessary exports Paul Durrant
2023-11-21 21:49   ` David Woodhouse
2023-11-22  8:44     ` Paul Durrant
2023-11-21 18:02 ` [PATCH v8 03/15] KVM: xen: mark guest pages dirty with the pfncache lock held Paul Durrant
2023-11-21 21:49   ` David Woodhouse
2023-11-21 18:02 ` [PATCH v8 04/15] KVM: pfncache: add a mark-dirty helper Paul Durrant
2023-11-21 18:02 ` [PATCH v8 05/15] KVM: pfncache: remove KVM_GUEST_USES_PFN usage Paul Durrant
2023-11-21 22:24   ` David Woodhouse
2023-11-27 23:36     ` Sean Christopherson
2023-11-21 18:02 ` [PATCH v8 06/15] KVM: pfncache: stop open-coding offset_in_page() Paul Durrant
2023-11-21 22:26   ` David Woodhouse
2023-11-21 18:02 ` Paul Durrant [this message]
2023-11-21 22:35   ` [PATCH v8 07/15] KVM: pfncache: include page offset in uhva and use it consistently David Woodhouse
2023-11-22  9:29     ` Paul Durrant
2023-11-22  8:54   ` Xu Yilun
2023-11-22  9:12     ` David Woodhouse
2023-11-22 14:27       ` Xu Yilun
2023-11-22 15:42         ` David Woodhouse
2023-11-22 15:52           ` Paul Durrant
2023-11-21 18:02 ` [PATCH v8 08/15] KVM: pfncache: allow a cache to be activated with a fixed (userspace) HVA Paul Durrant
2023-11-21 22:47   ` David Woodhouse
2023-11-22 10:07     ` Paul Durrant
2023-11-21 18:02 ` [PATCH v8 09/15] KVM: xen: allow shared_info to be mapped by fixed HVA Paul Durrant
2023-11-21 18:02 ` [PATCH v8 10/15] KVM: xen: allow vcpu_info " Paul Durrant
2023-11-21 18:02 ` [PATCH v8 11/15] KVM: selftests / xen: map shared_info using HVA rather than GFN Paul Durrant
2023-11-21 18:02 ` [PATCH v8 12/15] KVM: selftests / xen: re-map vcpu_info using HVA rather than GPA Paul Durrant
2023-11-21 18:02 ` [PATCH v8 13/15] KVM: xen: advertize the KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA capability Paul Durrant
2023-11-21 18:02 ` [PATCH v8 14/15] KVM: xen: split up kvm_xen_set_evtchn_fast() Paul Durrant
2023-11-21 22:49   ` David Woodhouse
2023-11-21 18:02 ` [PATCH v8 15/15] KVM: xen: allow vcpu_info content to be 'safely' copied Paul Durrant
2023-11-21 22:53   ` David Woodhouse
2023-11-22 10:39     ` David Woodhouse
2023-11-22 10:55       ` Paul Durrant
2023-11-22 11:25         ` David Woodhouse
