From: Sean Christopherson <seanjc@google.com>
To: David Woodhouse <dwmw2@infradead.org>
Cc: kvm@vger.kernel.org, Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	Joao Martins <joao.m.martins@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Metin Kaya <metikaya@amazon.co.uk>,
	Paul Durrant <pdurrant@amazon.co.uk>
Subject: Re: [PATCH v0 02/15] KVM: x86/xen: Use gfn_to_pfn_cache for runstate area
Date: Fri, 18 Feb 2022 19:57:10 +0000
Message-ID: <Yg/6FoL9dH1/lTOS@google.com>
In-Reply-To: <20220210002721.273608-3-dwmw2@infradead.org>

On Thu, Feb 10, 2022, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>

RFC, or lazy? :-D

> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
> -		r = kvm_gfn_to_hva_cache_init(vcpu->kvm,
> +		r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
>  					      &vcpu->arch.xen.runstate_cache,
> -					      data->u.gpa,
> -					      sizeof(struct vcpu_runstate_info));
> -		if (!r) {
> -			vcpu->arch.xen.runstate_set = true;
> -		}
> +					      NULL, false, true, data->u.gpa,
> +					      sizeof(struct vcpu_runstate_info),
> +					      false);

Now that I'm all too aware of the subtle importance of @guest_uses_pa and
@kernel_map, before the pfn cache code gains more users, can you slot this in at
the beginning of the series?  It'd be quite easy to mess up and pass "false, false"
and completely mishandle cases where the PFN is mapped into the guest.
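
For example (a made-up caller, purely to illustrate the point), with the booleans
nothing at the call site says what "false, false" actually means:

	ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, vcpu, false, false,
					gpa, PAGE_SIZE, false);

whereas with the enum below the intent is explicit and easy to audit:

	ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, vcpu, KVM_GUEST_AND_HOST_USE_PFN,
					gpa, PAGE_SIZE, false);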


From: Sean Christopherson <seanjc@google.com>
Date: Fri, 18 Feb 2022 11:45:47 -0800
Subject: [PATCH] KVM: Use enum to track if cached PFN will be used in guest
 and/or host

Replace the guest_uses_pa and kernel_map booleans in the PFN cache code
with a unified enum/bitmask.  Using explicit names makes it easier to
review and audit call sites.

Opportunistically add a WARN to prevent passing garbage; instantiating a
cache without declaring its usage is either buggy or pointless.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/xen.c        |  2 +-
 include/linux/kvm_host.h  | 11 +++++------
 include/linux/kvm_types.h | 10 ++++++++--
 virt/kvm/pfncache.c       | 14 +++++++-------
 4 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 4aa0f2b31665..5be1c9227105 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -39,7 +39,7 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
 	}

 	do {
-		ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, false, true,
+		ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, KVM_HOST_USES_PFN,
 						gpa, PAGE_SIZE, false);
 		if (ret)
 			goto out;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f11039944c08..d044e328046a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1222,9 +1222,9 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
  * @gpc:	   struct gfn_to_pfn_cache object.
  * @vcpu:	   vCPU to be used for marking pages dirty and to be woken on
  *		   invalidation.
- * @guest_uses_pa: indicates that the resulting host physical PFN is used while
- *		   @vcpu is IN_GUEST_MODE so invalidations should wake it.
- * @kernel_map:    requests a kernel virtual mapping (kmap / memremap).
+ * @usage:	   indicates if the resulting host physical PFN is used while
+ *		   the @vcpu is IN_GUEST_MODE and/or if the PFN is used directly
+ *		   by KVM (and thus needs a kernel virtual mapping).
  * @gpa:	   guest physical address to map.
  * @len:	   sanity check; the range being access must fit a single page.
  * @dirty:         mark the cache dirty immediately.
@@ -1240,9 +1240,8 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
  * to ensure that the cache is valid before accessing the target page.
  */
 int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-			      struct kvm_vcpu *vcpu, bool guest_uses_pa,
-			      bool kernel_map, gpa_t gpa, unsigned long len,
-			      bool dirty);
+			      struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+			      gpa_t gpa, unsigned long len, bool dirty);

 /**
  * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache.
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index dceac12c1ce5..784f37cbf33e 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -18,6 +18,7 @@ struct kvm_memslots;

 enum kvm_mr_change;

+#include <linux/bits.h>
 #include <linux/types.h>
 #include <linux/spinlock_types.h>

@@ -46,6 +47,12 @@ typedef u64            hfn_t;

 typedef hfn_t kvm_pfn_t;

+enum pfn_cache_usage {
+	KVM_GUEST_USES_PFN = BIT(0),
+	KVM_HOST_USES_PFN  = BIT(1),
+	KVM_GUEST_AND_HOST_USE_PFN = KVM_GUEST_USES_PFN | KVM_HOST_USES_PFN,
+};
+
 struct gfn_to_hva_cache {
 	u64 generation;
 	gpa_t gpa;
@@ -64,11 +71,10 @@ struct gfn_to_pfn_cache {
 	rwlock_t lock;
 	void *khva;
 	kvm_pfn_t pfn;
+	enum pfn_cache_usage usage;
 	bool active;
 	bool valid;
 	bool dirty;
-	bool kernel_map;
-	bool guest_uses_pa;
 };

 #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index ce878f4be4da..9b3a192cb18c 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -42,7 +42,7 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
 			 * If a guest vCPU could be using the physical address,
 			 * it needs to be woken.
 			 */
-			if (gpc->guest_uses_pa) {
+			if (gpc->usage & KVM_GUEST_USES_PFN) {
 				if (!wake_vcpus) {
 					wake_vcpus = true;
 					bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS);
@@ -219,7 +219,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 			goto map_done;
 		}

-		if (gpc->kernel_map) {
+		if (gpc->usage & KVM_HOST_USES_PFN) {
 			if (new_pfn == old_pfn) {
 				new_khva = old_khva;
 				old_pfn = KVM_PFN_ERR_FAULT;
@@ -299,10 +299,11 @@ EXPORT_SYMBOL_GPL(kvm_gfn_to_pfn_cache_unmap);


 int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-			      struct kvm_vcpu *vcpu, bool guest_uses_pa,
-			      bool kernel_map, gpa_t gpa, unsigned long len,
-			      bool dirty)
+			      struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+			      gpa_t gpa, unsigned long len, bool dirty)
 {
+	WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage);
+
 	if (!gpc->active) {
 		rwlock_init(&gpc->lock);

@@ -310,8 +311,7 @@ int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
 		gpc->pfn = KVM_PFN_ERR_FAULT;
 		gpc->uhva = KVM_HVA_ERR_BAD;
 		gpc->vcpu = vcpu;
-		gpc->kernel_map = kernel_map;
-		gpc->guest_uses_pa = guest_uses_pa;
+		gpc->usage = usage;
 		gpc->valid = false;
 		gpc->active = true;


base-commit: f6ae04ddb347f526b4620d1053690ecf1f87d77f
--
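
To make the new WARN_ON_ONCE() concrete, a couple of made-up callers (not from
this series or the existing code) that it would flag:

	/* Fires: no usage declared at all. */
	kvm_gfn_to_pfn_cache_init(kvm, gpc, vcpu, 0, gpa, PAGE_SIZE, false);

	/* Fires: bit outside KVM_GUEST_AND_HOST_USE_PFN. */
	kvm_gfn_to_pfn_cache_init(kvm, gpc, vcpu, BIT(2), gpa, PAGE_SIZE, false);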


Thread overview: 18+ messages
2022-02-10  0:27 [PATCH v0 00/15] KVM: Add Xen event channel acceleration David Woodhouse
2022-02-10  0:27 ` [PATCH v0 01/15] KVM: x86/xen: Fix runstate updates to be atomic when preempting vCPU David Woodhouse
2022-02-10  0:27 ` [PATCH v0 02/15] KVM: x86/xen: Use gfn_to_pfn_cache for runstate area David Woodhouse
2022-02-18 19:57   ` Sean Christopherson [this message]
2022-02-18 21:45     ` David Woodhouse
2022-02-10  0:27 ` [PATCH v0 03/15] KVM: x86: Use gfn_to_pfn_cache for pv_time David Woodhouse
2022-02-10  0:27 ` [PATCH v0 04/15] KVM: x86/xen: Use gfn_to_pfn_cache for vcpu_info David Woodhouse
2022-02-10  0:27 ` [PATCH v0 05/15] KVM: x86/xen: Use gfn_to_pfn_cache for vcpu_time_info David Woodhouse
2022-02-10  0:27 ` [PATCH v0 06/15] KVM: x86/xen: Make kvm_xen_set_evtchn() reusable from other places David Woodhouse
2022-02-10  0:27 ` [PATCH v0 07/15] KVM: x86/xen: Support direct injection of event channel events David Woodhouse
2022-02-10  0:27 ` [PATCH v0 08/15] KVM: x86/xen: intercept EVTCHNOP_send from guests David Woodhouse
2022-02-10  0:27 ` [PATCH v0 09/15] KVM: x86/xen: handle PV IPI vcpu yield David Woodhouse
2022-02-10  0:27 ` [PATCH v0 10/15] KVM: x86/xen: Add KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID David Woodhouse
2022-02-10  0:27 ` [PATCH v0 11/15] KVM: x86/xen: handle PV timers oneshot mode David Woodhouse
2022-02-10  0:27 ` [PATCH v0 12/15] KVM: x86/xen: Kernel acceleration for XENVER_version David Woodhouse
2022-02-10  0:27 ` [PATCH v0 13/15] KVM: x86/xen: Support per-vCPU event channel upcall via local APIC David Woodhouse
2022-02-10  0:27 ` [PATCH v0 14/15] KVM: x86/xen: Advertise and document KVM_XEN_HVM_CONFIG_EVTCHN_SEND David Woodhouse
2022-02-10  0:27 ` [PATCH v0 15/15] KVM: x86/xen: handle PV spinlocks slowpath David Woodhouse
