* [PATCH v1 0/2] KVM: x86/xen: Runstate cleanups on top of kvm/queue
From: David Woodhouse @ 2022-11-27 12:22 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: Paul Durrant, Michal Luczaj, kvm

Clean the update code up a little bit by unifying the fast and slow 
paths as discussed, and make the update flag conditional to avoid 
confusing older guests that don't ask for it.

On top of kvm/queue as of today at commit da5f28e10aa7d.

(This is identical to what I sent a couple of minutes ago, except that 
this time I didn't forget to Cc the list)




* [PATCH v1 1/2] KVM: x86/xen: Reconcile fast and slow paths for runstate update
From: David Woodhouse @ 2022-11-27 12:22 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: Paul Durrant, Michal Luczaj, kvm

From: David Woodhouse <dwmw@amazon.co.uk>

Instead of having a completely separate fast path for the case where the
guest's runstate_info fits entirely in a page, recognise the similarity.

In both cases, the @update_bit pointer points to the byte containing the
XEN_RUNSTATE_UPDATE flag directly in the guest, via one of the GPCs.

In both cases, the actual guest structure (compat or not) is built up
from the fields in @vx, following preset pointers to the state and times
fields. The only difference is whether those pointers point to the kernel
stack (in the split case) or to guest memory directly via the GPC.

We can unify it by just setting the rs_state and rs_times pointers up
accordingly for each case. Then the only real difference is the dual
memcpy, which can be made conditional, so the whole of that separate
fast path can go away, disappearing into the slow path without actually
doing the slow part.
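
To illustrate the shape of the result, here is a self-contained
userspace sketch with invented names (the real code is in the diff
below; only the pointer setup and the conditional copy matter, and the
compat-offset trickery from the real patch is omitted):

	#include <stddef.h>
	#include <stdint.h>
	#include <string.h>

	struct rs {			/* stand-in for vcpu_runstate_info */
		int state;
		uint64_t state_entry_time;
		uint64_t time[4];
	};

	static void update(void *map1, size_t len1, void *map2, size_t len2)
	{
		struct rs stack_rs;
		uint64_t *rs_times;
		int *rs_state;

		if (!len2) {
			/* Contiguous: point straight into the mapping, so
			 * the writes below land in guest memory directly. */
			rs_state = map1;
			rs_times = (uint64_t *)((char *)map1 +
				offsetof(struct rs, state_entry_time));
		} else {
			/* Split across mappings: build on the stack and
			 * copy out at the end. */
			rs_state = &stack_rs.state;
			rs_times = &stack_rs.state_entry_time;
		}

		*rs_state = 1;			/* current runstate */
		rs_times[0] = 0x1234;		/* state_entry_time */
		memset(rs_times + 1, 0, 4 * sizeof(uint64_t));

		/* Only the split case still needs the dual memcpy. */
		if (len2) {
			memcpy(map1, rs_state, len1);
			memcpy(map2, (char *)rs_state + len1, len2);
		}
	}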

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/kvm/xen.c | 253 +++++++++++++++++++++++----------------------
 1 file changed, 127 insertions(+), 126 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index b246decb53a9..d1a506986a32 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -177,44 +177,78 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 	struct gfn_to_pfn_cache *gpc2 = &vx->runstate2_cache;
 	size_t user_len, user_len1, user_len2;
 	struct vcpu_runstate_info rs;
-	int *rs_state = &rs.state;
 	unsigned long flags;
 	size_t times_ofs;
-	u8 *update_bit;
+	uint8_t *update_bit;
+	uint64_t *rs_times;
+	int *rs_state;
 
 	/*
 	 * The only difference between 32-bit and 64-bit versions of the
 	 * runstate struct is the alignment of uint64_t in 32-bit, which
 	 * means that the 64-bit version has an additional 4 bytes of
-	 * padding after the first field 'state'.
+	 * padding after the first field 'state'. Let's be really really
+	 * paranoid about that, and matching it with our internal data
+	 * structures that we memcpy into it...
 	 */
 	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state) != 0);
 	BUILD_BUG_ON(offsetof(struct compat_vcpu_runstate_info, state) != 0);
 	BUILD_BUG_ON(sizeof(struct compat_vcpu_runstate_info) != 0x2c);
 #ifdef CONFIG_X86_64
+	/*
+	 * The 64-bit structure has 4 bytes of padding before 'state_entry_time'
+	 * so each subsequent field is shifted by 4, and it's 4 bytes longer.
+	 */
 	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state_entry_time) !=
 		     offsetof(struct compat_vcpu_runstate_info, state_entry_time) + 4);
 	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, time) !=
 		     offsetof(struct compat_vcpu_runstate_info, time) + 4);
+	BUILD_BUG_ON(sizeof(struct vcpu_runstate_info) != 0x2c + 4);
 #endif
+	/*
+	 * The state field is in the same place at the start of both structs,
+	 * and is the same size (int) as vx->current_runstate.
+	 */
+	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state) !=
+		     offsetof(struct compat_vcpu_runstate_info, state));
+	BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, state) !=
+		     sizeof(vx->current_runstate));
+	BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state) !=
+		     sizeof(vx->current_runstate));
+
+	/*
+	 * The state_entry_time field is 64 bits in both versions, and the
+	 * XEN_RUNSTATE_UPDATE flag is in the top bit, which given that x86
+	 * is little-endian means that it's in the last *byte* of the word.
+	 * That detail is important later.
+	 */
+	BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, state_entry_time) !=
+		     sizeof(uint64_t));
+	BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state_entry_time) !=
+		     sizeof(uint64_t));
+	BUILD_BUG_ON((XEN_RUNSTATE_UPDATE >> 56) != 0x80);
+
+	/*
+	 * The time array is four 64-bit quantities in both versions, matching
+	 * the vx->runstate_times and immediately following state_entry_time.
+	 */
+	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state_entry_time) !=
+		     offsetof(struct vcpu_runstate_info, time) - sizeof(uint64_t));
+	BUILD_BUG_ON(offsetof(struct compat_vcpu_runstate_info, state_entry_time) !=
+		     offsetof(struct compat_vcpu_runstate_info, time) - sizeof(uint64_t));
+	BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, time) !=
+		     sizeof_field(struct compat_vcpu_runstate_info, time));
+	BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, time) !=
+		     sizeof(vx->runstate_times));
 
 	if (IS_ENABLED(CONFIG_64BIT) && v->kvm->arch.xen.long_mode) {
 		user_len = sizeof(struct vcpu_runstate_info);
 		times_ofs = offsetof(struct vcpu_runstate_info,
 				     state_entry_time);
-#ifdef CONFIG_X86_64
-		/*
-		 * Don't leak kernel memory through the padding in the 64-bit
-		 * struct if we take the split-page path.
-		 */
-		memset(&rs, 0, offsetof(struct vcpu_runstate_info, state_entry_time));
-#endif
 	} else {
 		user_len = sizeof(struct compat_vcpu_runstate_info);
 		times_ofs = offsetof(struct compat_vcpu_runstate_info,
 				     state_entry_time);
-		/* Start of struct for compat mode (qv). */
-		rs_state++;
 	}
 
 	/*
@@ -251,156 +285,123 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 		read_lock_irqsave(&gpc1->lock, flags);
 	}
 
-	/*
-	 * The common case is that it all fits on a page and we can
-	 * just do it the simple way.
-	 */
 	if (likely(!user_len2)) {
 		/*
-		 * We use 'int *user_state' to point to the state field, and
-		 * 'u64 *user_times' for runstate_entry_time. So the actual
-		 * array of time[] in each state starts at user_times[1].
+		 * Set up three pointers directly to the runstate_info
+		 * struct in the guest (via the GPC).
+		 *
+		 *  • @rs_state   → state field
+		 *  • @rs_times   → state_entry_time field.
+		 *  • @update_bit → last byte of state_entry_time, which
+		 *                  contains the XEN_RUNSTATE_UPDATE bit.
 		 */
-		int *user_state = gpc1->khva;
-		u64 *user_times = gpc1->khva + times_ofs;
-
+		rs_state = gpc1->khva;
+		rs_times = gpc1->khva + times_ofs;
+		update_bit = ((void *)(&rs_times[1])) - 1;
+	} else {
 		/*
-		 * The XEN_RUNSTATE_UPDATE bit is the top bit of the state_entry_time
-		 * field. We need to set it (and write-barrier) before writing to the
-		 * the rest of the structure, and clear it last. Just as Xen does, we
-		 * address the single *byte* in which it resides because it might be
-		 * in a different cache line to the rest of the 64-bit word, due to
-		 * the (lack of) alignment constraints.
+		 * The guest's runstate_info is split across two pages and we
+		 * need to hold and validate both GPCs simultaneously. We can
+		 * declare a lock ordering GPC1 > GPC2 because nothing else
+		 * takes them more than one at a time.
 		 */
-		BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, state_entry_time) !=
-			     sizeof(uint64_t));
-		BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state_entry_time) !=
-			     sizeof(uint64_t));
-		BUILD_BUG_ON((XEN_RUNSTATE_UPDATE >> 56) != 0x80);
+		read_lock(&gpc2->lock);
 
-		update_bit = ((u8 *)(&user_times[1])) - 1;
-		*update_bit = (vx->runstate_entry_time | XEN_RUNSTATE_UPDATE) >> 56;
-		smp_wmb();
+		if (!kvm_gpc_check(v->kvm, gpc2, gpc2->gpa, user_len2)) {
+			read_unlock(&gpc2->lock);
+			read_unlock_irqrestore(&gpc1->lock, flags);
 
-		/*
-		 * Next, write the new runstate. This is in the *same* place
-		 * for 32-bit and 64-bit guests, asserted here for paranoia.
-		 */
-		BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state) !=
-			     offsetof(struct compat_vcpu_runstate_info, state));
-		BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, state) !=
-			     sizeof(vx->current_runstate));
-		BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state) !=
-			     sizeof(vx->current_runstate));
-		*user_state = vx->current_runstate;
+			/* When invoked from kvm_sched_out() we cannot sleep */
+			if (atomic)
+				return;
 
-		/*
-		 * Then the actual runstate_entry_time (with the UPDATE bit
-		 * still set).
-		 */
-		*user_times = vx->runstate_entry_time | XEN_RUNSTATE_UPDATE;
+			/*
+			 * Use kvm_gpc_activate() here because if the runstate
+			 * area was configured in 32-bit mode and only extends
+			 * to the second page now because the guest changed to
+			 * 64-bit mode, the second GPC won't have been set up.
+			 */
+			if (kvm_gpc_activate(v->kvm, gpc2, NULL, KVM_HOST_USES_PFN,
+					     gpc1->gpa + user_len1, user_len2))
+				return;
+
+			/*
+			 * We dropped the lock on GPC1 so we have to go all the
+			 * way back and revalidate that too.
+			 */
+			goto retry;
+		}
 
 		/*
-		 * Write the actual runstate times immediately after the
-		 * runstate_entry_time.
+		 * In this case, the runstate_info struct will be assembled on
+		 * the kernel stack (compat or not as appropriate) and will
+		 * be copied to GPC1/GPC2 with a dual memcpy. Set up the three
+		 * rs pointers accordingly.
 		 */
-		BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state_entry_time) !=
-			     offsetof(struct vcpu_runstate_info, time) - sizeof(u64));
-		BUILD_BUG_ON(offsetof(struct compat_vcpu_runstate_info, state_entry_time) !=
-			     offsetof(struct compat_vcpu_runstate_info, time) - sizeof(u64));
-		BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, time) !=
-			     sizeof_field(struct compat_vcpu_runstate_info, time));
-		BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, time) !=
-			     sizeof(vx->runstate_times));
-		memcpy(user_times + 1, vx->runstate_times, sizeof(vx->runstate_times));
-
-		smp_wmb();
+		rs_times = &rs.state_entry_time;
 
 		/*
-		 * Finally, clear the 'updating' bit. Don't use &= here because
-		 * the compiler may not realise that update_bit and user_times
-		 * point to the same place. That's a classic pointer-aliasing
-		 * problem.
+		 * The rs_state pointer points to the start of what we'll
+		 * copy to the guest, which in the case of a compat guest
+		 * is the 32-bit field that the compiler thinks is padding.
 		 */
-		*update_bit = vx->runstate_entry_time >> 56;
-		smp_wmb();
-
-		goto done_1;
-	}
-
-	/*
-	 * The painful code path. It's split across two pages and we need to
-	 * hold and validate both GPCs simultaneously. Thankfully we can get
-	 * away with declaring a lock ordering GPC1 > GPC2 because nothing
-	 * else takes them more than one at a time.
-	 */
-	read_lock(&gpc2->lock);
-
-	if (!kvm_gpc_check(v->kvm, gpc2, gpc2->gpa, user_len2)) {
-		read_unlock(&gpc2->lock);
-		read_unlock_irqrestore(&gpc1->lock, flags);
-
-		/* When invoked from kvm_sched_out() we cannot sleep */
-		if (atomic)
-			return;
+		rs_state = ((void *)rs_times) - times_ofs;
 
 		/*
-		 * Use kvm_gpc_activate() here because if the runstate
-		 * area was configured in 32-bit mode and only extends
-		 * to the second page now because the guest changed to
-		 * 64-bit mode, the second GPC won't have been set up.
+		 * The update_bit is still directly in the guest memory,
+		 * via one GPC or the other.
 		 */
-		if (kvm_gpc_activate(v->kvm, gpc2, NULL, KVM_HOST_USES_PFN,
-				     gpc1->gpa + user_len1, user_len2))
-			return;
+		if (user_len1 >= times_ofs + sizeof(uint64_t))
+			update_bit = gpc1->khva + times_ofs +
+				sizeof(uint64_t) - 1;
+		else
+			update_bit = gpc2->khva + times_ofs +
+				sizeof(uint64_t) - 1 - user_len1;
 
+#ifdef CONFIG_X86_64
 		/*
-		 * We dropped the lock on GPC1 so we have to go all the way
-		 * back and revalidate that too.
+		 * Don't leak kernel memory through the padding in the 64-bit
+		 * version of the struct.
 		 */
-		goto retry;
+		memset(&rs, 0, offsetof(struct vcpu_runstate_info, state_entry_time));
+#endif
 	}
 
 	/*
-	 * Work out where the byte containing the XEN_RUNSTATE_UPDATE bit is.
+	 * First, set the XEN_RUNSTATE_UPDATE bit in the top bit of the
+	 * state_entry_time field, directly in the guest. We need to set
+	 * that (and write-barrier) before writing to the rest of the
+	 * structure, and clear it last. Just as Xen does, we address the
+	 * single *byte* in which it resides because it might be in a
+	 * different cache line to the rest of the 64-bit word, due to
+	 * the (lack of) alignment constraints.
 	 */
-	if (user_len1 >= times_ofs + sizeof(uint64_t))
-		update_bit = ((u8 *)gpc1->khva) + times_ofs + sizeof(u64) - 1;
-	else
-		update_bit = ((u8 *)gpc2->khva) + times_ofs + sizeof(u64) - 1 -
-			user_len1;
+	*update_bit = (vx->runstate_entry_time | XEN_RUNSTATE_UPDATE) >> 56;
+	smp_wmb();
 
 	/*
-	 * Create a structure on our stack with everything in the right place.
-	 * The rs_state pointer points to the start of it, which in the case
-	 * of a compat guest on a 64-bit host is the 32 bit field that the
-	 * compiler thinks is padding.
+	 * Now assemble the actual structure, either on our kernel stack
+	 * or directly in the guest according to how the rs_state and
+	 * rs_times pointers were set up above.
 	 */
 	*rs_state = vx->current_runstate;
-	rs.state_entry_time = vx->runstate_entry_time | XEN_RUNSTATE_UPDATE;
-	memcpy(rs.time, vx->runstate_times, sizeof(vx->runstate_times));
-
-	*update_bit = rs.state_entry_time >> 56;
-	smp_wmb();
+	rs_times[0] = vx->runstate_entry_time | XEN_RUNSTATE_UPDATE;
+	memcpy(rs_times + 1, vx->runstate_times, sizeof(vx->runstate_times));
 
-	/*
-	 * Having constructed the structure starting at *rs_state, copy it
-	 * into the first and second pages as appropriate using user_len1
-	 * and user_len2.
-	 */
-	memcpy(gpc1->khva, rs_state, user_len1);
-	memcpy(gpc2->khva, ((u8 *)rs_state) + user_len1, user_len2);
+	/* For the split case, we have to then copy it to the guest. */
+	if (user_len2) {
+		memcpy(gpc1->khva, rs_state, user_len1);
+		memcpy(gpc2->khva, ((void *)rs_state) + user_len1, user_len2);
+	}
 	smp_wmb();
 
-	/*
-	 * Finally, clear the XEN_RUNSTATE_UPDATE bit.
-	 */
+	/* Finally, clear the XEN_RUNSTATE_UPDATE bit. */
 	*update_bit = vx->runstate_entry_time >> 56;
 	smp_wmb();
 
 	if (user_len2)
 		read_unlock(&gpc2->lock);
- done_1:
+
 	read_unlock_irqrestore(&gpc1->lock, flags);
 
 	mark_page_dirty_in_slot(v->kvm, gpc1->memslot, gpc1->gpa >> PAGE_SHIFT);
-- 
2.35.3



* [PATCH v1 2/2] KVM: x86/xen: Allow XEN_RUNSTATE_UPDATE flag behaviour to be configured
From: David Woodhouse @ 2022-11-27 12:22 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: Paul Durrant, Michal Luczaj, kvm

From: David Woodhouse <dwmw@amazon.co.uk>

Closer inspection of the Xen code shows that we aren't supposed to be
using the XEN_RUNSTATE_UPDATE flag unconditionally. It should be
explicitly enabled by guests through the HYPERVISOR_vm_assist hypercall.
If we randomly set the top bit of ->state_entry_time for a guest that
hasn't asked for it and doesn't expect it, that could make the runtimes
fail to add up and confuse the guest. Without the flag it's perfectly
safe for a vCPU to read its own vcpu_runstate_info; just not for one
vCPU to read *another's*.
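
(For background: the flag is what lets a cross-vCPU reader detect an
update in flight and retry. A sketch of the usual guest-side read loop,
modelled loosely on what Linux does in drivers/xen/time.c, with
kernel-style READ_ONCE/smp_rmb() assumed and declarations abbreviated:

	/* rs points at the other vCPU's shared runstate area. */
	struct vcpu_runstate_info snap;
	uint64_t t;

	do {
		t = READ_ONCE(rs->state_entry_time);
		smp_rmb();
		snap = *rs;	/* may tear; the loop catches that */
		smp_rmb();
	} while (READ_ONCE(rs->state_entry_time) != t ||
		 (t & XEN_RUNSTATE_UPDATE));

Without the flag set during updates, a reader can see a half-written
time[] array under a stable state_entry_time and never know. That's
harmless for a vCPU reading its own runstate, since it can't race with
itself, but not for anyone else.)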

I briefly pondered adding a word for the whole set of VMASST_TYPE_*
flags but the only one we care about for HVM guests is this, so it
seemed a bit pointless.
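
For reference, the end-to-end enablement would look roughly like this
(a sketch: vm_fd and the hypercall-exit plumbing are assumed, but the
attribute usage matches the selftest hunk below):

	/* Guest side: opt in via vm_assist, as Xen guests already do. */
	HYPERVISOR_vm_assist(VMASST_CMD_enable,
			     VMASST_TYPE_runstate_update_flag);

	/* VMM side: on intercepting that hypercall, flip the attribute. */
	struct kvm_xen_hvm_attr ruf = {
		.type = KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG,
		.u.runstate_update_flag = 1,
	};
	ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &ruf);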

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 Documentation/virt/kvm/api.rst                | 34 +++++++++--
 arch/x86/include/asm/kvm_host.h               |  1 +
 arch/x86/kvm/x86.c                            |  3 +-
 arch/x86/kvm/xen.c                            | 57 ++++++++++++++-----
 include/uapi/linux/kvm.h                      |  4 ++
 .../selftests/kvm/x86_64/xen_shinfo_test.c    | 14 +++++
 6 files changed, 93 insertions(+), 20 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index eee9f857a986..615f5b077d39 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5306,6 +5306,7 @@ KVM_PV_DUMP
 	union {
 		__u8 long_mode;
 		__u8 vector;
+		__u8 runstate_update_flag;
 		struct {
 			__u64 gfn;
 		} shared_info;
@@ -5383,6 +5384,14 @@ KVM_XEN_ATTR_TYPE_XEN_VERSION
   event channel delivery, so responding within the kernel without
   exiting to userspace is beneficial.
 
+KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG
+  This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates
+  support for KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG. It enables the
+  XEN_RUNSTATE_UPDATE flag which allows guest vCPUs to safely read
+  other vCPUs' vcpu_runstate_info. Xen guests enable this feature via
+  the VMASST_TYPE_runstate_update_flag argument to the HYPERVISOR_vm_assist
+  hypercall.
+
 4.127 KVM_XEN_HVM_GET_ATTR
 --------------------------
 
@@ -8026,12 +8035,13 @@ to userspace.
 This capability indicates the features that Xen supports for hosting Xen
 PVHVM guests. Valid flags are::
 
-  #define KVM_XEN_HVM_CONFIG_HYPERCALL_MSR	(1 << 0)
-  #define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL	(1 << 1)
-  #define KVM_XEN_HVM_CONFIG_SHARED_INFO	(1 << 2)
-  #define KVM_XEN_HVM_CONFIG_RUNSTATE		(1 << 3)
-  #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL	(1 << 4)
-  #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND	(1 << 5)
+  #define KVM_XEN_HVM_CONFIG_HYPERCALL_MSR		(1 << 0)
+  #define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL		(1 << 1)
+  #define KVM_XEN_HVM_CONFIG_SHARED_INFO		(1 << 2)
+  #define KVM_XEN_HVM_CONFIG_RUNSTATE			(1 << 3)
+  #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL		(1 << 4)
+  #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND		(1 << 5)
+  #define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG	(1 << 6)
 
 The KVM_XEN_HVM_CONFIG_HYPERCALL_MSR flag indicates that the KVM_XEN_HVM_CONFIG
 ioctl is available, for the guest to set its hypercall page.
@@ -8063,6 +8073,18 @@ KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID/TIMER/UPCALL_VECTOR vCPU attributes.
 related to event channel delivery, timers, and the XENVER_version
 interception.
 
+The KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG flag indicates that KVM supports
+the KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG attribute in the KVM_XEN_SET_ATTR
+and KVM_XEN_GET_ATTR ioctls. This controls whether KVM will set the
+XEN_RUNSTATE_UPDATE flag in guest memory mapped vcpu_runstate_info during
+updates of the runstate information. Note that versions of KVM which support
+the RUNSTATE feature above, but not the RUNSTATE_UPDATE_FLAG feature, will
+always set the XEN_RUNSTATE_UPDATE flag when updating the guest structure,
+which is perhaps counterintuitive. When this flag is advertised, KVM will
+behave more correctly, not using the XEN_RUNSTATE_UPDATE flag until/unless
+specifically enabled (by the guest making the hypercall, causing the VMM
+to enable the KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG attribute).
+
 8.31 KVM_CAP_PPC_MULTITCE
 -------------------------
 
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 70af7240a1d5..283cbb83d6ae 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1113,6 +1113,7 @@ struct msr_bitmap_range {
 struct kvm_xen {
 	u32 xen_version;
 	bool long_mode;
+	bool runstate_update_flag;
 	u8 upcall_vector;
 	struct gfn_to_pfn_cache shinfo_cache;
 	struct idr evtchn_ports;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f18f579ebde8..246bdc9a9154 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4431,7 +4431,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		    KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL |
 		    KVM_XEN_HVM_CONFIG_EVTCHN_SEND;
 		if (sched_info_on())
-			r |= KVM_XEN_HVM_CONFIG_RUNSTATE;
+			r |= KVM_XEN_HVM_CONFIG_RUNSTATE |
+			     KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG;
 		break;
 #endif
 	case KVM_CAP_SYNC_REGS:
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index d1a506986a32..9187d024d006 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -179,7 +179,8 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 	struct vcpu_runstate_info rs;
 	unsigned long flags;
 	size_t times_ofs;
-	uint8_t *update_bit;
+	uint8_t *update_bit = NULL;
+	uint64_t entry_time;
 	uint64_t *rs_times;
 	int *rs_state;
 
@@ -297,7 +298,8 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 		 */
 		rs_state = gpc1->khva;
 		rs_times = gpc1->khva + times_ofs;
-		update_bit = ((void *)(&rs_times[1])) - 1;
+		if (v->kvm->arch.xen.runstate_update_flag)
+			update_bit = ((void *)(&rs_times[1])) - 1;
 	} else {
 		/*
 		 * The guest's runstate_info is split across two pages and we
@@ -351,12 +353,14 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 		 * The update_bit is still directly in the guest memory,
 		 * via one GPC or the other.
 		 */
-		if (user_len1 >= times_ofs + sizeof(uint64_t))
-			update_bit = gpc1->khva + times_ofs +
-				sizeof(uint64_t) - 1;
-		else
-			update_bit = gpc2->khva + times_ofs +
-				sizeof(uint64_t) - 1 - user_len1;
+		if (v->kvm->arch.xen.runstate_update_flag) {
+			if (user_len1 >= times_ofs + sizeof(uint64_t))
+				update_bit = gpc1->khva + times_ofs +
+					sizeof(uint64_t) - 1;
+			else
+				update_bit = gpc2->khva + times_ofs +
+					sizeof(uint64_t) - 1 - user_len1;
+		}
 
 #ifdef CONFIG_X86_64
 		/*
@@ -376,8 +380,12 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 	 * different cache line to the rest of the 64-bit word, due to
 	 * the (lack of) alignment constraints.
 	 */
-	*update_bit = (vx->runstate_entry_time | XEN_RUNSTATE_UPDATE) >> 56;
-	smp_wmb();
+	entry_time = vx->runstate_entry_time;
+	if (update_bit) {
+		entry_time |= XEN_RUNSTATE_UPDATE;
+		*update_bit = (vx->runstate_entry_time | XEN_RUNSTATE_UPDATE) >> 56;
+		smp_wmb();
+	}
 
 	/*
 	 * Now assemble the actual structure, either on our kernel stack
@@ -385,7 +393,7 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 	 * rs_times pointers were set up above.
 	 */
 	*rs_state = vx->current_runstate;
-	rs_times[0] = vx->runstate_entry_time | XEN_RUNSTATE_UPDATE;
+	rs_times[0] = entry_time;
 	memcpy(rs_times + 1, vx->runstate_times, sizeof(vx->runstate_times));
 
 	/* For the split case, we have to then copy it to the guest. */
@@ -396,8 +404,11 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 	smp_wmb();
 
 	/* Finally, clear the XEN_RUNSTATE_UPDATE bit. */
-	*update_bit = vx->runstate_entry_time >> 56;
-	smp_wmb();
+	if (update_bit) {
+		entry_time &= ~XEN_RUNSTATE_UPDATE;
+		*update_bit = entry_time >> 56;
+		smp_wmb();
+	}
 
 	if (user_len2)
 		read_unlock(&gpc2->lock);
@@ -619,6 +630,17 @@ int kvm_xen_hvm_set_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
 		r = 0;
 		break;
 
+	case KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG:
+		if (!sched_info_on()) {
+			r = -EOPNOTSUPP;
+			break;
+		}
+		mutex_lock(&kvm->lock);
+		kvm->arch.xen.runstate_update_flag = !!data->u.runstate_update_flag;
+		mutex_unlock(&kvm->lock);
+		r = 0;
+		break;
+
 	default:
 		break;
 	}
@@ -656,6 +678,15 @@ int kvm_xen_hvm_get_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
 		r = 0;
 		break;
 
+	case KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG:
+		if (!sched_info_on()) {
+			r = -EOPNOTSUPP;
+			break;
+		}
+		data->u.runstate_update_flag = kvm->arch.xen.runstate_update_flag;
+		r = 0;
+		break;
+
 	default:
 		break;
 	}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 7fea12369245..c962b8e426c1 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1270,6 +1270,7 @@ struct kvm_x86_mce {
 #define KVM_XEN_HVM_CONFIG_RUNSTATE		(1 << 3)
 #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL	(1 << 4)
 #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND		(1 << 5)
+#define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG	(1 << 6)
 
 struct kvm_xen_hvm_config {
 	__u32 flags;
@@ -1773,6 +1774,7 @@ struct kvm_xen_hvm_attr {
 	union {
 		__u8 long_mode;
 		__u8 vector;
+		__u8 runstate_update_flag;
 		struct {
 			__u64 gfn;
 		} shared_info;
@@ -1813,6 +1815,8 @@ struct kvm_xen_hvm_attr {
 /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
 #define KVM_XEN_ATTR_TYPE_EVTCHN		0x3
 #define KVM_XEN_ATTR_TYPE_XEN_VERSION		0x4
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG */
+#define KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG	0x5
 
 /* Per-vCPU Xen attributes */
 #define KVM_XEN_VCPU_GET_ATTR	_IOWR(KVMIO, 0xca, struct kvm_xen_vcpu_attr)
diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index 1f4fd97db959..721f6a693799 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -446,6 +446,7 @@ int main(int argc, char *argv[])
 	TEST_REQUIRE(xen_caps & KVM_XEN_HVM_CONFIG_SHARED_INFO);
 
 	bool do_runstate_tests = !!(xen_caps & KVM_XEN_HVM_CONFIG_RUNSTATE);
+	bool do_runstate_flag = !!(xen_caps & KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG);
 	bool do_eventfd_tests = !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL);
 	bool do_evtchn_tests = do_eventfd_tests && !!(xen_caps & KVM_XEN_HVM_CONFIG_EVTCHN_SEND);
 
@@ -481,6 +482,19 @@ int main(int argc, char *argv[])
 	};
 	vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &lm);
 
+	if (do_runstate_flag) {
+		struct kvm_xen_hvm_attr ruf = {
+			.type = KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG,
+			.u.runstate_update_flag = 1,
+		};
+		vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &ruf);
+
+		ruf.u.runstate_update_flag = 0;
+		vm_ioctl(vm, KVM_XEN_HVM_GET_ATTR, &ruf);
+		TEST_ASSERT(ruf.u.runstate_update_flag == 1,
+			    "Failed to read back RUNSTATE_UPDATE_FLAG attr");
+	}
+
 	struct kvm_xen_hvm_attr ha = {
 		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
 		.u.shared_info.gfn = SHINFO_REGION_GPA / PAGE_SIZE,
-- 
2.35.3



* Re: [PATCH v1 1/2] KVM: x86/xen: Reconcile fast and slow paths for runstate update
From: Paul Durrant @ 2022-11-30 14:05 UTC (permalink / raw)
  To: David Woodhouse, Sean Christopherson, Paolo Bonzini; +Cc: Michal Luczaj, kvm

On 27/11/2022 12:22, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Instead of having a completely separate fast path for the case where the
> guest's runstate_info fits entirely in a page, recognise the similarity.
> 
> In both cases, the @update_bit pointer points to the byte containing the
> XEN_RUNSTATE_UPDATE flag directly in the guest, via one of the GPCs.
> 
> In both cases, the actual guest structure (compat or not) is built up
> from the fields in @vx, following preset pointers to the state and times
> fields. The only difference is whether those pointers point to the kernel
> stack (in the split case) or to guest memory directly via the GPC.
> 
> We can unify it by just setting the rs_state and rs_times pointers up
> accordingly for each case. Then the only real difference is the dual
> memcpy, which can be made conditional, so the whole of that separate
> fast path can go away, disappearing into the slow path without actually
> doing the slow part.
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

Reviewed-by: Paul Durrant <paul@xen.org>



* Re: [PATCH v1 2/2] KVM: x86/xen: Allow XEN_RUNSTATE_UPDATE flag behaviour to be configured
From: Paul Durrant @ 2022-11-30 14:07 UTC (permalink / raw)
  To: David Woodhouse, Sean Christopherson, Paolo Bonzini; +Cc: Michal Luczaj, kvm

On 27/11/2022 12:22, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Closer inspection of the Xen code shows that we aren't supposed to be
> using the XEN_RUNSTATE_UPDATE flag unconditionally. It should be
> explicitly enabled by guests through the HYPERVISOR_vm_assist hypercall.
> If we randomly set the top bit of ->state_entry_time for a guest that
> hasn't asked for it and doesn't expect it, that could make the runtimes
> fail to add up and confuse the guest. Without the flag it's perfectly
> safe for a vCPU to read its own vcpu_runstate_info; just not for one
> vCPU to read *another's*.
> 
> I briefly pondered adding a word for the whole set of VMASST_TYPE_*
> flags but the only one we care about for HVM guests is this, so it
> seemed a bit pointless.
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>

Reviewed-by: Paul Durrant <paul@xen.org>



* Re: [PATCH v1 0/2] KVM: x86/xen: Runstate cleanups on top of kvm/queue
From: Paolo Bonzini @ 2022-11-30 16:03 UTC (permalink / raw)
  To: David Woodhouse, Sean Christopherson; +Cc: Paul Durrant, Michal Luczaj, kvm

On 11/27/22 13:22, David Woodhouse wrote:
> Clean the update code up a little bit by unifying the fast and slow
> paths as discussed, and make the update flag conditional to avoid
> confusing older guests that don't ask for it.
> 
> On top of kvm/queue as of today at commit da5f28e10aa7d.
> 
> (This is identical to what I sent a couple of minutes ago, except that
> this time I didn't forget to Cc the list)
> 
> 

Merged, thanks.

Paolo



* Re: [PATCH v1 0/2] KVM: x86/xen: Runstate cleanups on top of kvm/queue
From: David Woodhouse @ 2022-11-30 19:51 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson; +Cc: Paul Durrant, Michal Luczaj, kvm


On Wed, 2022-11-30 at 17:03 +0100, Paolo Bonzini wrote:
> On 11/27/22 13:22, David Woodhouse wrote:
> > Clean the update code up a little bit by unifying the fast and slow
> > paths as discussed, and make the update flag conditional to avoid
> > confusing older guests that don't ask for it.
> > 
> > On top of kvm/queue as of today at commit da5f28e10aa7d.
> > 
> > (This is identical to what I sent a couple of minutes ago, except that
> > this time I didn't forget to Cc the list)
> > 
> > 
> 
> Merged, thanks.

Thanks. I've rebased the remaining GPC fixes on top and pushed them out
(along with Metin's SCHEDOP_poll 32-bit compat support) to

https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/gpc-fixes

Metin Kaya (1):
      KVM: x86/xen: add support for 32-bit guests in SCHEDOP_poll

Michal Luczaj (4):
      KVM: Store immutable gfn_to_pfn_cache properties
      KVM: Use gfn_to_pfn_cache's immutable "kvm" in kvm_gpc_check()
      KVM: Clean up hva_to_pfn_retry()
      KVM: Use gfn_to_pfn_cache's immutable "kvm" in kvm_gpc_refresh()

Sean Christopherson (4):
      KVM: Drop KVM's API to allow temporarily unmapping gfn=>pfn cache
      KVM: Do not partially reinitialize gfn=>pfn cache during activation
      KVM: Drop @gpa from exported gfn=>pfn cache check() and refresh() helpers
      KVM: Skip unnecessary "unmap" if gpc is already valid during refresh

I still haven't reinstated the last of those patches to make gpc->len
immutable, although I think we concluded it's fine just to make the
runstate code claim gpc->len=1 and manage its own destiny, right?

In reviewing the merge/squash I spotted a minor cosmetic nit in my
'Allow XEN_RUNSTATE_UPDATE flag behaviour to be configured' commit.
It'd be slightly prettier like this, although the compiler really ought
to emit identical code.

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 5208e05ca9a6..76b7fc6d543a 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -382,7 +382,7 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
 	entry_time = vx->runstate_entry_time;
 	if (update_bit) {
 		entry_time |= XEN_RUNSTATE_UPDATE;
-		*update_bit = (vx->runstate_entry_time | XEN_RUNSTATE_UPDATE) >> 56;
+		*update_bit = entry_time >> 56;
 		smp_wmb();
 	}
 





* Re: [PATCH v1 0/2] KVM: x86/xen: Runstate cleanups on top of kvm/queue
From: Paolo Bonzini @ 2022-12-02 19:00 UTC (permalink / raw)
  To: David Woodhouse, Sean Christopherson; +Cc: Paul Durrant, Michal Luczaj, kvm

On 11/30/22 20:51, David Woodhouse wrote:
> On Wed, 2022-11-30 at 17:03 +0100, Paolo Bonzini wrote:
>> On 11/27/22 13:22, David Woodhouse wrote:
>>> Clean the update code up a little bit by unifying the fast and slow
>>> paths as discussed, and make the update flag conditional to avoid
>>> confusing older guests that don't ask for it.
>>>
>>> On top of kvm/queue as of today at commit da5f28e10aa7d.
>>>
>>> (This is identical to what I sent a couple of minutes ago, except that
>>> this time I didn't forget to Cc the list)
>>>
>>>
>>
>> Merged, thanks.
> 
> Thanks. I've rebased the remaining GPC fixes on top and pushed them out
> (along with Metin's SCHEDOP_poll 32-bit compat support) to
> 
> https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/gpc-fixes

Oh, so we do pull requests now too?  I'm all for it, but please use 
signed tags. :)

> I still haven't reinstated the last of those patches to make gpc->len
> immutable, although I think we concluded it's fine just to make the
> runstate code claim gpc->len=1 and manage its own destiny, right?

Yeah, I'm not super keen on that either, but I guess it can work with 
any of len == 1 or len == PAGE_SIZE - offset.

Related to this, for 6.3 I will send a cleanup of the API to put 
together lock and check.
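
(Hypothetically, something of this shape — an invented name, purely to
illustrate fusing the two steps:

	/*
	 * Acquire gpc->lock for read and check validity in one call,
	 * so a caller can't take the lock but forget kvm_gpc_check().
	 */
	bool kvm_gpc_lock_check(struct gfn_to_pfn_cache *gpc,
				unsigned long *flags);

with the matching unlock dropping the lock unconditionally.)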

Paolo

> In reviewing the merge/squash I spotted a minor cosmetic nit in my
> 'Allow XEN_RUNSTATE_UPDATE flag behaviour to be configured' commit.
> It'd be slightly prettier like this, although the compiler really ought
> to emit identical code.
> 
> diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
> index 5208e05ca9a6..76b7fc6d543a 100644
> --- a/arch/x86/kvm/xen.c
> +++ b/arch/x86/kvm/xen.c
> @@ -382,7 +382,7 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
>   	entry_time = vx->runstate_entry_time;
>   	if (update_bit) {
>   		entry_time |= XEN_RUNSTATE_UPDATE;
> -		*update_bit = (vx->runstate_entry_time | XEN_RUNSTATE_UPDATE) >> 56;
> +		*update_bit = entry_time >> 56;
>   		smp_wmb();
>   	}
>   
> 
> 



* Re: [PATCH v1 0/2] KVM: x86/xen: Runstate cleanups on top of kvm/queue
From: Sean Christopherson @ 2022-12-02 19:03 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: David Woodhouse, Paul Durrant, Michal Luczaj, kvm

On Fri, Dec 02, 2022, Paolo Bonzini wrote:
> On 11/30/22 20:51, David Woodhouse wrote:
> > On Wed, 2022-11-30 at 17:03 +0100, Paolo Bonzini wrote:
> > > On 11/27/22 13:22, David Woodhouse wrote:
> > > > Clean the update code up a little bit by unifying the fast and slow
> > > > paths as discussed, and make the update flag conditional to avoid
> > > > confusing older guests that don't ask for it.
> > > > 
> > > > On top of kvm/queue as of today at commit da5f28e10aa7d.
> > > > 
> > > > (This is identical to what I sent a couple of minutes ago, except that
> > > > this time I didn't forget to Cc the list)
> > > > 
> > > > 
> > > 
> > > Merged, thanks.
> > 
> > Thanks. I've rebased the remaining GPC fixes on top and pushed them out
> > (along with Metin's SCHEDOP_poll 32-bit compat support) to
> > 
> > https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/gpc-fixes
> 
> Oh, so we do pull requests now too?  I'm all for it, but please use signed
> tags. :)
> 
> > I still haven't reinstated the last of those patches to make gpc->len
> > immutable, although I think we concluded it's fine just to make the
> > runstate code claim gpc->len=1 and manage its own destiny, right?
> 
> Yeah, I'm not super keen on that either, but I guess it can work with any of
> len == 1 or len == PAGE_SIZE - offset.
> 
> Related to this, for 6.3 I will send a cleanup of the API to put together
> lock and check.

We ended up with multiple threads on this topic.  Maybe pick up the conversation
here?   https://lore.kernel.org/all/Y4ovbTiLQ2Jy0em9@google.com


* Re: [PATCH v1 0/2] KVM: x86/xen: Runstate cleanups on top of kvm/queue
From: David Woodhouse @ 2022-12-02 20:17 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson; +Cc: Paul Durrant, Michal Luczaj, kvm


On Fri, 2022-12-02 at 20:00 +0100, Paolo Bonzini wrote:
> > > Merged, thanks.
> > 
> > Thanks. I've rebased the remaining GPC fixes on top and pushed them out
> > (along with Metin's SCHEDOP_poll 32-bit compat support) to
> > 
> > https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/gpc-fixes
> > 
> 
> Oh, so we do pull requests now too?  I'm all for it, but please use 
> signed tags. :)

Not really intended as a pull request (unless you really want one).
I've just kept it rebased on top of the other parts as they've evolved,
so we always have the latest state in a known location. Didn't seem
worth the noise of actually *resending* them repeatedly. 

> > I still haven't reinstated the last of those patches to make gpc->len
> > immutable, although I think we concluded it's fine just to make the
> > runstate code claim gpc->len=1 and manage its own destiny, right?
> 
> Yeah, I'm not super keen on that either, but I guess it can work with 
> any of len == 1 or len == PAGE_SIZE - offset.
> 
> Related to this, for 6.3 I will send a cleanup of the API to put 
> together lock and check.

I suspect now's the time to actually resend those patches on top of the
current kvm/queue, and we can work out the length and lock/check things
on top?


