kvmarm.lists.cs.columbia.edu archive mirror
* [PATCH 0/6] KVM: Fix dirty-ring ordering on weakly ordered architectures
@ 2022-09-22 17:01 Marc Zyngier
  2022-09-22 17:01 ` [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state Marc Zyngier
                   ` (5 more replies)
  0 siblings, 6 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-09-22 17:01 UTC (permalink / raw)
  To: kvmarm, kvm
  Cc: will, catalin.marinas, andrew.jones, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah

[Same distribution list as Gavin's dirty-ring on arm64 series]

As Gavin started posting patches enabling the dirty-ring infrastructure
on arm64 [1], it quickly became apparent that the API was never intended
to work on relaxed memory ordering architectures (owing to its x86
origins).

This series tries to retrofit some ordering into the existing API by:

- relying on acquire/release semantics which are the default on x86,
  but need to be explicit on arm64

- adding a new capability that indicates which flavor is supported, either
  with explicit ordering (arm64) or both implicit and explicit (x86),
  as suggested by Paolo at KVM Forum

- documenting the requirements for this new capability on weakly ordered
  architectures

- updating the selftests to do the right thing
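
The userspace side of the protocol can be sketched with C11 atomics standing
in for store-release/load-acquire (the struct and helper name below are
simplified for illustration; the real layout is struct kvm_dirty_gfn in the
KVM UAPI headers):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Simplified stand-in for the UAPI struct kvm_dirty_gfn. */
struct dirty_gfn {
	_Atomic uint32_t flags;
	uint32_t slot;
	uint64_t offset;
};

#define DIRTY_GFN_F_DIRTY (1u << 0)
#define DIRTY_GFN_F_RESET (1u << 1)

/*
 * Harvest one entry: read the payload only after observing DIRTY
 * (load-acquire), and publish RESET with a store-release so a ring
 * reset on another CPU cannot observe RESET before our payload reads.
 */
static int harvest_one(struct dirty_gfn *gfn, uint32_t *slot, uint64_t *offset)
{
	if (!(atomic_load_explicit(&gfn->flags, memory_order_acquire) &
	      DIRTY_GFN_F_DIRTY))
		return 0;		/* nothing to collect */
	*slot = gfn->slot;
	*offset = gfn->offset;
	atomic_store_explicit(&gfn->flags, DIRTY_GFN_F_RESET,
			      memory_order_release);
	return 1;
}
```

On x86, plain loads and stores already give this ordering for free; on arm64
the explicit acquire/release is exactly what the new capability advertises as
mandatory.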

Ideally, this series should be a prefix of Gavin's, plus a small change
to his series:

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 0309b2d0f2da..7785379c5048 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -32,7 +32,7 @@ menuconfig KVM
 	select KVM_VFIO
 	select HAVE_KVM_EVENTFD
 	select HAVE_KVM_IRQFD
-	select HAVE_KVM_DIRTY_RING
+	select HAVE_KVM_DIRTY_RING_ORDERED
 	select HAVE_KVM_MSI
 	select HAVE_KVM_IRQCHIP
 	select HAVE_KVM_IRQ_ROUTING

This has been very lightly tested on an arm64 box with Gavin's v3 [2] series.

[1] https://lore.kernel.org/lkml/YyiV%2Fl7O23aw5aaO@xz-m1.local/T/
[2] https://lore.kernel.org/r/20220922003214.276736-1-gshan@redhat.com

Marc Zyngier (6):
  KVM: Use acquire/release semantics when accessing dirty ring GFN state
  KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ORDERED
  KVM: Document weakly ordered architecture requirements for dirty ring
  KVM: selftests: dirty-log: Upgrade dirty_gfn_set_collected() to
    store-release
  KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ORDERED if
    available

 Documentation/virt/kvm/api.rst               | 16 +++++++++++++---
 arch/x86/kvm/Kconfig                         |  1 +
 include/linux/kvm_dirty_ring.h               |  6 +++---
 include/uapi/linux/kvm.h                     |  1 +
 tools/testing/selftests/kvm/dirty_log_test.c |  6 ++++--
 tools/testing/selftests/kvm/lib/kvm_util.c   |  5 ++++-
 virt/kvm/Kconfig                             | 14 ++++++++++++++
 virt/kvm/Makefile.kvm                        |  2 +-
 virt/kvm/dirty_ring.c                        |  4 ++--
 virt/kvm/kvm_main.c                          | 11 +++++++++--
 10 files changed, 52 insertions(+), 14 deletions(-)

-- 
2.34.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


* [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state
  2022-09-22 17:01 [PATCH 0/6] KVM: Fix dirty-ring ordering on weakly ordered architectures Marc Zyngier
@ 2022-09-22 17:01 ` Marc Zyngier
  2022-09-22 21:38   ` Peter Xu
  2022-09-22 17:01 ` [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option Marc Zyngier
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-22 17:01 UTC (permalink / raw)
  To: kvmarm, kvm
  Cc: will, catalin.marinas, andrew.jones, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah

The current implementation of the dirty ring has an implicit requirement
that stores to the dirty ring from userspace must be:

- ordered with one another

- visible from another CPU executing a ring reset

While these implicit requirements work well for x86 (and any other
TSO-like architecture), they do not work for more relaxed architectures
such as arm64 where stores to different addresses can be freely
reordered, and loads from these addresses may not observe writes from
another CPU unless the required barriers (or acquire/release semantics)
are used.
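
The failure mode can be illustrated with a generic release/acquire pairing
(C11 atomics; names and layout are made up for the example, and this is not
a faithful model of the ring's producer/consumer split):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative payload plus flag, mimicking one ring entry. */
static uint32_t slot;
static uint64_t offset;
static _Atomic uint32_t flags;	/* 0 = not published, 1 = published */

static uint32_t seen_slot;
static uint64_t seen_offset;

/*
 * Writer side: fill in the payload, then publish it with a
 * store-release so the payload writes cannot be reordered after the
 * flag update.
 */
static void publish(void)
{
	slot = 7;
	offset = 0x1000;
	atomic_store_explicit(&flags, 1, memory_order_release);
}

/*
 * Reader side (possibly another CPU): once the load-acquire observes
 * the flag, the payload reads below are ordered after it and must see
 * 7/0x1000.  With plain accesses, arm64 is free to return stale values.
 */
static void consume(void)
{
	while (!atomic_load_explicit(&flags, memory_order_acquire))
		;
	seen_slot = slot;
	seen_offset = offset;
}
```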

In order to start fixing this, upgrade the ring reset accesses:

- the kvm_dirty_gfn_harvested() helper now uses acquire semantics
  so it is ordered after all previous writes, including that from
  userspace

- the kvm_dirty_gfn_set_invalid() helper now uses release semantics
  so that the next_slot and next_offset reads don't drift past
  the entry invalidation

This is only a partial fix as the userspace side also needs upgrading.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 virt/kvm/dirty_ring.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index f4c2a6eb1666..784bed80221d 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -79,12 +79,12 @@ static inline void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
 
 static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
 {
-	gfn->flags = KVM_DIRTY_GFN_F_DIRTY;
+	smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_DIRTY);
 }
 
 static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
 {
-	return gfn->flags & KVM_DIRTY_GFN_F_RESET;
+	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
 }
 
 int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
-- 
2.34.1


* [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-22 17:01 [PATCH 0/6] KVM: Fix dirty-ring ordering on weakly ordered architectures Marc Zyngier
  2022-09-22 17:01 ` [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state Marc Zyngier
@ 2022-09-22 17:01 ` Marc Zyngier
  2022-09-22 21:48   ` Peter Xu
  2022-09-22 17:01 ` [PATCH 3/6] KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ORDERED Marc Zyngier
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-22 17:01 UTC (permalink / raw)
  To: kvmarm, kvm
  Cc: will, catalin.marinas, andrew.jones, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah

In order to differentiate between architectures that require no extra
synchronisation when accessing the dirty ring and those that do,
add a new capability (KVM_CAP_DIRTY_LOG_RING_ORDERED) that identifies
the latter sort. TSO architectures can obviously advertise both, while
relaxed architectures must only advertise the ORDERED version.
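
From a VMM's point of view the resulting selection logic would be roughly
the following (a sketch: the two arguments stand in for KVM_CHECK_EXTENSION
results, where a nonzero return means the capability is supported):

```c
/* Possible outcomes of probing the two dirty-ring capabilities. */
enum ring_cap { RING_NONE, RING_PLAIN, RING_ORDERED };

/*
 * plain_ret / ordered_ret are the KVM_CHECK_EXTENSION results for
 * KVM_CAP_DIRTY_LOG_RING and KVM_CAP_DIRTY_LOG_RING_ORDERED (nonzero
 * means supported).  Prefer ORDERED: it is the only flavour a weakly
 * ordered architecture may expose, and TSO machines expose both.
 */
static enum ring_cap pick_dirty_ring_cap(long plain_ret, long ordered_ret)
{
	if (ordered_ret > 0)
		return RING_ORDERED;
	if (plain_ret > 0)
		return RING_PLAIN;
	return RING_NONE;
}
```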

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 include/linux/kvm_dirty_ring.h |  6 +++---
 include/uapi/linux/kvm.h       |  1 +
 virt/kvm/Kconfig               | 14 ++++++++++++++
 virt/kvm/Makefile.kvm          |  2 +-
 virt/kvm/kvm_main.c            | 11 +++++++++--
 5 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
index 906f899813dc..7a0c90ae9a3f 100644
--- a/include/linux/kvm_dirty_ring.h
+++ b/include/linux/kvm_dirty_ring.h
@@ -27,7 +27,7 @@ struct kvm_dirty_ring {
 	int index;
 };
 
-#ifndef CONFIG_HAVE_KVM_DIRTY_RING
+#ifndef CONFIG_HAVE_KVM_DIRTY_LOG
 /*
  * If CONFIG_HAVE_HVM_DIRTY_RING not defined, kvm_dirty_ring.o should
  * not be included as well, so define these nop functions for the arch.
@@ -69,7 +69,7 @@ static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
 	return true;
 }
 
-#else /* CONFIG_HAVE_KVM_DIRTY_RING */
+#else /* CONFIG_HAVE_KVM_DIRTY_LOG */
 
 u32 kvm_dirty_ring_get_rsvd_entries(void);
 int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size);
@@ -92,6 +92,6 @@ struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, u32 offset);
 void kvm_dirty_ring_free(struct kvm_dirty_ring *ring);
 bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring);
 
-#endif /* CONFIG_HAVE_KVM_DIRTY_RING */
+#endif /* CONFIG_HAVE_KVM_DIRTY_LOG */
 
 #endif	/* KVM_DIRTY_RING_H */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index eed0315a77a6..c1c9c0c8be5c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1177,6 +1177,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_VM_DISABLE_NX_HUGE_PAGES 220
 #define KVM_CAP_S390_ZPCI_OP 221
 #define KVM_CAP_S390_CPU_TOPOLOGY 222
+#define KVM_CAP_DIRTY_LOG_RING_ORDERED 223
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index a8c5c9f06b3c..1023426bf7dd 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -16,8 +16,22 @@ config HAVE_KVM_IRQFD
 config HAVE_KVM_IRQ_ROUTING
        bool
 
+config HAVE_KVM_DIRTY_LOG
+       bool
+
+# Only strongly ordered architectures can select this, as it doesn't
+# put any constraint on userspace ordering. They also can select the
+# _ORDERED version.
 config HAVE_KVM_DIRTY_RING
        bool
+       select HAVE_KVM_DIRTY_LOG
+       depends on X86
+
+# Weakly ordered architectures can only select this, advertising
+# to userspace the additional ordering requirements.
+config HAVE_KVM_DIRTY_RING_ORDERED
+       bool
+       select HAVE_KVM_DIRTY_LOG
 
 config HAVE_KVM_EVENTFD
        bool
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index 2c27d5d0c367..2bc6d0bb5e5c 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -10,5 +10,5 @@ kvm-$(CONFIG_KVM_VFIO) += $(KVM)/vfio.o
 kvm-$(CONFIG_KVM_MMIO) += $(KVM)/coalesced_mmio.o
 kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
-kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
+kvm-$(CONFIG_HAVE_KVM_DIRTY_LOG) += $(KVM)/dirty_ring.o
 kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 584a5bab3af3..cb1c103e2018 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3304,7 +3304,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm,
 {
 	struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
 
-#ifdef CONFIG_HAVE_KVM_DIRTY_RING
+#ifdef CONFIG_HAVE_KVM_DIRTY_LOG
 	if (WARN_ON_ONCE(!vcpu) || WARN_ON_ONCE(vcpu->kvm != kvm))
 		return;
 #endif
@@ -3758,7 +3758,7 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);
 
 static bool kvm_page_in_dirty_ring(struct kvm *kvm, unsigned long pgoff)
 {
-#ifdef CONFIG_HAVE_KVM_DIRTY_RING
+#ifdef CONFIG_HAVE_KVM_DIRTY_LOG
 	return (pgoff >= KVM_DIRTY_LOG_PAGE_OFFSET) &&
 	    (pgoff < KVM_DIRTY_LOG_PAGE_OFFSET +
 	     kvm->dirty_ring_size / PAGE_SIZE);
@@ -4479,6 +4479,12 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 		return KVM_DIRTY_RING_MAX_ENTRIES * sizeof(struct kvm_dirty_gfn);
 #else
 		return 0;
+#endif
+	case KVM_CAP_DIRTY_LOG_RING_ORDERED:
+#ifdef CONFIG_HAVE_KVM_DIRTY_RING_ORDERED
+		return KVM_DIRTY_RING_MAX_ENTRIES * sizeof(struct kvm_dirty_gfn);
+#else
+		return 0;
 #endif
 	case KVM_CAP_BINARY_STATS_FD:
 	case KVM_CAP_SYSTEM_EVENT_DATA:
@@ -4580,6 +4586,7 @@ static int kvm_vm_ioctl_enable_cap_generic(struct kvm *kvm,
 		return 0;
 	}
 	case KVM_CAP_DIRTY_LOG_RING:
+	case KVM_CAP_DIRTY_LOG_RING_ORDERED:
 		return kvm_vm_ioctl_enable_dirty_log_ring(kvm, cap->args[0]);
 	default:
 		return kvm_vm_ioctl_enable_cap(kvm, cap);
-- 
2.34.1


* [PATCH 3/6] KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ORDERED
  2022-09-22 17:01 [PATCH 0/6] KVM: Fix dirty-ring ordering on weakly ordered architectures Marc Zyngier
  2022-09-22 17:01 ` [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state Marc Zyngier
  2022-09-22 17:01 ` [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option Marc Zyngier
@ 2022-09-22 17:01 ` Marc Zyngier
  2022-09-23 22:46   ` Peter Xu
  2022-09-22 17:01 ` [PATCH 4/6] KVM: Document weakly ordered architecture requirements for dirty ring Marc Zyngier
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-22 17:01 UTC (permalink / raw)
  To: kvmarm, kvm
  Cc: will, catalin.marinas, andrew.jones, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah

Since x86 is TSO (give or take), allow it to advertise the new
ORDERED version of the dirty ring capability. No other change is
required for it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/x86/kvm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index e3cbd7706136..eb63bc31ed1d 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -29,6 +29,7 @@ config KVM
 	select HAVE_KVM_PFNCACHE
 	select HAVE_KVM_IRQFD
 	select HAVE_KVM_DIRTY_RING
+	select HAVE_KVM_DIRTY_RING_ORDERED
 	select IRQ_BYPASS_MANAGER
 	select HAVE_KVM_IRQ_BYPASS
 	select HAVE_KVM_IRQ_ROUTING
-- 
2.34.1


* [PATCH 4/6] KVM: Document weakly ordered architecture requirements for dirty ring
  2022-09-22 17:01 [PATCH 0/6] KVM: Fix dirty-ring ordering on weakly ordered architectures Marc Zyngier
                   ` (2 preceding siblings ...)
  2022-09-22 17:01 ` [PATCH 3/6] KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ORDERED Marc Zyngier
@ 2022-09-22 17:01 ` Marc Zyngier
  2022-09-22 17:01 ` [PATCH 5/6] KVM: selftests: dirty-log: Upgrade dirty_gfn_set_collected() to store-release Marc Zyngier
  2022-09-22 17:01 ` [PATCH 6/6] KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ORDERED if available Marc Zyngier
  5 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-09-22 17:01 UTC (permalink / raw)
  To: kvmarm, kvm
  Cc: will, catalin.marinas, andrew.jones, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah

Now that the kernel can expose to userspace that its dirty ring
management relies on explicit ordering, document these new requirements
for VMMs to do the right thing.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 Documentation/virt/kvm/api.rst | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index abd7c32126ce..0912db16425f 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8019,8 +8019,8 @@ guest according to the bits in the KVM_CPUID_FEATURES CPUID leaf
 (0x40000001). Otherwise, a guest may use the paravirtual features
 regardless of what has actually been exposed through the CPUID leaf.
 
-8.29 KVM_CAP_DIRTY_LOG_RING
----------------------------
+8.29 KVM_CAP_DIRTY_LOG_RING/KVM_CAP_DIRTY_LOG_RING_ORDERED
+----------------------------------------------------------
 
 :Architectures: x86
 :Parameters: args[0] - size of the dirty log ring
@@ -8076,7 +8076,10 @@ The userspace should harvest this GFN and mark the flags from state
 to show that this GFN is harvested and waiting for a reset), and move
 on to the next GFN.  The userspace should continue to do this until the
 flags of a GFN have the DIRTY bit cleared, meaning that it has harvested
-all the dirty GFNs that were available.
+all the dirty GFNs that were available.  Note that on relaxed memory
+architectures, userspace stores to the ring buffer must be ordered,
+for example by using a store-release when available, or by using any
+other memory barrier that will ensure this ordering.
 
 It's not necessary for userspace to harvest the all dirty GFNs at once.
 However it must collect the dirty GFNs in sequence, i.e., the userspace
@@ -8106,6 +8109,13 @@ KVM_CAP_DIRTY_LOG_RING with an acceptable dirty ring size, the virtual
 machine will switch to ring-buffer dirty page tracking and further
 KVM_GET_DIRTY_LOG or KVM_CLEAR_DIRTY_LOG ioctls will fail.
 
+NOTE: KVM_CAP_DIRTY_LOG_RING_ORDERED is the only capability that can
+be exposed by relaxed memory architectures, in order to indicate the
+additional memory ordering requirements imposed on userspace when
+mutating an entry from DIRTY to HARVESTED states. Architectures with
+TSO-like ordering (such as x86) can expose both KVM_CAP_DIRTY_LOG_RING
+and KVM_CAP_DIRTY_LOG_RING_ORDERED to userspace.
+
 8.30 KVM_CAP_XEN_HVM
 --------------------
 
-- 
2.34.1


* [PATCH 5/6] KVM: selftests: dirty-log: Upgrade dirty_gfn_set_collected() to store-release
  2022-09-22 17:01 [PATCH 0/6] KVM: Fix dirty-ring ordering on weakly ordered architectures Marc Zyngier
                   ` (3 preceding siblings ...)
  2022-09-22 17:01 ` [PATCH 4/6] KVM: Document weakly ordered architecture requirements for dirty ring Marc Zyngier
@ 2022-09-22 17:01 ` Marc Zyngier
  2022-09-22 21:38   ` Paolo Bonzini
  2022-09-22 17:01 ` [PATCH 6/6] KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ORDERED if available Marc Zyngier
  5 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-22 17:01 UTC (permalink / raw)
  To: kvmarm, kvm
  Cc: will, catalin.marinas, andrew.jones, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah

To make sure that all the writes to the log marking the entries
as being in need of reset are observed in order, use a
smp_store_release() when updating the log entry flags.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 9c883c94d478..3d29f4bf4f9c 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -17,6 +17,7 @@
 #include <linux/bitmap.h>
 #include <linux/bitops.h>
 #include <linux/atomic.h>
+#include <asm/barrier.h>
 
 #include "kvm_util.h"
 #include "test_util.h"
@@ -284,7 +285,7 @@ static inline bool dirty_gfn_is_dirtied(struct kvm_dirty_gfn *gfn)
 
 static inline void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn)
 {
-	gfn->flags = KVM_DIRTY_GFN_F_RESET;
+	smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_RESET);
 }
 
 static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
-- 
2.34.1


* [PATCH 6/6] KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ORDERED if available
  2022-09-22 17:01 [PATCH 0/6] KVM: Fix dirty-ring ordering on weakly ordered architectures Marc Zyngier
                   ` (4 preceding siblings ...)
  2022-09-22 17:01 ` [PATCH 5/6] KVM: selftests: dirty-log: Upgrade dirty_gfn_set_collected() to store-release Marc Zyngier
@ 2022-09-22 17:01 ` Marc Zyngier
  5 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-09-22 17:01 UTC (permalink / raw)
  To: kvmarm, kvm
  Cc: will, catalin.marinas, andrew.jones, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah

Pick KVM_CAP_DIRTY_LOG_RING_ORDERED if exposed by the kernel.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 3 ++-
 tools/testing/selftests/kvm/lib/kvm_util.c   | 5 ++++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 3d29f4bf4f9c..30cdda41b8ec 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -265,7 +265,8 @@ static void default_after_vcpu_run(struct kvm_vcpu *vcpu, int ret, int err)
 
 static bool dirty_ring_supported(void)
 {
-	return kvm_has_cap(KVM_CAP_DIRTY_LOG_RING);
+	return (kvm_has_cap(KVM_CAP_DIRTY_LOG_RING) ||
+		kvm_has_cap(KVM_CAP_DIRTY_LOG_RING_ORDERED));
 }
 
 static void dirty_ring_create_vm_done(struct kvm_vm *vm)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9889fe0d8919..4c031f9fe717 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -82,7 +82,10 @@ unsigned int kvm_check_cap(long cap)
 
 void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size)
 {
-	vm_enable_cap(vm, KVM_CAP_DIRTY_LOG_RING, ring_size);
+	if (vm_check_cap(vm, KVM_CAP_DIRTY_LOG_RING_ORDERED))
+		vm_enable_cap(vm, KVM_CAP_DIRTY_LOG_RING_ORDERED, ring_size);
+	else
+		vm_enable_cap(vm, KVM_CAP_DIRTY_LOG_RING, ring_size);
 	vm->dirty_ring_size = ring_size;
 }
 
-- 
2.34.1


* Re: [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state
  2022-09-22 17:01 ` [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state Marc Zyngier
@ 2022-09-22 21:38   ` Peter Xu
  2022-09-22 23:46     ` Gavin Shan
  2022-09-23 14:19     ` Marc Zyngier
  0 siblings, 2 replies; 28+ messages in thread
From: Peter Xu @ 2022-09-22 21:38 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

Marc,

On Thu, Sep 22, 2022 at 06:01:28PM +0100, Marc Zyngier wrote:
> The current implementation of the dirty ring has an implicit requirement
> that stores to the dirty ring from userspace must be:
> 
> - ordered with one another
> 
> - visible from another CPU executing a ring reset
> 
> While these implicit requirements work well for x86 (and any other
> TSO-like architecture), they do not work for more relaxed architectures
> such as arm64 where stores to different addresses can be freely
> reordered, and loads from these addresses may not observe writes from
> another CPU unless the required barriers (or acquire/release semantics)
> are used.
> 
> In order to start fixing this, upgrade the ring reset accesses:
> 
> - the kvm_dirty_gfn_harvested() helper now uses acquire semantics
>   so it is ordered after all previous writes, including that from
>   userspace
> 
> - the kvm_dirty_gfn_set_invalid() helper now uses release semantics
>   so that the next_slot and next_offset reads don't drift past
>   the entry invalidation
> 
> This is only a partial fix as the userspace side also needs upgrading.

Paolo has one fix 4802bf910e ("KVM: dirty ring: add missing memory
barrier", 2022-09-01) which has already landed.

I think the other one, covering the reset path, was lost too.  I just posted a patch.

https://lore.kernel.org/qemu-devel/20220922213522.68861-1-peterx@redhat.com/
(link not yet available, but it should be soon)

> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  virt/kvm/dirty_ring.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> index f4c2a6eb1666..784bed80221d 100644
> --- a/virt/kvm/dirty_ring.c
> +++ b/virt/kvm/dirty_ring.c
> @@ -79,12 +79,12 @@ static inline void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
>  
>  static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
>  {
> -	gfn->flags = KVM_DIRTY_GFN_F_DIRTY;
> +	smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_DIRTY);

IIUC you meant kvm_dirty_gfn_set_invalid as the comment says?

kvm_dirty_gfn_set_dirtied() has been guarded by smp_wmb() and AFAICT that's
already safe.  Otherwise looks good to me.

Thanks,

>  }
>  
>  static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
>  {
> -	return gfn->flags & KVM_DIRTY_GFN_F_RESET;
> +	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
>  }
>  
>  int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
> -- 
> 2.34.1
> 

-- 
Peter Xu


* Re: [PATCH 5/6] KVM: selftests: dirty-log: Upgrade dirty_gfn_set_collected() to store-release
  2022-09-22 17:01 ` [PATCH 5/6] KVM: selftests: dirty-log: Upgrade dirty_gfn_set_collected() to store-release Marc Zyngier
@ 2022-09-22 21:38   ` Paolo Bonzini
  2022-09-23 14:49     ` Marc Zyngier
  0 siblings, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2022-09-22 21:38 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, zhenyzha, shuah, kvmarm

On Thu, Sep 22, 2022 at 7:02 PM Marc Zyngier <maz@kernel.org> wrote:
> To make sure that all the writes to the log marking the entries
> as being in need of reset are observed in order, use a
> smp_store_release() when updating the log entry flags.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

You also need a load-acquire on the load of gfn->flags in
dirty_gfn_is_dirtied. Otherwise reading cur->slot or cur->offset might
see a stale value.
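
Modelled with C11 atomics (a simplified struct; the selftest itself uses the
tools' smp_load_acquire() on the UAPI kvm_dirty_gfn), the suggested change
would look roughly like:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Simplified model of the selftest's view of a ring entry. */
struct dirty_gfn {
	_Atomic uint32_t flags;
	uint32_t slot;
	uint64_t offset;
};

#define DIRTY_GFN_F_DIRTY (1u << 0)

/*
 * Acquire on the flags load so that the cur->slot / cur->offset reads
 * performed after this check are ordered against the dirtying side's
 * writes, instead of possibly returning stale values.
 */
static inline int dirty_gfn_is_dirtied(struct dirty_gfn *gfn)
{
	return atomic_load_explicit(&gfn->flags, memory_order_acquire) ==
	       DIRTY_GFN_F_DIRTY;
}
```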

Paolo

> ---
>  tools/testing/selftests/kvm/dirty_log_test.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> index 9c883c94d478..3d29f4bf4f9c 100644
> --- a/tools/testing/selftests/kvm/dirty_log_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> @@ -17,6 +17,7 @@
>  #include <linux/bitmap.h>
>  #include <linux/bitops.h>
>  #include <linux/atomic.h>
> +#include <asm/barrier.h>
>
>  #include "kvm_util.h"
>  #include "test_util.h"
> @@ -284,7 +285,7 @@ static inline bool dirty_gfn_is_dirtied(struct kvm_dirty_gfn *gfn)
>
>  static inline void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn)
>  {
> -       gfn->flags = KVM_DIRTY_GFN_F_RESET;
> +       smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_RESET);
>  }
>
>  static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
> --
> 2.34.1
>


* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-22 17:01 ` [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option Marc Zyngier
@ 2022-09-22 21:48   ` Peter Xu
  2022-09-23  0:04     ` Gavin Shan
  2022-09-23 14:28     ` Marc Zyngier
  0 siblings, 2 replies; 28+ messages in thread
From: Peter Xu @ 2022-09-22 21:48 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Thu, Sep 22, 2022 at 06:01:29PM +0100, Marc Zyngier wrote:
> In order to differentiate between architectures that require no extra
> synchronisation when accessing the dirty ring and those that do,
> add a new capability (KVM_CAP_DIRTY_LOG_RING_ORDERED) that identifies
> the latter sort. TSO architectures can obviously advertise both, while
> relaxed architectures must only advertise the ORDERED version.
> 
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  include/linux/kvm_dirty_ring.h |  6 +++---
>  include/uapi/linux/kvm.h       |  1 +
>  virt/kvm/Kconfig               | 14 ++++++++++++++
>  virt/kvm/Makefile.kvm          |  2 +-
>  virt/kvm/kvm_main.c            | 11 +++++++++--
>  5 files changed, 28 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> index 906f899813dc..7a0c90ae9a3f 100644
> --- a/include/linux/kvm_dirty_ring.h
> +++ b/include/linux/kvm_dirty_ring.h
> @@ -27,7 +27,7 @@ struct kvm_dirty_ring {
>  	int index;
>  };
>  
> -#ifndef CONFIG_HAVE_KVM_DIRTY_RING
> +#ifndef CONFIG_HAVE_KVM_DIRTY_LOG

s/LOG/LOG_RING/ according to the commit message? Or the name seems too
generic.

Pure question to ask: is it required to have a new cap just for the
ordering?  IIUC if x86 was the only architecture supported before, it means all
released old kvm binaries are always safe even without the strict
orderings.  As long as we rework all the memory ordering bits before
declaring support of yet another arch, we're good.  Or am I wrong?

Thanks,

-- 
Peter Xu


* Re: [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state
  2022-09-22 21:38   ` Peter Xu
@ 2022-09-22 23:46     ` Gavin Shan
  2022-09-23 14:40       ` Marc Zyngier
  2022-09-23 14:19     ` Marc Zyngier
  1 sibling, 1 reply; 28+ messages in thread
From: Gavin Shan @ 2022-09-22 23:46 UTC (permalink / raw)
  To: Peter Xu, Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

Hi Peter,

On 9/23/22 7:38 AM, Peter Xu wrote:
> On Thu, Sep 22, 2022 at 06:01:28PM +0100, Marc Zyngier wrote:
>> The current implementation of the dirty ring has an implicit requirement
>> that stores to the dirty ring from userspace must be:
>>
>> - ordered with one another
>>
>> - visible from another CPU executing a ring reset
>>
>> While these implicit requirements work well for x86 (and any other
>> TSO-like architecture), they do not work for more relaxed architectures
>> such as arm64 where stores to different addresses can be freely
>> reordered, and loads from these addresses may not observe writes from
>> another CPU unless the required barriers (or acquire/release semantics)
>> are used.
>>
>> In order to start fixing this, upgrade the ring reset accesses:
>>
>> - the kvm_dirty_gfn_harvested() helper now uses acquire semantics
>>    so it is ordered after all previous writes, including that from
>>    userspace
>>
>> - the kvm_dirty_gfn_set_invalid() helper now uses release semantics
>>    so that the next_slot and next_offset reads don't drift past
>>    the entry invalidation
>>
>> This is only a partial fix as the userspace side also needs upgrading.
> 
> Paolo has one fix 4802bf910e ("KVM: dirty ring: add missing memory
> barrier", 2022-09-01) which has already landed.
> 
> I think the other one to reset it was lost too.  I just posted a patch.
> 
> https://lore.kernel.org/qemu-devel/20220922213522.68861-1-peterx@redhat.com/
> (link still not yet available so far, but should be)
> 
>>
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>   virt/kvm/dirty_ring.c | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
>> index f4c2a6eb1666..784bed80221d 100644
>> --- a/virt/kvm/dirty_ring.c
>> +++ b/virt/kvm/dirty_ring.c
>> @@ -79,12 +79,12 @@ static inline void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
>>   
>>   static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
>>   {
>> -	gfn->flags = KVM_DIRTY_GFN_F_DIRTY;
>> +	smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_DIRTY);
> 
> IIUC you meant kvm_dirty_gfn_set_invalid as the comment says?
> 
> kvm_dirty_gfn_set_dirtied() has been guarded by smp_wmb() and AFAICT that's
> already safe.  Otherwise looks good to me.
> 

If I'm understanding the full context, smp_store_release() also enforces
ordering on the store to 'gfn->flags' itself, which userspace needs for
synchronization.

>>   }
>>   
>>   static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
>>   {
>> -	return gfn->flags & KVM_DIRTY_GFN_F_RESET;
>> +	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
>>   }
>>   
>>   int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
>> -- 
>> 2.34.1
>>

Thanks,
Gavin

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-22 21:48   ` Peter Xu
@ 2022-09-23  0:04     ` Gavin Shan
  2022-09-23 14:28     ` Marc Zyngier
  1 sibling, 0 replies; 28+ messages in thread
From: Gavin Shan @ 2022-09-23  0:04 UTC (permalink / raw)
  To: Peter Xu, Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

Hi Peter and Marc,

On 9/23/22 7:48 AM, Peter Xu wrote:
> On Thu, Sep 22, 2022 at 06:01:29PM +0100, Marc Zyngier wrote:
>> In order to differentiate between architectures that require no extra
>> synchronisation when accessing the dirty ring and those that do,
>> add a new capability (KVM_CAP_DIRTY_LOG_RING_ORDERED) that identifies
>> the latter sort. TSO architectures can obviously advertise both, while
>> relaxed architectures must only advertise the ORDERED version.
>>
>> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>   include/linux/kvm_dirty_ring.h |  6 +++---
>>   include/uapi/linux/kvm.h       |  1 +
>>   virt/kvm/Kconfig               | 14 ++++++++++++++
>>   virt/kvm/Makefile.kvm          |  2 +-
>>   virt/kvm/kvm_main.c            | 11 +++++++++--
>>   5 files changed, 28 insertions(+), 6 deletions(-)
>>
>> diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
>> index 906f899813dc..7a0c90ae9a3f 100644
>> --- a/include/linux/kvm_dirty_ring.h
>> +++ b/include/linux/kvm_dirty_ring.h
>> @@ -27,7 +27,7 @@ struct kvm_dirty_ring {
>>   	int index;
>>   };
>>   
>> -#ifndef CONFIG_HAVE_KVM_DIRTY_RING
>> +#ifndef CONFIG_HAVE_KVM_DIRTY_LOG
> 
> s/LOG/LOG_RING/ according to the commit message? Or the name seems too
> generic.
> 
> Pure question to ask: is it required to have a new cap just for the
> ordering?  IIUC if x86 was the only supported anyway before, it means all
> released old kvm binaries are always safe even without the strict
> orderings.  As long as we rework all the memory ordering bits before
> declaring support of yet another arch, we're good.  Or am I wrong?
> 

I have the same questions. The name CONFIG_HAVE_KVM_DIRTY_LOG is too
generic, at least. I'm wondering why we even need the other two kernel
config options, HAVE_KVM_DIRTY_{RING, RING_ORDERED}.

- The ordering from smp_load_acquire()/smp_store_release() is unconditionally
   applied to kvm_dirty_gfn_set_dirtied() and kvm_dirty_gfn_harvested() in PATCH[1/6].
- Both kernel config options are enabled on x86 in PATCH[3/6]

This means we needn't differentiate strict/relaxed ordering via the extra
capability and kernel config options. If it makes sense, how about letting
userspace decide whether strict ordering is needed based on the architecture
(x86 vs arm64 for now)?


Thanks,
Gavin


* Re: [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state
  2022-09-22 21:38   ` Peter Xu
  2022-09-22 23:46     ` Gavin Shan
@ 2022-09-23 14:19     ` Marc Zyngier
  2022-09-23 14:22       ` Paolo Bonzini
  1 sibling, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-23 14:19 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Thu, 22 Sep 2022 22:38:33 +0100,
Peter Xu <peterx@redhat.com> wrote:
> 
> Marc,
> 
> On Thu, Sep 22, 2022 at 06:01:28PM +0100, Marc Zyngier wrote:
> > The current implementation of the dirty ring has an implicit requirement
> > that stores to the dirty ring from userspace must be:
> > 
> > - ordered with one another
> > 
> > - visible from another CPU executing a ring reset
> > 
> > While these implicit requirements work well for x86 (and any other
> > TSO-like architecture), they do not work for more relaxed architectures
> > such as arm64 where stores to different addresses can be freely
> > reordered, and loads from these addresses may not observe writes from
> > another CPU unless the required barriers (or acquire/release semantics)
> > are used.
> > 
> > In order to start fixing this, upgrade the ring reset accesses:
> > 
> > - the kvm_dirty_gfn_harvested() helper now uses acquire semantics
> >   so it is ordered after all previous writes, including that from
> >   userspace
> > 
> > - the kvm_dirty_gfn_set_invalid() helper now uses release semantics
> >   so that the next_slot and next_offset reads don't drift past
> >   the entry invalidation
> > 
> > This is only a partial fix as the userspace side also needs upgrading.
> 
> Paolo has one fix 4802bf910e ("KVM: dirty ring: add missing memory
> barrier", 2022-09-01) which has already landed.

What is this commit? It doesn't exist in the kernel as far as I can see.

> 
> I think the other one to reset it was lost too.  I just posted a patch.
> 
> https://lore.kernel.org/qemu-devel/20220922213522.68861-1-peterx@redhat.com/
> (link still not yet available so far, but should be)

That's a QEMU patch, right?

> 
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  virt/kvm/dirty_ring.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> > index f4c2a6eb1666..784bed80221d 100644
> > --- a/virt/kvm/dirty_ring.c
> > +++ b/virt/kvm/dirty_ring.c
> > @@ -79,12 +79,12 @@ static inline void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
> >  
> >  static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
> >  {
> > -	gfn->flags = KVM_DIRTY_GFN_F_DIRTY;
> > +	smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_DIRTY);
> 
> IIUC you meant kvm_dirty_gfn_set_invalid as the comment says?

Gah, you're right, I redid the patch at the last minute and messed it
up....

> kvm_dirty_gfn_set_dirtied() has been guarded by smp_wmb() and AFAICT that's
> already safe.  Otherwise looks good to me.

Indeed. Let me fix this.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state
  2022-09-23 14:19     ` Marc Zyngier
@ 2022-09-23 14:22       ` Paolo Bonzini
  0 siblings, 0 replies; 28+ messages in thread
From: Paolo Bonzini @ 2022-09-23 14:22 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, zhenyzha, shuah, kvmarm

On Fri, Sep 23, 2022 at 4:20 PM Marc Zyngier <maz@kernel.org> wrote:
> > > This is only a partial fix as the userspace side also need upgrading.
> >
> > Paolo has one fix 4802bf910e ("KVM: dirty ring: add missing memory
> > barrier", 2022-09-01) which has already landed.
>
> What is this commit? It doesn't exist in the kernel as far as I can see.

That's the load_acquire in QEMU, and the store_release part is in 7.2
as well (commit 52281c6d11, "KVM: use store-release to mark dirty
pages as harvested", 2022-09-18).

So all that QEMU is missing is the new capability.

Paolo


* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-22 21:48   ` Peter Xu
  2022-09-23  0:04     ` Gavin Shan
@ 2022-09-23 14:28     ` Marc Zyngier
  2022-09-23 18:26       ` Peter Xu
  1 sibling, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-23 14:28 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Thu, 22 Sep 2022 22:48:19 +0100,
Peter Xu <peterx@redhat.com> wrote:
> 
> On Thu, Sep 22, 2022 at 06:01:29PM +0100, Marc Zyngier wrote:
> > In order to differentiate between architectures that require no extra
> > synchronisation when accessing the dirty ring and those that do,
> > add a new capability (KVM_CAP_DIRTY_LOG_RING_ORDERED) that identifies
> > the latter sort. TSO architectures can obviously advertise both, while
> > relaxed architectures must only advertise the ORDERED version.
> > 
> > Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  include/linux/kvm_dirty_ring.h |  6 +++---
> >  include/uapi/linux/kvm.h       |  1 +
> >  virt/kvm/Kconfig               | 14 ++++++++++++++
> >  virt/kvm/Makefile.kvm          |  2 +-
> >  virt/kvm/kvm_main.c            | 11 +++++++++--
> >  5 files changed, 28 insertions(+), 6 deletions(-)
> > 
> > diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> > index 906f899813dc..7a0c90ae9a3f 100644
> > --- a/include/linux/kvm_dirty_ring.h
> > +++ b/include/linux/kvm_dirty_ring.h
> > @@ -27,7 +27,7 @@ struct kvm_dirty_ring {
> >  	int index;
> >  };
> >  
> > -#ifndef CONFIG_HAVE_KVM_DIRTY_RING
> > +#ifndef CONFIG_HAVE_KVM_DIRTY_LOG
> 
> s/LOG/LOG_RING/ according to the commit message? Or the name seems too
> generic.

The commit message talks about the capability, while the above is the
config option. If you find the names inappropriate, feel free to
suggest alternatives (for all I care, they could be called FOO, BAR
and BAZ).

> Pure question to ask: is it required to have a new cap just for the
> ordering?  IIUC if x86 was the only supported anyway before, it means all
> released old kvm binaries are always safe even without the strict
> orderings.  As long as we rework all the memory ordering bits before
> declaring support of yet another arch, we're good.  Or am I wrong?

Someone will show up with an old userspace which probes for the sole
existing capability, and things start failing subtly. It is quite
likely that the userspace code is built for all architectures, and we
want to make sure that userspace actively buys into the new ordering
requirements. A simple way to do this is to expose a new capability,
making the new requirement obvious. Architectures with relaxed
ordering semantics will only implement the new one, while x86 will
implement both.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state
  2022-09-22 23:46     ` Gavin Shan
@ 2022-09-23 14:40       ` Marc Zyngier
  0 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-09-23 14:40 UTC (permalink / raw)
  To: Gavin Shan
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Fri, 23 Sep 2022 00:46:58 +0100,
Gavin Shan <gshan@redhat.com> wrote:
> 
> Hi Peter,
> 
> On 9/23/22 7:38 AM, Peter Xu wrote:
> > On Thu, Sep 22, 2022 at 06:01:28PM +0100, Marc Zyngier wrote:
> >> The current implementation of the dirty ring has an implicit requirement
> >> that stores to the dirty ring from userspace must be:
> >> 
> >> - ordered with one another
> >> 
> >> - visible from another CPU executing a ring reset
> >> 
> >> While these implicit requirements work well for x86 (and any other
> >> TSO-like architecture), they do not work for more relaxed architectures
> >> such as arm64 where stores to different addresses can be freely
> >> reordered, and loads from these addresses may not observe writes from
> >> another CPU unless the required barriers (or acquire/release semantics)
> >> are used.
> >> 
> >> In order to start fixing this, upgrade the ring reset accesses:
> >> 
> >> - the kvm_dirty_gfn_harvested() helper now uses acquire semantics
> >>    so it is ordered after all previous writes, including that from
> >>    userspace
> >> 
> >> - the kvm_dirty_gfn_set_invalid() helper now uses release semantics
> >>    so that the next_slot and next_offset reads don't drift past
> >>    the entry invalidation
> >> 
> >> This is only a partial fix as the userspace side also needs upgrading.
> > 
> > Paolo has one fix 4802bf910e ("KVM: dirty ring: add missing memory
> > barrier", 2022-09-01) which has already landed.
> > 
> > I think the other one to reset it was lost too.  I just posted a patch.
> > 
> > https://lore.kernel.org/qemu-devel/20220922213522.68861-1-peterx@redhat.com/
> > (link still not yet available so far, but should be)
> > 
> >> 
> >> Signed-off-by: Marc Zyngier <maz@kernel.org>
> >> ---
> >>   virt/kvm/dirty_ring.c | 4 ++--
> >>   1 file changed, 2 insertions(+), 2 deletions(-)
> >> 
> >> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> >> index f4c2a6eb1666..784bed80221d 100644
> >> --- a/virt/kvm/dirty_ring.c
> >> +++ b/virt/kvm/dirty_ring.c
> >> @@ -79,12 +79,12 @@ static inline void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
> >> 
> >>   static inline void kvm_dirty_gfn_set_dirtied(struct kvm_dirty_gfn *gfn)
> >>   {
> >> -	gfn->flags = KVM_DIRTY_GFN_F_DIRTY;
> >> +	smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_DIRTY);
> > 
> > IIUC you meant kvm_dirty_gfn_set_invalid as the comment says?
> > 
> > kvm_dirty_gfn_set_dirtied() has been guarded by smp_wmb() and AFAICT that's
> > already safe.  Otherwise looks good to me.
> > 
> 
> If I'm understanding the full context, smp_store_release() also
> enforces ordering on the store to 'gfn->flags' itself, which user space
> needs for synchronization.

There are multiple things at play here:

- userspace needs a store-release when making the flags 'harvested',
  so that the kernel using a load-acquire can observe this write (and
  avoid the roach-motel effect of a non-acquire load)

- the kernel needs a store-release when making the flags 'invalid',
  preventing this write from occurring before the next_* fields have
  been sampled

On the ring production side, there is a heavy handed smp_wmb(), which
makes things pretty safe.

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH 5/6] KVM: selftests: dirty-log: Upgrade dirty_gfn_set_collected() to store-release
  2022-09-22 21:38   ` Paolo Bonzini
@ 2022-09-23 14:49     ` Marc Zyngier
  0 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2022-09-23 14:49 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, zhenyzha, shuah, kvmarm

On Thu, 22 Sep 2022 22:38:58 +0100,
Paolo Bonzini <pbonzini@redhat.com> wrote:
> 
> On Thu, Sep 22, 2022 at 7:02 PM Marc Zyngier <maz@kernel.org> wrote:
> > To make sure that all the writes to the log marking the entries
> > as being in need of reset are observed in order, use a
> > smp_store_release() when updating the log entry flags.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> 
> You also need a load-acquire on the load of gfn->flags in
> dirty_gfn_is_dirtied. Otherwise reading cur->slot or cur->offset might
> see a stale value.

Ah, indeed. smp_wmb() is implemented as DMB ISHST, which only orders
writes, and not loads against writes. Global barriers are just
confusing.

/me goes and repaint the stuff...

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-23 14:28     ` Marc Zyngier
@ 2022-09-23 18:26       ` Peter Xu
  2022-09-23 21:23         ` Paolo Bonzini
  2022-09-24  8:51         ` Marc Zyngier
  0 siblings, 2 replies; 28+ messages in thread
From: Peter Xu @ 2022-09-23 18:26 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Fri, Sep 23, 2022 at 03:28:34PM +0100, Marc Zyngier wrote:
> On Thu, 22 Sep 2022 22:48:19 +0100,
> Peter Xu <peterx@redhat.com> wrote:
> > 
> > On Thu, Sep 22, 2022 at 06:01:29PM +0100, Marc Zyngier wrote:
> > > In order to differentiate between architectures that require no extra
> > > synchronisation when accessing the dirty ring and those that do,
> > > add a new capability (KVM_CAP_DIRTY_LOG_RING_ORDERED) that identifies
> > > the latter sort. TSO architectures can obviously advertise both, while
> > > relaxed architectures must only advertise the ORDERED version.
> > > 
> > > Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  include/linux/kvm_dirty_ring.h |  6 +++---
> > >  include/uapi/linux/kvm.h       |  1 +
> > >  virt/kvm/Kconfig               | 14 ++++++++++++++
> > >  virt/kvm/Makefile.kvm          |  2 +-
> > >  virt/kvm/kvm_main.c            | 11 +++++++++--
> > >  5 files changed, 28 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> > > index 906f899813dc..7a0c90ae9a3f 100644
> > > --- a/include/linux/kvm_dirty_ring.h
> > > +++ b/include/linux/kvm_dirty_ring.h
> > > @@ -27,7 +27,7 @@ struct kvm_dirty_ring {
> > >  	int index;
> > >  };
> > >  
> > > -#ifndef CONFIG_HAVE_KVM_DIRTY_RING
> > > +#ifndef CONFIG_HAVE_KVM_DIRTY_LOG
> > 
> > s/LOG/LOG_RING/ according to the commit message? Or the name seems too
> > generic.
> 
> The commit message talks about the capability, while the above is the
> config option. If you find the names inappropriate, feel free to
> suggest alternatives (for all I care, they could be called FOO, BAR
> and BAZ).

The existing name from David looks better than the new one.. to me.

> 
> > Pure question to ask: is it required to have a new cap just for the
> > ordering?  IIUC if x86 was the only supported anyway before, it means all
> > released old kvm binaries are always safe even without the strict
> > orderings.  As long as we rework all the memory ordering bits before
> > declaring support of yet another arch, we're good.  Or am I wrong?
> 
> Someone will show up with an old userspace which probes for the sole
> existing capability, and things start failing subtly. It is quite
> likely that the userspace code is built for all architectures,

I didn't quite follow here.  Since the kvm/qemu dirty ring was only
supported on x86, I don't see the risk.

Assume we have the old binary.

If run on an old kernel, it'll work like before.

If run on a new kernel, the kernel will be stricter about memory barriers
but should still be compatible with the old behavior (not vice versa, so
I'd understand if we were losing ordering, but we're not..).

Any further elaboration would be greatly helpful.

Thanks,

> and we
> want to make sure that userspace actively buys into the new ordering
> requirements. A simple way to do this is to expose a new capability,
> making the new requirement obvious. Architectures with relaxed
> ordering semantics will only implement the new one, while x86 will
> implement both.

-- 
Peter Xu


* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-23 18:26       ` Peter Xu
@ 2022-09-23 21:23         ` Paolo Bonzini
  2022-09-23 22:34           ` Peter Xu
  2022-09-24  8:51         ` Marc Zyngier
  1 sibling, 1 reply; 28+ messages in thread
From: Paolo Bonzini @ 2022-09-23 21:23 UTC (permalink / raw)
  To: Peter Xu
  Cc: Catalin Marinas, kvm, Marc Zyngier, Andrew Jones, Will Deacon,
	Shan Gavin, Ben Gardon, David Matlack, Zhenyu Zhang, Shuah Khan,
	KVM ARM

Il ven 23 set 2022, 20:26 Peter Xu <peterx@redhat.com> ha scritto:
>
> > Someone will show up with an old userspace which probes for the sole
> > existing capability, and things start failing subtly. It is quite
> > likely that the userspace code is built for all architectures,
>
> I didn't quite follow here.  Since both kvm/qemu dirty ring was only
> supported on x86, I don't see the risk.

Say you run a new ARM kernel on old userspace, and the new kernel
advertises KVM_CAP_DIRTY_LOG_RING. Userspace will try to use the dirty
page ring buffer even though it lacks the memory barriers that were just
introduced in QEMU.

The new capability means "the dirty page ring buffer is supported and,
by the way, you're supposed to do everything right with respect to
ordering of loads and stores; you can't get away without it like you
could on x86".

Paolo

>
> Assuming we've the old binary.
>
> If to run on old kernel, it'll work like before.
>
> If to run on new kernel, the kernel will behave stricter on memory barriers
> but should still be compatible with the old behavior (not vice versa, so
> I'll understand if we're loosing the ordering, but we're not..).
>
> Any further elaboration would be greatly helpful.
>
> Thanks,
>
> > and we
> > want to make sure that userspace actively buys into the new ordering
> > requirements. A simple way to do this is to expose a new capability,
> > making the new requirement obvious. Architectures with relaxed
> > ordering semantics will only implement the new one, while x86 will
> > implement both.
>
> --
> Peter Xu
>


* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-23 21:23         ` Paolo Bonzini
@ 2022-09-23 22:34           ` Peter Xu
  0 siblings, 0 replies; 28+ messages in thread
From: Peter Xu @ 2022-09-23 22:34 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Catalin Marinas, kvm, Marc Zyngier, Andrew Jones, Will Deacon,
	Shan Gavin, Ben Gardon, David Matlack, Zhenyu Zhang, Shuah Khan,
	KVM ARM

On Fri, Sep 23, 2022 at 11:23:24PM +0200, Paolo Bonzini wrote:
> Il ven 23 set 2022, 20:26 Peter Xu <peterx@redhat.com> ha scritto:
> >
> > > Someone will show up with an old userspace which probes for the sole
> > > existing capability, and things start failing subtly. It is quite
> > > likely that the userspace code is built for all architectures,
> >
> > I didn't quite follow here.  Since both kvm/qemu dirty ring was only
> > supported on x86, I don't see the risk.
> 
> Say you run a new ARM kernel on old userspace, and the new kernel uses
> KVM_CAP_DIRTY_LOG_RING. Userspace will try to use the dirty page ring
> buffer even though it lacks the memory barriers that were just
> introduced in QEMU.
> 
> The new capability means "the dirty page ring buffer is supported and,
> by the way, you're supposed to do everything right with respect to
> ordering of loads and stores; you can't get away without it like you
> could on x86".

I understand now, thanks both.

-- 
Peter Xu


* Re: [PATCH 3/6] KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ORDERED
  2022-09-22 17:01 ` [PATCH 3/6] KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ORDERED Marc Zyngier
@ 2022-09-23 22:46   ` Peter Xu
  2022-09-24  8:47     ` Marc Zyngier
  0 siblings, 1 reply; 28+ messages in thread
From: Peter Xu @ 2022-09-23 22:46 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Thu, Sep 22, 2022 at 06:01:30PM +0100, Marc Zyngier wrote:
> Since x86 is TSO (give or take), allow it to advertise the new
> ORDERED version of the dirty ring capability. No other change is
> required for it.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/x86/kvm/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index e3cbd7706136..eb63bc31ed1d 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -29,6 +29,7 @@ config KVM
>  	select HAVE_KVM_PFNCACHE
>  	select HAVE_KVM_IRQFD
>  	select HAVE_KVM_DIRTY_RING
> +	select HAVE_KVM_DIRTY_RING_ORDERED
>  	select IRQ_BYPASS_MANAGER
>  	select HAVE_KVM_IRQ_BYPASS
>  	select HAVE_KVM_IRQ_ROUTING

Before patch 2-3, we only have HAVE_KVM_DIRTY_RING.

After that, we'll have:

HAVE_KVM_DIRTY_LOG
HAVE_KVM_DIRTY_RING
HAVE_KVM_DIRTY_RING_ORDERED

I'm wondering whether we can just keep using the old HAVE_KVM_DIRTY_RING,
and only declare the new KVM_CAP_DIRTY_LOG_RING_ORDERED after all the memory
barrier patches are merged (after patch 1).

IIUC it's a matter of whether any of the arches would like to support the
!ORDERED version of the dirty ring at all, but then IIUC we'd need to make
the memory barriers conditional too, or I'm not sure how it'd help.

Thanks,

-- 
Peter Xu


* Re: [PATCH 3/6] KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ORDERED
  2022-09-23 22:46   ` Peter Xu
@ 2022-09-24  8:47     ` Marc Zyngier
  2022-09-24 13:29       ` Peter Xu
  0 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-24  8:47 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Fri, 23 Sep 2022 23:46:40 +0100,
Peter Xu <peterx@redhat.com> wrote:
> 
> On Thu, Sep 22, 2022 at 06:01:30PM +0100, Marc Zyngier wrote:
> > Since x86 is TSO (give or take), allow it to advertise the new
> > ORDERED version of the dirty ring capability. No other change is
> > required for it.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/x86/kvm/Kconfig | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> > index e3cbd7706136..eb63bc31ed1d 100644
> > --- a/arch/x86/kvm/Kconfig
> > +++ b/arch/x86/kvm/Kconfig
> > @@ -29,6 +29,7 @@ config KVM
> >  	select HAVE_KVM_PFNCACHE
> >  	select HAVE_KVM_IRQFD
> >  	select HAVE_KVM_DIRTY_RING
> > +	select HAVE_KVM_DIRTY_RING_ORDERED
> >  	select IRQ_BYPASS_MANAGER
> >  	select HAVE_KVM_IRQ_BYPASS
> >  	select HAVE_KVM_IRQ_ROUTING
> 
> Before patch 2-3, we only have HAVE_KVM_DIRTY_RING.
> 
> After that, we'll have:
> 
> HAVE_KVM_DIRTY_LOG
> HAVE_KVM_DIRTY_RING
> HAVE_KVM_DIRTY_RING_ORDERED
> 
> I'm wondering whether we can just keep using the old HAVE_KVM_DIRTY_RING,
> but just declare a new KVM_CAP_DIRTY_LOG_RING_ORDERED only after all memory
> barrier patches merged (after patch 1).

The problem is to decide, on a per architecture basis, how to expose
the old property. I'm happy to key it on being x86 specific, but that
feels pretty gross, and totally unnecessary for strongly ordered
architectures (s390?).

> IIUC it's a matter of whether any of the arch would like to support
> !ORDERED version of dirty ring at all, but then IIUC we'll need to have the
> memory barriers conditional too or not sure how it'll help.

Memory barriers do not affect the semantics of userspace, while
the lack thereof does. On strongly ordered architectures,
acquire/release is usually "free", because it's implied by their
memory model. It is thus free for these to implement both versions of
the API.

So are we discussing the cost of a couple of (mostly invisible) config
options?

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-23 18:26       ` Peter Xu
  2022-09-23 21:23         ` Paolo Bonzini
@ 2022-09-24  8:51         ` Marc Zyngier
  2022-09-24 11:26           ` Marc Zyngier
  1 sibling, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-24  8:51 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Fri, 23 Sep 2022 19:26:18 +0100,
Peter Xu <peterx@redhat.com> wrote:
> 
> On Fri, Sep 23, 2022 at 03:28:34PM +0100, Marc Zyngier wrote:
> > On Thu, 22 Sep 2022 22:48:19 +0100,
> > Peter Xu <peterx@redhat.com> wrote:
> > > 
> > > On Thu, Sep 22, 2022 at 06:01:29PM +0100, Marc Zyngier wrote:
> > > > In order to differentiate between architectures that require no extra
> > > > synchronisation when accessing the dirty ring and those that do,
> > > > add a new capability (KVM_CAP_DIRTY_LOG_RING_ORDERED) that identifies
> > > > the latter sort. TSO architectures can obviously advertise both, while
> > > > relaxed architectures must only advertise the ORDERED version.
> > > > 
> > > > Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > > ---
> > > >  include/linux/kvm_dirty_ring.h |  6 +++---
> > > >  include/uapi/linux/kvm.h       |  1 +
> > > >  virt/kvm/Kconfig               | 14 ++++++++++++++
> > > >  virt/kvm/Makefile.kvm          |  2 +-
> > > >  virt/kvm/kvm_main.c            | 11 +++++++++--
> > > >  5 files changed, 28 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> > > > index 906f899813dc..7a0c90ae9a3f 100644
> > > > --- a/include/linux/kvm_dirty_ring.h
> > > > +++ b/include/linux/kvm_dirty_ring.h
> > > > @@ -27,7 +27,7 @@ struct kvm_dirty_ring {
> > > >  	int index;
> > > >  };
> > > >  
> > > > -#ifndef CONFIG_HAVE_KVM_DIRTY_RING
> > > > +#ifndef CONFIG_HAVE_KVM_DIRTY_LOG
> > > 
> > > s/LOG/LOG_RING/ according to the commit message? Or the name seems too
> > > generic.
> > 
> > The commit message talks about the capability, while the above is the
> > config option. If you find the names inappropriate, feel free to
> > suggest alternatives (for all I care, they could be called FOO, BAR
> > and BAZ).
> 
> The existing name from David looks better than the new one.. to me.

I'm happy to bikeshed, but please spell it out for me. If we follow
the current scheme, we need 3 configuration symbols (of which we
already have one), and 2 capabilities (of which we already have one).

Do you have any concrete proposal for those?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-24  8:51         ` Marc Zyngier
@ 2022-09-24 11:26           ` Marc Zyngier
  2022-09-24 13:22             ` Peter Xu
  0 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-24 11:26 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Sat, 24 Sep 2022 09:51:39 +0100,
Marc Zyngier <maz@kernel.org> wrote:
> 
> On Fri, 23 Sep 2022 19:26:18 +0100,
> Peter Xu <peterx@redhat.com> wrote:
> > 
> > On Fri, Sep 23, 2022 at 03:28:34PM +0100, Marc Zyngier wrote:
> > > On Thu, 22 Sep 2022 22:48:19 +0100,
> > > Peter Xu <peterx@redhat.com> wrote:
> > > > 
> > > > On Thu, Sep 22, 2022 at 06:01:29PM +0100, Marc Zyngier wrote:
> > > > > In order to differentiate between architectures that require no extra
> > > > > synchronisation when accessing the dirty ring and those that do,
> > > > > add a new capability (KVM_CAP_DIRTY_LOG_RING_ORDERED) that identifies
> > > > > the latter sort. TSO architectures can obviously advertise both, while
> > > > > relaxed architectures must only advertise the ORDERED version.
> > > > > 
> > > > > Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > > > ---
> > > > >  include/linux/kvm_dirty_ring.h |  6 +++---
> > > > >  include/uapi/linux/kvm.h       |  1 +
> > > > >  virt/kvm/Kconfig               | 14 ++++++++++++++
> > > > >  virt/kvm/Makefile.kvm          |  2 +-
> > > > >  virt/kvm/kvm_main.c            | 11 +++++++++--
> > > > >  5 files changed, 28 insertions(+), 6 deletions(-)
> > > > > 
> > > > > diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> > > > > index 906f899813dc..7a0c90ae9a3f 100644
> > > > > --- a/include/linux/kvm_dirty_ring.h
> > > > > +++ b/include/linux/kvm_dirty_ring.h
> > > > > @@ -27,7 +27,7 @@ struct kvm_dirty_ring {
> > > > >  	int index;
> > > > >  };
> > > > >  
> > > > > -#ifndef CONFIG_HAVE_KVM_DIRTY_RING
> > > > > +#ifndef CONFIG_HAVE_KVM_DIRTY_LOG
> > > > 
> > > > s/LOG/LOG_RING/ according to the commit message? Or the name seems too
> > > > generic.
> > > 
> > > The commit message talks about the capability, while the above is the
> > > config option. If you find the names inappropriate, feel free to
> > > suggest alternatives (for all I care, they could be called FOO, BAR
> > > and BAZ).
> > 
> > The existing name from David looks better than the new one.. to me.
> 
> I'm happy to bikeshed, but please spell it out for me. If we follow
> the current scheme, we need 3 configuration symbols (of which we
> already have one), and 2 capabilities (of which we already have one).
> 
> Do you have any concrete proposal for those?

In order to make some forward progress, I've reworked the series[1]
with another proposal for those:

Config symbols:

- HAVE_KVM_DIRTY_RING:
  * mostly the same meaning as today
  * not directly selected by any architecture
  * doesn't expose any capability on its own

- HAVE_KVM_DIRTY_RING_TSO:
  * only for strongly ordered architectures
  * selects HAVE_KVM_DIRTY_RING
  * exposes KVM_CAP_DIRTY_LOG_RING
  * selected by x86

- HAVE_KVM_DIRTY_RING_ACQ_REL:
  * selects HAVE_KVM_DIRTY_RING
  * exposes KVM_CAP_DIRTY_LOG_RING_ACQ_REL
  * selected by arm64 and x86

Capabilities:

- KVM_CAP_DIRTY_LOG_RING: the good old x86-specific stuff, advertised
  when HAVE_KVM_DIRTY_RING_TSO is selected

- KVM_CAP_DIRTY_LOG_RING_ACQ_REL: the new acquire/release semantics,
  advertised when HAVE_KVM_DIRTY_RING_ACQ_REL is selected

This significantly reduces the churn and makes things slightly more
explicit.

Thoughts?

	M.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=kvm-arm64/dirty-log-ordered-bikeshed
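
For reference, the select structure described above could be sketched in
virt/kvm/Kconfig roughly as follows (a sketch reconstructed from the bullet
points, not the actual patch):

```kconfig
config HAVE_KVM_DIRTY_RING
       bool
       # Core dirty-ring support.  Never selected directly by an
       # architecture, and exposes no capability on its own.

config HAVE_KVM_DIRTY_RING_TSO
       bool
       select HAVE_KVM_DIRTY_RING
       # Strongly ordered (TSO) architectures only; exposes
       # KVM_CAP_DIRTY_LOG_RING.  Selected by x86.

config HAVE_KVM_DIRTY_RING_ACQ_REL
       bool
       select HAVE_KVM_DIRTY_RING
       # Exposes KVM_CAP_DIRTY_LOG_RING_ACQ_REL.  Selected by
       # arm64 and x86.
```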

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-24 11:26           ` Marc Zyngier
@ 2022-09-24 13:22             ` Peter Xu
  2022-09-24 18:57               ` Marc Zyngier
  0 siblings, 1 reply; 28+ messages in thread
From: Peter Xu @ 2022-09-24 13:22 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Sat, Sep 24, 2022 at 12:26:53PM +0100, Marc Zyngier wrote:
> On Sat, 24 Sep 2022 09:51:39 +0100,
> Marc Zyngier <maz@kernel.org> wrote:
> > 
> > On Fri, 23 Sep 2022 19:26:18 +0100,
> > Peter Xu <peterx@redhat.com> wrote:
> > > 
> > > On Fri, Sep 23, 2022 at 03:28:34PM +0100, Marc Zyngier wrote:
> > > > On Thu, 22 Sep 2022 22:48:19 +0100,
> > > > Peter Xu <peterx@redhat.com> wrote:
> > > > > 
> > > > > On Thu, Sep 22, 2022 at 06:01:29PM +0100, Marc Zyngier wrote:
> > > > > > In order to differentiate between architectures that require no extra
> > > > > > synchronisation when accessing the dirty ring and those that do,
> > > > > > add a new capability (KVM_CAP_DIRTY_LOG_RING_ORDERED) that identifies
> > > > > > the latter sort. TSO architectures can obviously advertise both, while
> > > > > > relaxed architectures must only advertise the ORDERED version.
> > > > > > 
> > > > > > Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > > > > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > > > > ---
> > > > > >  include/linux/kvm_dirty_ring.h |  6 +++---
> > > > > >  include/uapi/linux/kvm.h       |  1 +
> > > > > >  virt/kvm/Kconfig               | 14 ++++++++++++++
> > > > > >  virt/kvm/Makefile.kvm          |  2 +-
> > > > > >  virt/kvm/kvm_main.c            | 11 +++++++++--
> > > > > >  5 files changed, 28 insertions(+), 6 deletions(-)
> > > > > > 
> > > > > > diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> > > > > > index 906f899813dc..7a0c90ae9a3f 100644
> > > > > > --- a/include/linux/kvm_dirty_ring.h
> > > > > > +++ b/include/linux/kvm_dirty_ring.h
> > > > > > @@ -27,7 +27,7 @@ struct kvm_dirty_ring {
> > > > > >  	int index;
> > > > > >  };
> > > > > >  
> > > > > > -#ifndef CONFIG_HAVE_KVM_DIRTY_RING
> > > > > > +#ifndef CONFIG_HAVE_KVM_DIRTY_LOG
> > > > > 
> > > > > s/LOG/LOG_RING/ according to the commit message? Or the name seems too
> > > > > generic.
> > > > 
> > > > The commit message talks about the capability, while the above is the
> > > > config option. If you find the names inappropriate, feel free to
> > > > suggest alternatives (for all I care, they could be called FOO, BAR
> > > > and BAZ).
> > > 
> > > The existing name from David looks better than the new one.. to me.
> > 
> > I'm happy to bikeshed, but please spell it out for me. If we follow
> > the current scheme, we need 3 configuration symbols (of which we
> > already have one), and 2 capabilities (of which we already have one).

I hope it's not bikeshedding.  I normally don't comment on namings at all
because many of them can be "bikeshedding" to me.  But this one is so
special because it directly collides with KVM_GET_DIRTY_LOG, which is the
other method of dirty tracking.

> > 
> > Do you have any concrete proposal for those?
> 
> In order to make some forward progress, I've reworked the series[1]
> with another proposal for those:
> 
> Config symbols:
> 
> - HAVE_KVM_DIRTY_RING:
>   * mostly the same meaning as today
>   * not directly selected by any architecture
>   * doesn't expose any capability on its own
> 
> - HAVE_KVM_DIRTY_RING_TSO:
>   * only for strongly ordered architectures
>   * selects HAVE_KVM_DIRTY_RING
>   * exposes KVM_CAP_DIRTY_LOG_RING
>   * selected by x86
> 
> - HAVE_KVM_DIRTY_RING_ACQ_REL:
>   * selects HAVE_KVM_DIRTY_RING
>   * exposes KVM_CAP_DIRTY_LOG_RING_ACQ_REL
>   * selected by arm64 and x86
> 
> Capabilities:
> 
> - KVM_CAP_DIRTY_LOG_RING: the good old x86-specific stuff, advertised
>   when HAVE_KVM_DIRTY_RING_TSO is selected
> 
> - KVM_CAP_DIRTY_LOG_RING_ACQ_REL: the new acquire/release semantics,
>   advertised when HAVE_KVM_DIRTY_RING_ACQ_REL is selected
> 
> This significantly reduces the churn and makes things slightly more
> explicit.

This looks good to me, thanks.

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 3/6] KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ORDERED
  2022-09-24  8:47     ` Marc Zyngier
@ 2022-09-24 13:29       ` Peter Xu
  0 siblings, 0 replies; 28+ messages in thread
From: Peter Xu @ 2022-09-24 13:29 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Sat, Sep 24, 2022 at 09:47:59AM +0100, Marc Zyngier wrote:
> On Fri, 23 Sep 2022 23:46:40 +0100,
> Peter Xu <peterx@redhat.com> wrote:
> > 
> > On Thu, Sep 22, 2022 at 06:01:30PM +0100, Marc Zyngier wrote:
> > > Since x86 is TSO (give or take), allow it to advertise the new
> > > ORDERED version of the dirty ring capability. No other change is
> > > required for it.
> > > 
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  arch/x86/kvm/Kconfig | 1 +
> > >  1 file changed, 1 insertion(+)
> > > 
> > > diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> > > index e3cbd7706136..eb63bc31ed1d 100644
> > > --- a/arch/x86/kvm/Kconfig
> > > +++ b/arch/x86/kvm/Kconfig
> > > @@ -29,6 +29,7 @@ config KVM
> > >  	select HAVE_KVM_PFNCACHE
> > >  	select HAVE_KVM_IRQFD
> > >  	select HAVE_KVM_DIRTY_RING
> > > +	select HAVE_KVM_DIRTY_RING_ORDERED
> > >  	select IRQ_BYPASS_MANAGER
> > >  	select HAVE_KVM_IRQ_BYPASS
> > >  	select HAVE_KVM_IRQ_ROUTING
> > 
> > Before patch 2-3, we only have HAVE_KVM_DIRTY_RING.
> > 
> > After that, we'll have:
> > 
> > HAVE_KVM_DIRTY_LOG
> > HAVE_KVM_DIRTY_RING
> > HAVE_KVM_DIRTY_RING_ORDERED
> > 
> > I'm wondering whether we can keep using the old HAVE_KVM_DIRTY_RING,
> > but declare a new KVM_CAP_DIRTY_LOG_RING_ORDERED only after all memory
> > barrier patches are merged (after patch 1).
> 
> The problem is to decide, on a per-architecture basis, how to expose
> the old property. I'm happy to key it on being x86 specific, but that
> feels pretty gross, and totally unnecessary for strongly ordered
> architectures (s390?).
> 
> > IIUC it's a matter of whether any of the arches would like to support
> > the !ORDERED version of the dirty ring at all, but then IIUC we'll need
> > to make the memory barriers conditional too, or I'm not sure how it'll help.
> 
> Memory barriers do not affect the semantics seen by userspace, while
> the lack thereof does. On strongly ordered architectures,
> acquire/release is usually "free", because it is implied by their
> memory model. It is thus free for these architectures to implement
> both versions of the API.

Right, that's why I thought it wouldn't help. Now I see what you meant:
without the three config options we would need an x86 ifdef, which may not
be as clean as this approach. Thanks for explaining.
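
The acquire/release handoff the series describes can be illustrated with a
small C11 sketch. The structure and names below are illustrative, not KVM's
actual code: the producer (the kernel) publishes the payload before
release-storing the flag, and the consumer (userspace) acquire-loads the
flag before touching the payload:

```c
#include <stdatomic.h>
#include <stdint.h>

struct dirty_slot {
    uint64_t gfn;          /* payload: written before the flag */
    atomic_uint flags;     /* 0 = free, 1 = dirty */
};

/* Producer side: publish the payload, then release-store the flag so
 * that a consumer which acquire-loads flags == 1 also sees gfn. */
void mark_dirty(struct dirty_slot *s, uint64_t gfn)
{
    s->gfn = gfn;
    atomic_store_explicit(&s->flags, 1, memory_order_release);
}

/* Consumer side: acquire-load the flag first; only read the payload
 * once the flag is observed.  Returns 0 if nothing to collect. */
uint64_t collect(struct dirty_slot *s)
{
    if (atomic_load_explicit(&s->flags, memory_order_acquire) != 1)
        return 0;
    return s->gfn;
}
```

On x86 (TSO) the plain loads and stores already behave this way, which is
why the implicit-ordering capability remains valid there; on arm64 the
explicit acquire/release pair is what makes the handoff correct.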

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-24 13:22             ` Peter Xu
@ 2022-09-24 18:57               ` Marc Zyngier
  2022-09-25 23:17                 ` Gavin Shan
  0 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2022-09-24 18:57 UTC (permalink / raw)
  To: Peter Xu
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Sat, 24 Sep 2022 14:22:32 +0100,
Peter Xu <peterx@redhat.com> wrote:
> 
> On Sat, Sep 24, 2022 at 12:26:53PM +0100, Marc Zyngier wrote:
> > On Sat, 24 Sep 2022 09:51:39 +0100,
> > Marc Zyngier <maz@kernel.org> wrote:
> > > 
> > > I'm happy to bikeshed, but please spell it out for me. If we follow
> > > the current scheme, we need 3 configuration symbols (of which we
> > > already have one), and 2 capabilities (of which we already have one).
> 
> I hope it's not bikeshedding.  I normally don't comment on namings at all
> because many of them can be "bikeshedding" to me.  But this one is so
> special because it directly collides with KVM_GET_DIRTY_LOG, which is the
> other method of dirty tracking.

Fair enough. I'm notoriously bad at sticking a name to things, so I'm
always happy to receive suggestions.

> 
> > > 
> > > Do you have any concrete proposal for those?
> > 
> > In order to make some forward progress, I've reworked the series[1]
> > with another proposal for those:
> > 
> > Config symbols:
> > 
> > - HAVE_KVM_DIRTY_RING:
> >   * mostly the same meaning as today
> >   * not directly selected by any architecture
> >   * doesn't expose any capability on its own
> > 
> > - HAVE_KVM_DIRTY_RING_TSO:
> >   * only for strongly ordered architectures
> >   * selects HAVE_KVM_DIRTY_RING
> >   * exposes KVM_CAP_DIRTY_LOG_RING
> >   * selected by x86
> > 
> > - HAVE_KVM_DIRTY_RING_ACQ_REL:
> >   * selects HAVE_KVM_DIRTY_RING
> >   * exposes KVM_CAP_DIRTY_LOG_RING_ACQ_REL
> >   * selected by arm64 and x86
> > 
> > Capabilities:
> > 
> > - KVM_CAP_DIRTY_LOG_RING: the good old x86-specific stuff, advertised
> >   when HAVE_KVM_DIRTY_RING_TSO is selected
> > 
> > - KVM_CAP_DIRTY_LOG_RING_ACQ_REL: the new acquire/release semantics,
> >   advertised when HAVE_KVM_DIRTY_RING_ACQ_REL is selected
> > 
> > This significantly reduces the churn and makes things slightly more
> > explicit.
> 
> This looks good to me, thanks.

OK, thanks for having a quick look. I'll repost this shortly, after
I'm done reviewing Gavin's series.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option
  2022-09-24 18:57               ` Marc Zyngier
@ 2022-09-25 23:17                 ` Gavin Shan
  0 siblings, 0 replies; 28+ messages in thread
From: Gavin Shan @ 2022-09-25 23:17 UTC (permalink / raw)
  To: Marc Zyngier, Peter Xu
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

Hi Marc,

On 9/25/22 4:57 AM, Marc Zyngier wrote:
> On Sat, 24 Sep 2022 14:22:32 +0100,
> Peter Xu <peterx@redhat.com> wrote:
>>
>> On Sat, Sep 24, 2022 at 12:26:53PM +0100, Marc Zyngier wrote:
>>> On Sat, 24 Sep 2022 09:51:39 +0100,
>>> Marc Zyngier <maz@kernel.org> wrote:
>>>>
>>>> I'm happy to bikeshed, but please spell it out for me. If we follow
>>>> the current scheme, we need 3 configuration symbols (of which we
>>>> already have one), and 2 capabilities (of which we already have one).
>>
>> I hope it's not bikeshedding.  I normally don't comment on namings at all
>> because many of them can be "bikeshedding" to me.  But this one is so
>> special because it directly collides with KVM_GET_DIRTY_LOG, which is the
>> other method of dirty tracking.
> 
> Fair enough. I'm notoriously bad at sticking a name to things, so I'm
> always happy to receive suggestions.
> 
>>
>>>>
>>>> Do you have any concrete proposal for those?
>>>
>>> In order to make some forward progress, I've reworked the series[1]
>>> with another proposal for those:
>>>
>>> Config symbols:
>>>
>>> - HAVE_KVM_DIRTY_RING:
>>>    * mostly the same meaning as today
>>>    * not directly selected by any architecture
>>>    * doesn't expose any capability on its own
>>>
>>> - HAVE_KVM_DIRTY_RING_TSO:
>>>    * only for strongly ordered architectures
>>>    * selects HAVE_KVM_DIRTY_RING
>>>    * exposes KVM_CAP_DIRTY_LOG_RING
>>>    * selected by x86
>>>
>>> - HAVE_KVM_DIRTY_RING_ACQ_REL:
>>>    * selects HAVE_KVM_DIRTY_RING
>>>    * exposes KVM_CAP_DIRTY_LOG_RING_ACQ_REL
>>>    * selected by arm64 and x86
>>>
>>> Capabilities:
>>>
>>> - KVM_CAP_DIRTY_LOG_RING: the good old x86-specific stuff, advertised
>>>    when HAVE_KVM_DIRTY_RING_TSO is selected
>>>
>>> - KVM_CAP_DIRTY_LOG_RING_ACQ_REL: the new acquire/release semantics,
>>>    advertised when HAVE_KVM_DIRTY_RING_ACQ_REL is selected
>>>
>>> This significantly reduces the churn and makes things slightly more
>>> explicit.
>>
>> This looks good to me, thanks.
> 
> OK, thanks for having a quick look. I'll repost this shortly, after
> I'm done reviewing Gavin's series.
> 

The config options and capabilities look good to me as well. I will
post my v4 series, which will be rebased on your v2 series.

Thanks,
Gavin
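
The selftest change from the cover letter (patch 6) amounts to preferring
the acquire/release capability whenever the kernel advertises it, falling
back to the TSO-only one. A minimal sketch of that selection logic, with
illustrative constants and a hypothetical helper name standing in for the
real KVM_CAP_* values from <linux/kvm.h>:

```c
#include <stdbool.h>

/* Illustrative stand-ins; the real values live in <linux/kvm.h>. */
enum {
    CAP_NONE = 0,
    CAP_DIRTY_LOG_RING = 1,          /* implicit ordering, x86-style */
    CAP_DIRTY_LOG_RING_ACQ_REL = 2,  /* explicit acquire/release */
};

/* Prefer the acquire/release flavour, which is correct on every
 * architecture; fall back to the TSO-only capability, else report
 * no dirty-ring support at all. */
int pick_dirty_ring_cap(bool has_acq_rel, bool has_tso)
{
    if (has_acq_rel)
        return CAP_DIRTY_LOG_RING_ACQ_REL;
    if (has_tso)
        return CAP_DIRTY_LOG_RING;
    return CAP_NONE;
}
```

In real userspace the two booleans would come from KVM_CHECK_EXTENSION on
the VM file descriptor.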


^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2022-09-25 23:18 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-22 17:01 [PATCH 0/6] KVM: Fix dirty-ring ordering on weakly ordered architectures Marc Zyngier
2022-09-22 17:01 ` [PATCH 1/6] KVM: Use acquire/release semantics when accessing dirty ring GFN state Marc Zyngier
2022-09-22 21:38   ` Peter Xu
2022-09-22 23:46     ` Gavin Shan
2022-09-23 14:40       ` Marc Zyngier
2022-09-23 14:19     ` Marc Zyngier
2022-09-23 14:22       ` Paolo Bonzini
2022-09-22 17:01 ` [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option Marc Zyngier
2022-09-22 21:48   ` Peter Xu
2022-09-23  0:04     ` Gavin Shan
2022-09-23 14:28     ` Marc Zyngier
2022-09-23 18:26       ` Peter Xu
2022-09-23 21:23         ` Paolo Bonzini
2022-09-23 22:34           ` Peter Xu
2022-09-24  8:51         ` Marc Zyngier
2022-09-24 11:26           ` Marc Zyngier
2022-09-24 13:22             ` Peter Xu
2022-09-24 18:57               ` Marc Zyngier
2022-09-25 23:17                 ` Gavin Shan
2022-09-22 17:01 ` [PATCH 3/6] KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ORDERED Marc Zyngier
2022-09-23 22:46   ` Peter Xu
2022-09-24  8:47     ` Marc Zyngier
2022-09-24 13:29       ` Peter Xu
2022-09-22 17:01 ` [PATCH 4/6] KVM: Document weakly ordered architecture requirements for dirty ring Marc Zyngier
2022-09-22 17:01 ` [PATCH 5/6] KVM: selftests: dirty-log: Upgrade dirty_gfn_set_collected() to store-release Marc Zyngier
2022-09-22 21:38   ` Paolo Bonzini
2022-09-23 14:49     ` Marc Zyngier
2022-09-22 17:01 ` [PATCH 6/6] KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ORDERED if available Marc Zyngier
