kvmarm.lists.cs.columbia.edu archive mirror
* [PATCH v3 0/6] KVM: arm64: Enable ring-based dirty memory tracking
@ 2022-09-22  0:32 Gavin Shan
  2022-09-22  0:32 ` [PATCH v3 1/6] KVM: x86: Introduce KVM_REQ_RING_SOFT_FULL Gavin Shan
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Gavin Shan @ 2022-09-22  0:32 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, maz, bgardon, andrew.jones, will, shan.gavin,
	catalin.marinas, dmatlack, pbonzini, zhenyzha, shuah

This series enables ring-based dirty memory tracking for ARM64. The
feature has been available and enabled on x86 for a while. It is
beneficial when the number of dirty pages is small, as in a
checkpointing system or live migration scenario. More details can be
found in fb04a1eddb1a ("KVM: X86: Implement ring-based dirty memory
tracking").

The generic part is already comprehensive, meaning there isn't much
work needed to extend it to ARM64.

  PATCH[1]   introduces KVM_REQ_RING_SOFT_FULL for x86
  PATCH[2]   moves declaration of kvm_cpu_dirty_log_size()
  PATCH[3]   enables the feature on ARM64
  PATCH[4-6] improve kvm/selftests/dirty_log_test

v2: https://lore.kernel.org/lkml/YyiV%2Fl7O23aw5aaO@xz-m1.local/T/
v1: https://lore.kernel.org/lkml/20220819005601.198436-1-gshan@redhat.com

Testing
=======
(1) kvm/selftests/dirty_log_test
(2) Live migration by QEMU
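
For reference, the selftest can be driven in dirty-ring mode as below
(these invocations are taken verbatim from the logs quoted later in
this series; '-m' selects the guest mode and '-c' the ring size):

  # ./dirty_log_test -M dirty-ring
  # ./dirty_log_test -M dirty-ring -m 7
  # ./dirty_log_test -M dirty-ring -c 65536 -m 5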

Changelog
=========
v3:
  * Check KVM_REQ_RING_SOFT_FULL inside kvm_request_pending()  (Peter)
  * Move declaration of kvm_cpu_dirty_log_size()               (kernel test robot)
v2:
  * Introduce KVM_REQ_RING_SOFT_FULL                           (Marc)
  * Changelog improvement                                      (Marc)
  * Fix dirty_log_test without knowing host page size          (Drew)

Gavin Shan (6):
  KVM: x86: Introduce KVM_REQ_RING_SOFT_FULL
  KVM: x86: Move declaration of kvm_cpu_dirty_log_size() to
    kvm_dirty_ring.h
  KVM: arm64: Enable ring-based dirty memory tracking
  KVM: selftests: Use host page size to map ring buffer in
    dirty_log_test
  KVM: selftests: Clear dirty ring states between two modes in
    dirty_log_test
  KVM: selftests: Automate choosing dirty ring size in dirty_log_test

 Documentation/virt/kvm/api.rst               |  2 +-
 arch/arm64/include/uapi/asm/kvm.h            |  1 +
 arch/arm64/kvm/Kconfig                       |  1 +
 arch/arm64/kvm/arm.c                         |  8 +++
 arch/x86/include/asm/kvm_host.h              |  2 -
 arch/x86/kvm/mmu/mmu.c                       |  2 +
 arch/x86/kvm/x86.c                           | 19 +++----
 include/linux/kvm_dirty_ring.h               |  1 +
 include/linux/kvm_host.h                     |  1 +
 tools/testing/selftests/kvm/dirty_log_test.c | 53 ++++++++++++++------
 tools/testing/selftests/kvm/lib/kvm_util.c   |  2 +-
 virt/kvm/dirty_ring.c                        |  4 ++
 12 files changed, 69 insertions(+), 27 deletions(-)

-- 
2.23.0


* [PATCH v3 1/6] KVM: x86: Introduce KVM_REQ_RING_SOFT_FULL
  2022-09-22  0:32 [PATCH v3 0/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
@ 2022-09-22  0:32 ` Gavin Shan
  2022-09-22  0:32 ` [PATCH v3 2/6] KVM: x86: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h Gavin Shan
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-09-22  0:32 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, maz, bgardon, andrew.jones, will, shan.gavin,
	catalin.marinas, dmatlack, pbonzini, zhenyzha, shuah

This adds KVM_REQ_RING_SOFT_FULL, which is raised when the dirty ring
of a VCPU becomes soft-full in kvm_dirty_ring_push(). The VCPU is
forced to exit to userspace when the request is pending and its dirty
ring is still soft-full at guest entry.
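
The resulting lifecycle, sketched as a pseudo-flow of the hunks below
(not new code, just the interaction spelled out):

  vcpu dirties a page
    -> kvm_dirty_ring_push() publishes an entry
    -> ring crosses the soft limit: kvm_make_request(KVM_REQ_RING_SOFT_FULL)
  next vcpu_enter_guest()
    -> kvm_check_request() consumes the bit; ring is still soft-full
    -> request is raised again, KVM_EXIT_DIRTY_RING_FULL goes to userspace
  userspace harvests entries and invokes KVM_RESET_DIRTY_RINGS
    -> ring is no longer soft-full; the re-raised request is consumed
       without forcing another exit, and the vcpu enters the guest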

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/x86/kvm/x86.c       | 19 ++++++++++---------
 include/linux/kvm_host.h |  1 +
 virt/kvm/dirty_ring.c    |  4 ++++
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 43a6a7efc6ec..18a4da71989e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10264,16 +10264,17 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	bool req_immediate_exit = false;
 
-	/* Forbid vmenter if vcpu dirty ring is soft-full */
-	if (unlikely(vcpu->kvm->dirty_ring_size &&
-		     kvm_dirty_ring_soft_full(&vcpu->dirty_ring))) {
-		vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
-		trace_kvm_dirty_ring_exit(vcpu);
-		r = 0;
-		goto out;
-	}
-
 	if (kvm_request_pending(vcpu)) {
+		/* Forbid vmenter if vcpu dirty ring is soft-full */
+		if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
+		    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
+			kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
+			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
+			trace_kvm_dirty_ring_exit(vcpu);
+			r = 0;
+			goto out;
+		}
+
 		if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
 			r = -EIO;
 			goto out;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f4519d3689e1..53fa3134fee0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -157,6 +157,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_VM_DEAD           (1 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_UNBLOCK           2
 #define KVM_REQ_UNHALT            3
+#define KVM_REQ_RING_SOFT_FULL    4
 #define KVM_REQUEST_ARCH_BASE     8
 
 /*
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index f4c2a6eb1666..f0e49937bc9e 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -149,6 +149,7 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
 
 void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset)
 {
+	struct kvm_vcpu *vcpu = container_of(ring, struct kvm_vcpu, dirty_ring);
 	struct kvm_dirty_gfn *entry;
 
 	/* It should never get full */
@@ -166,6 +167,9 @@ void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset)
 	kvm_dirty_gfn_set_dirtied(entry);
 	ring->dirty_index++;
 	trace_kvm_dirty_ring_push(ring, slot, offset);
+
+	if (kvm_dirty_ring_soft_full(ring))
+		kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
 }
 
 struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, u32 offset)
-- 
2.23.0


* [PATCH v3 2/6] KVM: x86: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h
  2022-09-22  0:32 [PATCH v3 0/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
  2022-09-22  0:32 ` [PATCH v3 1/6] KVM: x86: Introduce KVM_REQ_RING_SOFT_FULL Gavin Shan
@ 2022-09-22  0:32 ` Gavin Shan
  2022-09-24 19:12   ` Marc Zyngier
  2022-09-22  0:32 ` [PATCH v3 3/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Gavin Shan @ 2022-09-22  0:32 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, maz, bgardon, andrew.jones, will, shan.gavin,
	catalin.marinas, dmatlack, pbonzini, zhenyzha, shuah

Not all architectures need to override the function; ARM64, for one,
does not. Move its declaration to kvm_dirty_ring.h to avoid the
following compile warning on ARM64 when the feature is enabled.

  arch/arm64/kvm/../../../virt/kvm/dirty_ring.c:14:12:        \
  warning: no previous prototype for 'kvm_cpu_dirty_log_size' \
  [-Wmissing-prototypes]                                      \
  int __weak kvm_cpu_dirty_log_size(void)

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 2 --
 arch/x86/kvm/mmu/mmu.c          | 2 ++
 include/linux/kvm_dirty_ring.h  | 1 +
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c96c43c313a..4c0fd517282b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2082,8 +2082,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
 #define GET_SMSTATE(type, buf, offset)		\
 	(*(type *)((buf) + (offset) - 0x7e00))
 
-int kvm_cpu_dirty_log_size(void);
-
 int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages);
 
 #define KVM_CLOCK_VALID_FLAGS						\
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e418ef3ecfcb..b3eb6a3627ec 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1349,10 +1349,12 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
 }
 
+#ifdef CONFIG_HAVE_KVM_DIRTY_RING
 int kvm_cpu_dirty_log_size(void)
 {
 	return kvm_x86_ops.cpu_dirty_log_size;
 }
+#endif
 
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn,
diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
index 906f899813dc..8c6755981c9b 100644
--- a/include/linux/kvm_dirty_ring.h
+++ b/include/linux/kvm_dirty_ring.h
@@ -71,6 +71,7 @@ static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
 
 #else /* CONFIG_HAVE_KVM_DIRTY_RING */
 
+int kvm_cpu_dirty_log_size(void);
 u32 kvm_dirty_ring_get_rsvd_entries(void);
 int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size);
 
-- 
2.23.0


* [PATCH v3 3/6] KVM: arm64: Enable ring-based dirty memory tracking
  2022-09-22  0:32 [PATCH v3 0/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
  2022-09-22  0:32 ` [PATCH v3 1/6] KVM: x86: Introduce KVM_REQ_RING_SOFT_FULL Gavin Shan
  2022-09-22  0:32 ` [PATCH v3 2/6] KVM: x86: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h Gavin Shan
@ 2022-09-22  0:32 ` Gavin Shan
  2022-09-24 20:27   ` Marc Zyngier
  2022-09-22  0:32 ` [PATCH v3 4/6] KVM: selftests: Use host page size to map ring buffer in dirty_log_test Gavin Shan
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Gavin Shan @ 2022-09-22  0:32 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, maz, bgardon, andrew.jones, will, shan.gavin,
	catalin.marinas, dmatlack, pbonzini, zhenyzha, shuah

This enables ring-based dirty memory tracking on ARM64. The feature
is configured by CONFIG_HAVE_KVM_DIRTY_RING and detected and enabled
through KVM_CAP_DIRTY_LOG_RING. A ring buffer is created for every
VCPU when the feature is enabled, and each entry in the ring buffer
is described by 'struct kvm_dirty_gfn'.

A ring buffer entry is pushed when a page becomes dirty on the host,
and pulled by userspace after the ring buffer is mapped at physical
page offset KVM_DIRTY_LOG_PAGE_OFFSET. A VCPU is forced to exit if
its ring buffer becomes soft-full. The ring buffer can also be reset
with the KVM_RESET_DIRTY_RINGS ioctl to release the pulled entries.
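
For context, below is a minimal sketch of the userspace side of this
flow. It assumes a ring of 'ring_size' bytes has already been enabled
via KVM_CAP_DIRTY_LOG_RING; mark_page_dirty_in_bitmap() is a
hypothetical consumer, error handling is omitted, and everything else
is the dirty-ring UAPI from linux/kvm.h.

  #include <linux/kvm.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Hypothetical consumer of harvested dirty pages */
  extern void mark_page_dirty_in_bitmap(uint32_t slot, uint64_t offset);

  static struct kvm_dirty_gfn *dirty_gfns;
  static uint32_t fetch_index;

  /* The ring lives at a fixed page offset in the vcpu fd's mmap space */
  static void map_dirty_ring(int vcpu_fd, size_t ring_size)
  {
          dirty_gfns = mmap(NULL, ring_size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, vcpu_fd,
                            getpagesize() * KVM_DIRTY_LOG_PAGE_OFFSET);
  }

  /* Pull published entries, then ask KVM to recycle them */
  static void collect_and_reset(int vm_fd, size_t ring_size)
  {
          uint32_t nr_entries = ring_size / sizeof(struct kvm_dirty_gfn);

          for (;;) {
                  struct kvm_dirty_gfn *gfn =
                          &dirty_gfns[fetch_index % nr_entries];

                  /* Entry not yet published by the kernel: ring drained */
                  if (!(__atomic_load_n(&gfn->flags, __ATOMIC_ACQUIRE) &
                        KVM_DIRTY_GFN_F_DIRTY))
                          break;

                  mark_page_dirty_in_bitmap(gfn->slot, gfn->offset);

                  /* Hand the entry back for KVM_RESET_DIRTY_RINGS */
                  gfn->flags |= KVM_DIRTY_GFN_F_RESET;
                  fetch_index++;
          }

          ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
  }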

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 Documentation/virt/kvm/api.rst    | 2 +-
 arch/arm64/include/uapi/asm/kvm.h | 1 +
 arch/arm64/kvm/Kconfig            | 1 +
 arch/arm64/kvm/arm.c              | 8 ++++++++
 4 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index abd7c32126ce..19fa1ac017ed 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8022,7 +8022,7 @@ regardless of what has actually been exposed through the CPUID leaf.
 8.29 KVM_CAP_DIRTY_LOG_RING
 ---------------------------
 
-:Architectures: x86
+:Architectures: x86, arm64
 :Parameters: args[0] - size of the dirty log ring
 
 KVM is capable of tracking dirty memory using ring buffers that are
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 316917b98707..a7a857f1784d 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -43,6 +43,7 @@
 #define __KVM_HAVE_VCPU_EVENTS
 
 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
+#define KVM_DIRTY_LOG_PAGE_OFFSET 64
 
 #define KVM_REG_SIZE(id)						\
 	(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 815cc118c675..0309b2d0f2da 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -32,6 +32,7 @@ menuconfig KVM
 	select KVM_VFIO
 	select HAVE_KVM_EVENTFD
 	select HAVE_KVM_IRQFD
+	select HAVE_KVM_DIRTY_RING
 	select HAVE_KVM_MSI
 	select HAVE_KVM_IRQCHIP
 	select HAVE_KVM_IRQ_ROUTING
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2ff0ef62abad..76816f8e082b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -747,6 +747,14 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 
 		if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
 			return kvm_vcpu_suspend(vcpu);
+
+		if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
+		    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
+			kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
+			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
+			trace_kvm_dirty_ring_exit(vcpu);
+			return 0;
+		}
 	}
 
 	return 1;
-- 
2.23.0


* [PATCH v3 4/6] KVM: selftests: Use host page size to map ring buffer in dirty_log_test
  2022-09-22  0:32 [PATCH v3 0/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
                   ` (2 preceding siblings ...)
  2022-09-22  0:32 ` [PATCH v3 3/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
@ 2022-09-22  0:32 ` Gavin Shan
  2022-09-22  0:32 ` [PATCH v3 5/6] KVM: selftests: Clear dirty ring states between two modes " Gavin Shan
  2022-09-22  0:32 ` [PATCH v3 6/6] KVM: selftests: Automate choosing dirty ring size " Gavin Shan
  5 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-09-22  0:32 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, maz, bgardon, andrew.jones, will, shan.gavin,
	catalin.marinas, dmatlack, pbonzini, zhenyzha, shuah

In vcpu_map_dirty_ring(), the guest's page size is used to compute
the offset in the virtual area. That works fine when host and guest
use the same page size. However, it fails on arm64 when the page
sizes on host and guest differ, as the error messages below indicate.

  # ./dirty_log_test -M dirty-ring -m 7
  Setting log mode to: 'dirty-ring'
  Test iterations: 32, interval: 10 (ms)
  Testing guest mode: PA-bits:40,  VA-bits:48, 64K pages
  guest physical test memory offset: 0xffbffc0000
  vcpu stops because vcpu is kicked out...
  Notifying vcpu to continue
  vcpu continues now.
  ==== Test Assertion Failure ====
  lib/kvm_util.c:1477: addr == MAP_FAILED
  pid=9000 tid=9000 errno=0 - Success
  1  0x0000000000405f5b: vcpu_map_dirty_ring at kvm_util.c:1477
  2  0x0000000000402ebb: dirty_ring_collect_dirty_pages at dirty_log_test.c:349
  3  0x00000000004029b3: log_mode_collect_dirty_pages at dirty_log_test.c:478
  4  (inlined by) run_test at dirty_log_test.c:778
  5  (inlined by) run_test at dirty_log_test.c:691
  6  0x0000000000403a57: for_each_guest_mode at guest_modes.c:105
  7  0x0000000000401ccf: main at dirty_log_test.c:921
  8  0x0000ffffb06ec79b: ?? ??:0
  9  0x0000ffffb06ec86b: ?? ??:0
  10 0x0000000000401def: _start at ??:?
  Dirty ring mapped private

Fix the issue by using the host's page size to map the ring buffer.
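
To make the failure concrete, assuming 4KB pages on the host and the
64KB guest mode from the log above (KVM validates the mmap offset in
units of host pages):

  old: offset = guest_page_size * KVM_DIRTY_LOG_PAGE_OFFSET
              = 65536 * 64 = 4MiB  -> host page offset 1024, rejected
  new: offset = host_page_size * KVM_DIRTY_LOG_PAGE_OFFSET
              = 4096 * 64 = 256KiB -> host page offset 64, as expected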

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9889fe0d8919..4e823cbe6b48 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1464,7 +1464,7 @@ struct kvm_reg_list *vcpu_get_reg_list(struct kvm_vcpu *vcpu)
 
 void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu)
 {
-	uint32_t page_size = vcpu->vm->page_size;
+	uint32_t page_size = getpagesize();
 	uint32_t size = vcpu->vm->dirty_ring_size;
 
 	TEST_ASSERT(size > 0, "Should enable dirty ring first");
-- 
2.23.0


* [PATCH v3 5/6] KVM: selftests: Clear dirty ring states between two modes in dirty_log_test
  2022-09-22  0:32 [PATCH v3 0/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
                   ` (3 preceding siblings ...)
  2022-09-22  0:32 ` [PATCH v3 4/6] KVM: selftests: Use host page size to map ring buffer in dirty_log_test Gavin Shan
@ 2022-09-22  0:32 ` Gavin Shan
  2022-09-22  0:32 ` [PATCH v3 6/6] KVM: selftests: Automate choosing dirty ring size " Gavin Shan
  5 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-09-22  0:32 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, maz, bgardon, andrew.jones, will, shan.gavin,
	catalin.marinas, dmatlack, pbonzini, zhenyzha, shuah

There are two pieces of state which need to be cleared before the
next mode is executed. Otherwise, we hit the failure shown in the
messages below.

- The variable 'dirty_ring_vcpu_ring_full' is shared by the main and
  vcpu threads. It indicates whether the vcpu exited due to a full
  ring buffer. Its value can be carried over from the previous mode
  (VM_MODE_P40V48_4K) to the current one (VM_MODE_P40V48_64K) when
  VM_MODE_P40V48_16K isn't supported.

- The current ring buffer index needs to be reset before the next mode
  (VM_MODE_P40V48_64K) is executed. Otherwise, a stale value is
  carried over from the previous mode (VM_MODE_P40V48_4K).

  # ./dirty_log_test -M dirty-ring
  Setting log mode to: 'dirty-ring'
  Test iterations: 32, interval: 10 (ms)
  Testing guest mode: PA-bits:40,  VA-bits:48,  4K pages
  guest physical test memory offset: 0xffbfffc000
    :
  Dirtied 995328 pages
  Total bits checked: dirty (1012434), clear (7114123), track_next (966700)
  Testing guest mode: PA-bits:40,  VA-bits:48, 64K pages
  guest physical test memory offset: 0xffbffc0000
  vcpu stops because vcpu is kicked out...
  vcpu continues now.
  Notifying vcpu to continue
  Iteration 1 collected 0 pages
  vcpu stops because dirty ring is full...
  vcpu continues now.
  vcpu stops because dirty ring is full...
  vcpu continues now.
  vcpu stops because dirty ring is full...
  ==== Test Assertion Failure ====
  dirty_log_test.c:369: cleared == count
  pid=10541 tid=10541 errno=22 - Invalid argument
     1	0x0000000000403087: dirty_ring_collect_dirty_pages at dirty_log_test.c:369
     2	0x0000000000402a0b: log_mode_collect_dirty_pages at dirty_log_test.c:492
     3	 (inlined by) run_test at dirty_log_test.c:795
     4	 (inlined by) run_test at dirty_log_test.c:705
     5	0x0000000000403a37: for_each_guest_mode at guest_modes.c:100
     6	0x0000000000401ccf: main at dirty_log_test.c:938
     7	0x0000ffff9ecd279b: ?? ??:0
     8	0x0000ffff9ecd286b: ?? ??:0
     9	0x0000000000401def: _start at ??:?
  Reset dirty pages (0) mismatch with collected (35566)

Fix the issues by clearing 'dirty_ring_vcpu_ring_full' and the ring
buffer index before the next mode is executed.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 27 ++++++++++++--------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 9c883c94d478..7d91df7e036f 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -225,13 +225,15 @@ static void clear_log_create_vm_done(struct kvm_vm *vm)
 }
 
 static void dirty_log_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
-					  void *bitmap, uint32_t num_pages)
+					  void *bitmap, uint32_t num_pages,
+					  uint32_t *unused)
 {
 	kvm_vm_get_dirty_log(vcpu->vm, slot, bitmap);
 }
 
 static void clear_log_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
-					  void *bitmap, uint32_t num_pages)
+					  void *bitmap, uint32_t num_pages,
+					  uint32_t *unused)
 {
 	kvm_vm_get_dirty_log(vcpu->vm, slot, bitmap);
 	kvm_vm_clear_dirty_log(vcpu->vm, slot, bitmap, 0, num_pages);
@@ -327,10 +329,9 @@ static void dirty_ring_continue_vcpu(void)
 }
 
 static void dirty_ring_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
-					   void *bitmap, uint32_t num_pages)
+					   void *bitmap, uint32_t num_pages,
+					   uint32_t *ring_buf_idx)
 {
-	/* We only have one vcpu */
-	static uint32_t fetch_index = 0;
 	uint32_t count = 0, cleared;
 	bool continued_vcpu = false;
 
@@ -347,7 +348,8 @@ static void dirty_ring_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
 
 	/* Only have one vcpu */
 	count = dirty_ring_collect_one(vcpu_map_dirty_ring(vcpu),
-				       slot, bitmap, num_pages, &fetch_index);
+				       slot, bitmap, num_pages,
+				       ring_buf_idx);
 
 	cleared = kvm_vm_reset_dirty_ring(vcpu->vm);
 
@@ -404,7 +406,8 @@ struct log_mode {
 	void (*create_vm_done)(struct kvm_vm *vm);
 	/* Hook to collect the dirty pages into the bitmap provided */
 	void (*collect_dirty_pages) (struct kvm_vcpu *vcpu, int slot,
-				     void *bitmap, uint32_t num_pages);
+				     void *bitmap, uint32_t num_pages,
+				     uint32_t *ring_buf_idx);
 	/* Hook to call when after each vcpu run */
 	void (*after_vcpu_run)(struct kvm_vcpu *vcpu, int ret, int err);
 	void (*before_vcpu_join) (void);
@@ -469,13 +472,14 @@ static void log_mode_create_vm_done(struct kvm_vm *vm)
 }
 
 static void log_mode_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
-					 void *bitmap, uint32_t num_pages)
+					 void *bitmap, uint32_t num_pages,
+					 uint32_t *ring_buf_idx)
 {
 	struct log_mode *mode = &log_modes[host_log_mode];
 
 	TEST_ASSERT(mode->collect_dirty_pages != NULL,
 		    "collect_dirty_pages() is required for any log mode!");
-	mode->collect_dirty_pages(vcpu, slot, bitmap, num_pages);
+	mode->collect_dirty_pages(vcpu, slot, bitmap, num_pages, ring_buf_idx);
 }
 
 static void log_mode_after_vcpu_run(struct kvm_vcpu *vcpu, int ret, int err)
@@ -694,6 +698,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 	unsigned long *bmap;
+	uint32_t ring_buf_idx = 0;
 
 	if (!log_mode_supported()) {
 		print_skip("Log mode '%s' not supported",
@@ -769,6 +774,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	host_dirty_count = 0;
 	host_clear_count = 0;
 	host_track_next_count = 0;
+	WRITE_ONCE(dirty_ring_vcpu_ring_full, false);
 
 	pthread_create(&vcpu_thread, NULL, vcpu_worker, vcpu);
 
@@ -776,7 +782,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		/* Give the vcpu thread some time to dirty some pages */
 		usleep(p->interval * 1000);
 		log_mode_collect_dirty_pages(vcpu, TEST_MEM_SLOT_INDEX,
-					     bmap, host_num_pages);
+					     bmap, host_num_pages,
+					     &ring_buf_idx);
 
 		/*
 		 * See vcpu_sync_stop_requested definition for details on why
-- 
2.23.0


* [PATCH v3 6/6] KVM: selftests: Automate choosing dirty ring size in dirty_log_test
  2022-09-22  0:32 [PATCH v3 0/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
                   ` (4 preceding siblings ...)
  2022-09-22  0:32 ` [PATCH v3 5/6] KVM: selftests: Clear dirty ring states between two modes " Gavin Shan
@ 2022-09-22  0:32 ` Gavin Shan
  5 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-09-22  0:32 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, maz, bgardon, andrew.jones, will, shan.gavin,
	catalin.marinas, dmatlack, pbonzini, zhenyzha, shuah

In the dirty ring case, we rely on the vcpu exiting due to a full
dirty ring. On an ARM64 system, there are 4096 host pages when the
host page size is 64KB. In this case, the vcpu never exits due to a
full dirty ring. A similar case is a 4KB page size on the host with a
64KB page size in the guest: the vcpu keeps dirtying the same set of
host pages, but the dirty page information isn't collected in the
main thread. This leads to an infinite loop, as the following log
shows.

  # ./dirty_log_test -M dirty-ring -c 65536 -m 5
  Setting log mode to: 'dirty-ring'
  Test iterations: 32, interval: 10 (ms)
  Testing guest mode: PA-bits:40,  VA-bits:48,  4K pages
  guest physical test memory offset: 0xffbffe0000
  vcpu stops because vcpu is kicked out...
  Notifying vcpu to continue
  vcpu continues now.
  Iteration 1 collected 576 pages
  <No more output afterwards>

Fix the issue by automatically choosing the best dirty ring size to
ensure the vcpu exits due to a full dirty ring. The option '-c'
becomes a hint for the dirty ring count, instead of its exact value.
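
As a worked example of the sizing logic in the diff below, assume 64KB
guest pages on a 4KB-page host, the 1GB test region, the default
TEST_DIRTY_RING_COUNT of 65536, and (an assumption) that
vm_adjust_num_guest_pages() leaves the count unchanged:

  pages = (1ul << (30 - 16)) + 3            = 16387
  limit = 1 << (31 - __builtin_clz(16387))  = 16384  /* round down to 2^n */
  test_dirty_ring_count = min(16384, 65536) = 16384

The ring is now no larger than the number of dirtyable pages, so the
full-ring vcpu exit can actually be reached.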

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 26 +++++++++++++++++---
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 7d91df7e036f..47ac2c719ade 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -23,6 +23,9 @@
 #include "guest_modes.h"
 #include "processor.h"
 
+#define DIRTY_MEM_BITS 30 /* 1G */
+#define PAGE_SHIFT_4K  12
+
 /* The memory slot index to track dirty pages */
 #define TEST_MEM_SLOT_INDEX		1
 
@@ -271,6 +274,24 @@ static bool dirty_ring_supported(void)
 
 static void dirty_ring_create_vm_done(struct kvm_vm *vm)
 {
+	uint64_t pages;
+	uint32_t limit;
+
+	/*
+	 * We rely on vcpu exit due to full dirty ring state. Adjust
+	 * the ring buffer size to ensure we're able to reach the
+	 * full dirty ring state.
+	 */
+	pages = (1ul << (DIRTY_MEM_BITS - vm->page_shift)) + 3;
+	pages = vm_adjust_num_guest_pages(vm->mode, pages);
+	if (vm->page_size < getpagesize())
+		pages = vm_num_host_pages(vm->mode, pages);
+
+	limit = 1 << (31 - __builtin_clz(pages));
+	test_dirty_ring_count = 1 << (31 - __builtin_clz(test_dirty_ring_count));
+	test_dirty_ring_count = min(limit, test_dirty_ring_count);
+	pr_info("dirty ring count: 0x%x\n", test_dirty_ring_count);
+
 	/*
 	 * Switch to dirty ring mode after VM creation but before any
 	 * of the vcpu creation.
@@ -683,9 +704,6 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, struct kvm_vcpu **vcpu,
 	return vm;
 }
 
-#define DIRTY_MEM_BITS 30 /* 1G */
-#define PAGE_SHIFT_4K  12
-
 struct test_params {
 	unsigned long iterations;
 	unsigned long interval;
@@ -828,7 +846,7 @@ static void help(char *name)
 	printf("usage: %s [-h] [-i iterations] [-I interval] "
 	       "[-p offset] [-m mode]\n", name);
 	puts("");
-	printf(" -c: specify dirty ring size, in number of entries\n");
+	printf(" -c: hint to dirty ring size, in number of entries\n");
 	printf("     (only useful for dirty-ring test; default: %"PRIu32")\n",
 	       TEST_DIRTY_RING_COUNT);
 	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
-- 
2.23.0


* Re: [PATCH v3 2/6] KVM: x86: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h
  2022-09-22  0:32 ` [PATCH v3 2/6] KVM: x86: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h Gavin Shan
@ 2022-09-24 19:12   ` Marc Zyngier
  2022-09-25 23:09     ` Gavin Shan
  0 siblings, 1 reply; 11+ messages in thread
From: Marc Zyngier @ 2022-09-24 19:12 UTC (permalink / raw)
  To: Gavin Shan
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Thu, 22 Sep 2022 01:32:10 +0100,
Gavin Shan <gshan@redhat.com> wrote:
> 
> Not all architectures need to override the function; ARM64, for one,
> does not. Move its declaration to kvm_dirty_ring.h to avoid the
> following compile warning on ARM64 when the feature is enabled.
> 
>   arch/arm64/kvm/../../../virt/kvm/dirty_ring.c:14:12:        \
>   warning: no previous prototype for 'kvm_cpu_dirty_log_size' \
>   [-Wmissing-prototypes]                                      \
>   int __weak kvm_cpu_dirty_log_size(void)
>
> Reported-by: kernel test robot <lkp@intel.com>
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 --
>  arch/x86/kvm/mmu/mmu.c          | 2 ++
>  include/linux/kvm_dirty_ring.h  | 1 +
>  3 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 2c96c43c313a..4c0fd517282b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -2082,8 +2082,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
>  #define GET_SMSTATE(type, buf, offset)		\
>  	(*(type *)((buf) + (offset) - 0x7e00))
>  
> -int kvm_cpu_dirty_log_size(void);
> -
>  int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages);
>  
>  #define KVM_CLOCK_VALID_FLAGS						\
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index e418ef3ecfcb..b3eb6a3627ec 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1349,10 +1349,12 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>  		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
>  }
>  
> +#ifdef CONFIG_HAVE_KVM_DIRTY_RING

I think you can drop the ifdeffery, as HAVE_KVM_DIRTY_RING is
unconditionally selected by the arch Kconfig.

>  int kvm_cpu_dirty_log_size(void)
>  {
>  	return kvm_x86_ops.cpu_dirty_log_size;
>  }
> +#endif
>  
>  bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
>  				    struct kvm_memory_slot *slot, u64 gfn,
> diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> index 906f899813dc..8c6755981c9b 100644
> --- a/include/linux/kvm_dirty_ring.h
> +++ b/include/linux/kvm_dirty_ring.h
> @@ -71,6 +71,7 @@ static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
>  
>  #else /* CONFIG_HAVE_KVM_DIRTY_RING */
>  
> +int kvm_cpu_dirty_log_size(void);
>  u32 kvm_dirty_ring_get_rsvd_entries(void);
>  int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size);

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v3 3/6] KVM: arm64: Enable ring-based dirty memory tracking
  2022-09-22  0:32 ` [PATCH v3 3/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
@ 2022-09-24 20:27   ` Marc Zyngier
  2022-09-25 23:13     ` Gavin Shan
  0 siblings, 1 reply; 11+ messages in thread
From: Marc Zyngier @ 2022-09-24 20:27 UTC (permalink / raw)
  To: Gavin Shan
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

On Thu, 22 Sep 2022 01:32:11 +0100,
Gavin Shan <gshan@redhat.com> wrote:
> 
> This enables ring-based dirty memory tracking on ARM64. The feature
> is configured by CONFIG_HAVE_KVM_DIRTY_RING and detected and enabled
> through KVM_CAP_DIRTY_LOG_RING. A ring buffer is created for every
> VCPU when the feature is enabled, and each entry in the ring buffer
> is described by 'struct kvm_dirty_gfn'.
> 
> A ring buffer entry is pushed when a page becomes dirty on the host,
> and pulled by userspace after the ring buffer is mapped at physical
> page offset KVM_DIRTY_LOG_PAGE_OFFSET. A VCPU is forced to exit if
> its ring buffer becomes soft-full. The ring buffer can also be reset
> with the KVM_RESET_DIRTY_RINGS ioctl to release the pulled entries.

I think you can cut this message short. This description was useful
when the feature was initially merged, but this is only a "plumb the
damn thing" patch.

> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  Documentation/virt/kvm/api.rst    | 2 +-
>  arch/arm64/include/uapi/asm/kvm.h | 1 +
>  arch/arm64/kvm/Kconfig            | 1 +
>  arch/arm64/kvm/arm.c              | 8 ++++++++
>  4 files changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index abd7c32126ce..19fa1ac017ed 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -8022,7 +8022,7 @@ regardless of what has actually been exposed through the CPUID leaf.
>  8.29 KVM_CAP_DIRTY_LOG_RING
>  ---------------------------
>  
> -:Architectures: x86
> +:Architectures: x86, arm64
>  :Parameters: args[0] - size of the dirty log ring
>  
>  KVM is capable of tracking dirty memory using ring buffers that are
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 316917b98707..a7a857f1784d 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -43,6 +43,7 @@
>  #define __KVM_HAVE_VCPU_EVENTS
>  
>  #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
> +#define KVM_DIRTY_LOG_PAGE_OFFSET 64
>  
>  #define KVM_REG_SIZE(id)						\
>  	(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> index 815cc118c675..0309b2d0f2da 100644
> --- a/arch/arm64/kvm/Kconfig
> +++ b/arch/arm64/kvm/Kconfig
> @@ -32,6 +32,7 @@ menuconfig KVM
>  	select KVM_VFIO
>  	select HAVE_KVM_EVENTFD
>  	select HAVE_KVM_IRQFD
> +	select HAVE_KVM_DIRTY_RING
>  	select HAVE_KVM_MSI
>  	select HAVE_KVM_IRQCHIP
>  	select HAVE_KVM_IRQ_ROUTING
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 2ff0ef62abad..76816f8e082b 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -747,6 +747,14 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
>  
>  		if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
>  			return kvm_vcpu_suspend(vcpu);
> +
> +		if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
> +		    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
> +			kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
> +			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> +			trace_kvm_dirty_ring_exit(vcpu);
> +			return 0;
> +		}

This is *very* similar to the x86 code. Could we move it to common
code? Something like the diff below, most of which could be squashed
into patch #1.

Thanks,

	M.

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 76816f8e082b..93a16cdbe163 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -748,13 +748,8 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
 			return kvm_vcpu_suspend(vcpu);
 
-		if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
-		    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
-			kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
-			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
-			trace_kvm_dirty_ring_exit(vcpu);
+		if (kvm_dirty_ring_check_request(vcpu))
 			return 0;
-		}
 	}
 
 	return 1;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index eb7d0d7654bb..48f2519b1db7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10249,11 +10249,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 	if (kvm_request_pending(vcpu)) {
 		/* Forbid vmenter if vcpu dirty ring is soft-full */
-		if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
-		    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
-			kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
-			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
-			trace_kvm_dirty_ring_exit(vcpu);
+		if (kvm_dirty_ring_check_request(vcpu)) {
 			r = 0;
 			goto out;
 		}
diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
index 8c6755981c9b..6e484220adc0 100644
--- a/include/linux/kvm_dirty_ring.h
+++ b/include/linux/kvm_dirty_ring.h
@@ -64,11 +64,6 @@ static inline void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
 {
 }
 
-static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
-{
-	return true;
-}
-
 #else /* CONFIG_HAVE_KVM_DIRTY_RING */
 
 int kvm_cpu_dirty_log_size(void);
@@ -91,7 +86,7 @@ void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset);
 struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, u32 offset);
 
 void kvm_dirty_ring_free(struct kvm_dirty_ring *ring);
-bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring);
+bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu);
 
 #endif /* CONFIG_HAVE_KVM_DIRTY_RING */
 
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index 69c8c90d489d..436d7cded5bf 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -26,7 +26,7 @@ static u32 kvm_dirty_ring_used(struct kvm_dirty_ring *ring)
 	return READ_ONCE(ring->dirty_index) - READ_ONCE(ring->reset_index);
 }
 
-bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
+static bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
 {
 	return kvm_dirty_ring_used(ring) >= ring->soft_limit;
 }
@@ -182,3 +182,16 @@ void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
 	vfree(ring->dirty_gfns);
 	ring->dirty_gfns = NULL;
 }
+
+bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu)
+{
+	if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
+	    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
+		kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
+		vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
+		trace_kvm_dirty_ring_exit(vcpu);
+		return true;
+	}
+
+	return false;
+}

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v3 2/6] KVM: x86: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h
  2022-09-24 19:12   ` Marc Zyngier
@ 2022-09-25 23:09     ` Gavin Shan
  0 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-09-25 23:09 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

Hi Marc,

On 9/25/22 5:12 AM, Marc Zyngier wrote:
> On Thu, 22 Sep 2022 01:32:10 +0100,
> Gavin Shan <gshan@redhat.com> wrote:
>>
>> Not all architectures need to override the function; ARM64, for one,
>> does not. Move its declaration to kvm_dirty_ring.h to avoid the
>> following compile warning on ARM64 when the feature is enabled.
>>
>>    arch/arm64/kvm/../../../virt/kvm/dirty_ring.c:14:12:        \
>>    warning: no previous prototype for 'kvm_cpu_dirty_log_size' \
>>    [-Wmissing-prototypes]                                      \
>>    int __weak kvm_cpu_dirty_log_size(void)
>>
>> Reported-by: kernel test robot <lkp@intel.com>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>>   arch/x86/include/asm/kvm_host.h | 2 --
>>   arch/x86/kvm/mmu/mmu.c          | 2 ++
>>   include/linux/kvm_dirty_ring.h  | 1 +
>>   3 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 2c96c43c313a..4c0fd517282b 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -2082,8 +2082,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
>>   #define GET_SMSTATE(type, buf, offset)		\
>>   	(*(type *)((buf) + (offset) - 0x7e00))
>>   
>> -int kvm_cpu_dirty_log_size(void);
>> -
>>   int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages);
>>   
>>   #define KVM_CLOCK_VALID_FLAGS						\
>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>> index e418ef3ecfcb..b3eb6a3627ec 100644
>> --- a/arch/x86/kvm/mmu/mmu.c
>> +++ b/arch/x86/kvm/mmu/mmu.c
>> @@ -1349,10 +1349,12 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
>>   		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
>>   }
>>   
>> +#ifdef CONFIG_HAVE_KVM_DIRTY_RING
> 
> I think you can drop the ifdeffery, as HAVE_KVM_DIRTY_RING is
> unconditionally selected by the arch Kconfig.
> 

Yeah, I think so. Let's drop it in v4, which will be rebased on top
of your v2 series.

>>   int kvm_cpu_dirty_log_size(void)
>>   {
>>   	return kvm_x86_ops.cpu_dirty_log_size;
>>   }
>> +#endif
>>   
>>   bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
>>   				    struct kvm_memory_slot *slot, u64 gfn,
>> diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
>> index 906f899813dc..8c6755981c9b 100644
>> --- a/include/linux/kvm_dirty_ring.h
>> +++ b/include/linux/kvm_dirty_ring.h
>> @@ -71,6 +71,7 @@ static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
>>   
>>   #else /* CONFIG_HAVE_KVM_DIRTY_RING */
>>   
>> +int kvm_cpu_dirty_log_size(void);
>>   u32 kvm_dirty_ring_get_rsvd_entries(void);
>>   int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size);
> 

Thanks,
Gavin


* Re: [PATCH v3 3/6] KVM: arm64: Enable ring-based dirty memory tracking
  2022-09-24 20:27   ` Marc Zyngier
@ 2022-09-25 23:13     ` Gavin Shan
  0 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-09-25 23:13 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvm, catalin.marinas, andrew.jones, will, shan.gavin, bgardon,
	dmatlack, pbonzini, zhenyzha, shuah, kvmarm

Hi Marc,

On 9/25/22 6:27 AM, Marc Zyngier wrote:
> On Thu, 22 Sep 2022 01:32:11 +0100,
> Gavin Shan <gshan@redhat.com> wrote:
>>
>> This enables ring-based dirty memory tracking on ARM64. The feature
>> is configured by CONFIG_HAVE_KVM_DIRTY_RING and detected and enabled
>> through KVM_CAP_DIRTY_LOG_RING. A ring buffer is created for every
>> VCPU when the feature is enabled, and each entry in the ring buffer
>> is described by 'struct kvm_dirty_gfn'.
>>
>> A ring buffer entry is pushed when a page becomes dirty on the host,
>> and pulled by userspace after the ring buffer is mapped at physical
>> page offset KVM_DIRTY_LOG_PAGE_OFFSET. A VCPU is forced to exit if
>> its ring buffer becomes soft-full. The ring buffer can also be reset
>> with the KVM_RESET_DIRTY_RINGS ioctl to release the pulled entries.
> 
> I think you can cut this message short. This description was useful
> when the feature was initially merged, but this is only a "plumb the
> damn thing" patch.
> 

Agreed. I will modify the changelog in v4 accordingly, thanks.

>>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>>   Documentation/virt/kvm/api.rst    | 2 +-
>>   arch/arm64/include/uapi/asm/kvm.h | 1 +
>>   arch/arm64/kvm/Kconfig            | 1 +
>>   arch/arm64/kvm/arm.c              | 8 ++++++++
>>   4 files changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
>> index abd7c32126ce..19fa1ac017ed 100644
>> --- a/Documentation/virt/kvm/api.rst
>> +++ b/Documentation/virt/kvm/api.rst
>> @@ -8022,7 +8022,7 @@ regardless of what has actually been exposed through the CPUID leaf.
>>   8.29 KVM_CAP_DIRTY_LOG_RING
>>   ---------------------------
>>   
>> -:Architectures: x86
>> +:Architectures: x86, arm64
>>   :Parameters: args[0] - size of the dirty log ring
>>   
>>   KVM is capable of tracking dirty memory using ring buffers that are
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
>> index 316917b98707..a7a857f1784d 100644
>> --- a/arch/arm64/include/uapi/asm/kvm.h
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -43,6 +43,7 @@
>>   #define __KVM_HAVE_VCPU_EVENTS
>>   
>>   #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
>> +#define KVM_DIRTY_LOG_PAGE_OFFSET 64
>>   
>>   #define KVM_REG_SIZE(id)						\
>>   	(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
>> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
>> index 815cc118c675..0309b2d0f2da 100644
>> --- a/arch/arm64/kvm/Kconfig
>> +++ b/arch/arm64/kvm/Kconfig
>> @@ -32,6 +32,7 @@ menuconfig KVM
>>   	select KVM_VFIO
>>   	select HAVE_KVM_EVENTFD
>>   	select HAVE_KVM_IRQFD
>> +	select HAVE_KVM_DIRTY_RING
>>   	select HAVE_KVM_MSI
>>   	select HAVE_KVM_IRQCHIP
>>   	select HAVE_KVM_IRQ_ROUTING
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index 2ff0ef62abad..76816f8e082b 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -747,6 +747,14 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
>>   
>>   		if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
>>   			return kvm_vcpu_suspend(vcpu);
>> +
>> +		if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
>> +		    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
>> +			kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
>> +			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
>> +			trace_kvm_dirty_ring_exit(vcpu);
>> +			return 0;
>> +		}
> 
> This is *very* similar to the x86 code. Could we move it to common
> code? Something like the diff below, most of which could be squashed
> into patch #1.
> 

Sure, the additional changes make sense to me. I will fold them to
PATCH[v4 1/6].

Thanks,
Gavin

> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 76816f8e082b..93a16cdbe163 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -748,13 +748,8 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
>   		if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
>   			return kvm_vcpu_suspend(vcpu);
>   
> -		if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
> -		    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
> -			kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
> -			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> -			trace_kvm_dirty_ring_exit(vcpu);
> +		if (kvm_dirty_ring_check_request(vcpu))
>   			return 0;
> -		}
>   	}
>   
>   	return 1;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index eb7d0d7654bb..48f2519b1db7 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10249,11 +10249,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>   
>   	if (kvm_request_pending(vcpu)) {
>   		/* Forbid vmenter if vcpu dirty ring is soft-full */
> -		if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
> -		    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
> -			kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
> -			vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> -			trace_kvm_dirty_ring_exit(vcpu);
> +		if (kvm_dirty_ring_check_request(vcpu)) {
>   			r = 0;
>   			goto out;
>   		}
> diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> index 8c6755981c9b..6e484220adc0 100644
> --- a/include/linux/kvm_dirty_ring.h
> +++ b/include/linux/kvm_dirty_ring.h
> @@ -64,11 +64,6 @@ static inline void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
>   {
>   }
>   
> -static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
> -{
> -	return true;
> -}
> -
>   #else /* CONFIG_HAVE_KVM_DIRTY_RING */
>   
>   int kvm_cpu_dirty_log_size(void);
> @@ -91,7 +86,7 @@ void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset);
>   struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, u32 offset);
>   
>   void kvm_dirty_ring_free(struct kvm_dirty_ring *ring);
> -bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring);
> +bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu);
>   
>   #endif /* CONFIG_HAVE_KVM_DIRTY_RING */
>   
> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> index 69c8c90d489d..436d7cded5bf 100644
> --- a/virt/kvm/dirty_ring.c
> +++ b/virt/kvm/dirty_ring.c
> @@ -26,7 +26,7 @@ static u32 kvm_dirty_ring_used(struct kvm_dirty_ring *ring)
>   	return READ_ONCE(ring->dirty_index) - READ_ONCE(ring->reset_index);
>   }
>   
> -bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
> +static bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
>   {
>   	return kvm_dirty_ring_used(ring) >= ring->soft_limit;
>   }
> @@ -182,3 +182,16 @@ void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
>   	vfree(ring->dirty_gfns);
>   	ring->dirty_gfns = NULL;
>   }
> +
> +bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu)
> +{
> +	if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
> +	    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
> +		kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
> +		vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> +		trace_kvm_dirty_ring_exit(vcpu);
> +		return true;
> +	}
> +
> +	return false;
> +}
> 


Thread overview: 11+ messages
2022-09-22  0:32 [PATCH v3 0/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
2022-09-22  0:32 ` [PATCH v3 1/6] KVM: x86: Introduce KVM_REQ_RING_SOFT_FULL Gavin Shan
2022-09-22  0:32 ` [PATCH v3 2/6] KVM: x86: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h Gavin Shan
2022-09-24 19:12   ` Marc Zyngier
2022-09-25 23:09     ` Gavin Shan
2022-09-22  0:32 ` [PATCH v3 3/6] KVM: arm64: Enable ring-based dirty memory tracking Gavin Shan
2022-09-24 20:27   ` Marc Zyngier
2022-09-25 23:13     ` Gavin Shan
2022-09-22  0:32 ` [PATCH v3 4/6] KVM: selftests: Use host page size to map ring buffer in dirty_log_test Gavin Shan
2022-09-22  0:32 ` [PATCH v3 5/6] KVM: selftests: Clear dirty ring states between two modes " Gavin Shan
2022-09-22  0:32 ` [PATCH v3 6/6] KVM: selftests: Automate choosing dirty ring size " Gavin Shan
