kvm.vger.kernel.org archive mirror
* [PATCH 0/5] Add a dirty logging performance test
@ 2020-10-27 23:37 Ben Gardon
  2020-10-27 23:37 ` [PATCH 1/5] KVM: selftests: Factor code out of demand_paging_test Ben Gardon
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Ben Gardon @ 2020-10-27 23:37 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Peter Xu, Andrew Jones, Peter Shier,
	Sean Christopherson, Thomas Huth, Peter Feiner, Ben Gardon

Currently KVM lacks a simple, userspace-agnostic performance benchmark for
dirty logging. Such a benchmark will be beneficial for ensuring that dirty
logging performance does not regress, and for providing a common baseline for
validating performance improvements. The dirty log perf test introduced in
this series builds on aspects of the existing demand paging perf test and
provides time-based performance metrics for enabling and disabling dirty
logging, getting the dirty log, and dirtying memory.

While the test currently only has a build target for x86, I expect it will
work on, or be easily modified to support, other architectures.

Ben Gardon (5):
  KVM: selftests: Factor code out of demand_paging_test
  KVM: selftests: Remove address rounding in guest code
  KVM: selftests: Simplify demand_paging_test with timespec_diff_now
  KVM: selftests: Add wrfract to common guest code
  KVM: selftests: Introduce the dirty log perf test

 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/demand_paging_test.c        | 230 ++---------
 .../selftests/kvm/dirty_log_perf_test.c       | 382 ++++++++++++++++++
 .../selftests/kvm/include/perf_test_util.h    | 192 +++++++++
 .../testing/selftests/kvm/include/test_util.h |   2 +
 tools/testing/selftests/kvm/lib/test_util.c   |  22 +-
 7 files changed, 635 insertions(+), 195 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/dirty_log_perf_test.c
 create mode 100644 tools/testing/selftests/kvm/include/perf_test_util.h

-- 
2.29.0.rc2.309.g374f81d7ae-goog


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 1/5] KVM: selftests: Factor code out of demand_paging_test
  2020-10-27 23:37 [PATCH 0/5] Add a dirty logging performance test Ben Gardon
@ 2020-10-27 23:37 ` Ben Gardon
  2020-11-02 21:23   ` Peter Xu
  2020-10-27 23:37 ` [PATCH 2/5] KVM: selftests: Remove address rounding in guest code Ben Gardon
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Ben Gardon @ 2020-10-27 23:37 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Peter Xu, Andrew Jones, Peter Shier,
	Sean Christopherson, Thomas Huth, Peter Feiner, Ben Gardon

Much of the code in demand_paging_test can be reused by other, similar
multi-vCPU, memory-touching performance tests. Factor that common code
out for reuse.

No functional change expected.

This series was tested by running the following invocations on an Intel
Skylake machine:
dirty_log_perf_test -b 20m -i 100 -v 64
dirty_log_perf_test -b 20g -i 5 -v 4
dirty_log_perf_test -b 4g -i 5 -v 32
demand_paging_test -b 20m -v 64
demand_paging_test -b 20g -v 4
demand_paging_test -b 4g -v 32
All behaved as expected.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 .../selftests/kvm/demand_paging_test.c        | 204 ++----------------
 .../selftests/kvm/include/perf_test_util.h    | 187 ++++++++++++++++
 2 files changed, 210 insertions(+), 181 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/perf_test_util.h

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 360cd3ea4cd67..4251e98ceb69f 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -21,18 +21,12 @@
 #include <linux/bitops.h>
 #include <linux/userfaultfd.h>
 
-#include "test_util.h"
-#include "kvm_util.h"
+#include "perf_test_util.h"
 #include "processor.h"
+#include "test_util.h"
 
 #ifdef __NR_userfaultfd
 
-/* The memory slot index demand page */
-#define TEST_MEM_SLOT_INDEX		1
-
-/* Default guest test virtual memory offset */
-#define DEFAULT_GUEST_TEST_MEM		0xc0000000
-
 #define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
 
 #ifdef PRINT_PER_PAGE_UPDATES
@@ -47,75 +41,14 @@
 #define PER_VCPU_DEBUG(...) _no_printf(__VA_ARGS__)
 #endif
 
-#define MAX_VCPUS 512
-
-/*
- * Guest/Host shared variables. Ensure addr_gva2hva() and/or
- * sync_global_to/from_guest() are used when accessing from
- * the host. READ/WRITE_ONCE() should also be used with anything
- * that may change.
- */
-static uint64_t host_page_size;
-static uint64_t guest_page_size;
-
 static char *guest_data_prototype;
 
-/*
- * Guest physical memory offset of the testing memory slot.
- * This will be set to the topmost valid physical address minus
- * the test memory size.
- */
-static uint64_t guest_test_phys_mem;
-
-/*
- * Guest virtual memory offset of the testing memory slot.
- * Must not conflict with identity mapped test code.
- */
-static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
-
-struct vcpu_args {
-	uint64_t gva;
-	uint64_t pages;
-
-	/* Only used by the host userspace part of the vCPU thread */
-	int vcpu_id;
-	struct kvm_vm *vm;
-};
-
-static struct vcpu_args vcpu_args[MAX_VCPUS];
-
-/*
- * Continuously write to the first 8 bytes of each page in the demand paging
- * memory region.
- */
-static void guest_code(uint32_t vcpu_id)
-{
-	uint64_t gva;
-	uint64_t pages;
-	int i;
-
-	/* Make sure vCPU args data structure is not corrupt. */
-	GUEST_ASSERT(vcpu_args[vcpu_id].vcpu_id == vcpu_id);
-
-	gva = vcpu_args[vcpu_id].gva;
-	pages = vcpu_args[vcpu_id].pages;
-
-	for (i = 0; i < pages; i++) {
-		uint64_t addr = gva + (i * guest_page_size);
-
-		addr &= ~(host_page_size - 1);
-		*(uint64_t *)addr = 0x0123456789ABCDEF;
-	}
-
-	GUEST_SYNC(1);
-}
-
 static void *vcpu_worker(void *data)
 {
 	int ret;
-	struct vcpu_args *args = (struct vcpu_args *)data;
-	struct kvm_vm *vm = args->vm;
-	int vcpu_id = args->vcpu_id;
+	struct vcpu_args *vcpu_args = (struct vcpu_args *)data;
+	int vcpu_id = vcpu_args->vcpu_id;
+	struct kvm_vm *vm = perf_test_args.vm;
 	struct kvm_run *run;
 	struct timespec start, end, ts_diff;
 
@@ -141,39 +74,6 @@ static void *vcpu_worker(void *data)
 	return NULL;
 }
 
-#define PAGE_SHIFT_4K  12
-#define PTES_PER_4K_PT 512
-
-static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
-				uint64_t vcpu_memory_bytes)
-{
-	struct kvm_vm *vm;
-	uint64_t pages = DEFAULT_GUEST_PHY_PAGES;
-
-	/* Account for a few pages per-vCPU for stacks */
-	pages += DEFAULT_STACK_PGS * vcpus;
-
-	/*
-	 * Reserve twice the ammount of memory needed to map the test region and
-	 * the page table / stacks region, at 4k, for page tables. Do the
-	 * calculation with 4K page size: the smallest of all archs. (e.g., 64K
-	 * page size guest will need even less memory for page tables).
-	 */
-	pages += (2 * pages) / PTES_PER_4K_PT;
-	pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
-		 PTES_PER_4K_PT;
-	pages = vm_adjust_num_guest_pages(mode, pages);
-
-	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
-
-	vm = _vm_create(mode, pages, O_RDWR);
-	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
-#ifdef __x86_64__
-	vm_create_irqchip(vm);
-#endif
-	return vm;
-}
-
 static int handle_uffd_page_request(int uffd, uint64_t addr)
 {
 	pid_t tid;
@@ -186,7 +86,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 
 	copy.src = (uint64_t)guest_data_prototype;
 	copy.dst = addr;
-	copy.len = host_page_size;
+	copy.len = perf_test_args.host_page_size;
 	copy.mode = 0;
 
 	clock_gettime(CLOCK_MONOTONIC, &start);
@@ -203,7 +103,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 	PER_PAGE_DEBUG("UFFDIO_COPY %d \t%ld ns\n", tid,
 		       timespec_to_ns(timespec_sub(end, start)));
 	PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n",
-		       host_page_size, addr, tid);
+		       perf_test_args.host_page_size, addr, tid);
 
 	return 0;
 }
@@ -360,64 +260,21 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	struct timespec start, end, ts_diff;
 	int *pipefds = NULL;
 	struct kvm_vm *vm;
-	uint64_t guest_num_pages;
 	int vcpu_id;
 	int r;
 
 	vm = create_vm(mode, vcpus, vcpu_memory_bytes);
 
-	guest_page_size = vm_get_page_size(vm);
-
-	TEST_ASSERT(vcpu_memory_bytes % guest_page_size == 0,
-		    "Guest memory size is not guest page size aligned.");
-
-	guest_num_pages = (vcpus * vcpu_memory_bytes) / guest_page_size;
-	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
-
-	/*
-	 * If there should be more memory in the guest test region than there
-	 * can be pages in the guest, it will definitely cause problems.
-	 */
-	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
-		    "Requested more guest memory than address space allows.\n"
-		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
-		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
-		    vcpu_memory_bytes);
-
-	host_page_size = getpagesize();
-	TEST_ASSERT(vcpu_memory_bytes % host_page_size == 0,
-		    "Guest memory size is not host page size aligned.");
-
-	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
-			      guest_page_size;
-	guest_test_phys_mem &= ~(host_page_size - 1);
-
-#ifdef __s390x__
-	/* Align to 1M (segment size) */
-	guest_test_phys_mem &= ~((1 << 20) - 1);
-#endif
-
-	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
-
-	/* Add an extra memory slot for testing demand paging */
-	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
-				    guest_test_phys_mem,
-				    TEST_MEM_SLOT_INDEX,
-				    guest_num_pages, 0);
-
-	/* Do mapping for the demand paging memory slot */
-	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages, 0);
-
-	ucall_init(vm, NULL);
-
-	guest_data_prototype = malloc(host_page_size);
+	guest_data_prototype = malloc(perf_test_args.host_page_size);
 	TEST_ASSERT(guest_data_prototype,
 		    "Failed to allocate buffer for guest data pattern");
-	memset(guest_data_prototype, 0xAB, host_page_size);
+	memset(guest_data_prototype, 0xAB, perf_test_args.host_page_size);
 
 	vcpu_threads = malloc(vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
+	add_vcpus(vm, vcpus, vcpu_memory_bytes);
+
 	if (use_uffd) {
 		uffd_handler_threads =
 			malloc(vcpus * sizeof(*uffd_handler_threads));
@@ -428,22 +285,18 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 		pipefds = malloc(sizeof(int) * vcpus * 2);
 		TEST_ASSERT(pipefds, "Unable to allocate memory for pipefd");
-	}
 
-	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
-		vm_paddr_t vcpu_gpa;
-		void *vcpu_hva;
-
-		vm_vcpu_add_default(vm, vcpu_id, guest_code);
+		for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+			vm_paddr_t vcpu_gpa;
+			void *vcpu_hva;
 
-		vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_memory_bytes);
-		PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
-			       vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_memory_bytes);
+			vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_memory_bytes);
+			PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
+				       vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_memory_bytes);
 
-		/* Cache the HVA pointer of the region */
-		vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
+			/* Cache the HVA pointer of the region */
+			vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
 
-		if (use_uffd) {
 			/*
 			 * Set up user fault fd to handle demand paging
 			 * requests.
@@ -460,22 +313,10 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 			if (r < 0)
 				exit(-r);
 		}
-
-#ifdef __x86_64__
-		vcpu_set_cpuid(vm, vcpu_id, kvm_get_supported_cpuid());
-#endif
-
-		vcpu_args[vcpu_id].vm = vm;
-		vcpu_args[vcpu_id].vcpu_id = vcpu_id;
-		vcpu_args[vcpu_id].gva = guest_test_virt_mem +
-					 (vcpu_id * vcpu_memory_bytes);
-		vcpu_args[vcpu_id].pages = vcpu_memory_bytes / guest_page_size;
 	}
 
 	/* Export the shared variables to the guest */
-	sync_global_to_guest(vm, host_page_size);
-	sync_global_to_guest(vm, guest_page_size);
-	sync_global_to_guest(vm, vcpu_args);
+	sync_global_to_guest(vm, perf_test_args);
 
 	pr_info("Finished creating vCPUs and starting uffd threads\n");
 
@@ -483,7 +324,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
 		pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
-			       &vcpu_args[vcpu_id]);
+			       &perf_test_args.vcpu_args[vcpu_id]);
 	}
 
 	pr_info("Started all vCPUs\n");
@@ -514,7 +355,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	pr_info("Total guest execution time: %ld.%.9lds\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 	pr_info("Overall demand paging rate: %f pgs/sec\n",
-		guest_num_pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0));
+		perf_test_args.vcpu_args[0].pages * vcpus /
+		((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0));
 
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
new file mode 100644
index 0000000000000..f71f0858a1f29
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * tools/testing/selftests/kvm/include/perf_test_util.h
+ *
+ * Copyright (C) 2020, Google LLC.
+ */
+
+#ifndef SELFTEST_KVM_PERF_TEST_UTIL_H
+#define SELFTEST_KVM_PERF_TEST_UTIL_H
+
+#include "kvm_util.h"
+#include "processor.h"
+
+#define MAX_VCPUS 512
+
+#define PAGE_SHIFT_4K  12
+#define PTES_PER_4K_PT 512
+
+#define TEST_MEM_SLOT_INDEX		1
+
+/* Default guest test virtual memory offset */
+#define DEFAULT_GUEST_TEST_MEM		0xc0000000
+
+/*
+ * Guest physical memory offset of the testing memory slot.
+ * This will be set to the topmost valid physical address minus
+ * the test memory size.
+ */
+static uint64_t guest_test_phys_mem;
+
+/*
+ * Guest virtual memory offset of the testing memory slot.
+ * Must not conflict with identity mapped test code.
+ */
+static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+
+struct vcpu_args {
+	uint64_t gva;
+	uint64_t pages;
+
+	/* Only used by the host userspace part of the vCPU thread */
+	int vcpu_id;
+};
+
+struct perf_test_args {
+	struct kvm_vm *vm;
+	uint64_t host_page_size;
+	uint64_t guest_page_size;
+
+	struct vcpu_args vcpu_args[MAX_VCPUS];
+};
+
+static struct perf_test_args perf_test_args;
+
+/*
+ * Continuously write to the first 8 bytes of each page in the
+ * specified region.
+ */
+static void guest_code(uint32_t vcpu_id)
+{
+	struct vcpu_args *vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
+	uint64_t gva;
+	uint64_t pages;
+	int i;
+
+	/* Make sure vCPU args data structure is not corrupt. */
+	GUEST_ASSERT(vcpu_args->vcpu_id == vcpu_id);
+
+	gva = vcpu_args->gva;
+	pages = vcpu_args->pages;
+
+	for (i = 0; i < pages; i++) {
+		uint64_t addr = gva + (i * perf_test_args.guest_page_size);
+
+		addr &= ~(perf_test_args.host_page_size - 1);
+		*(uint64_t *)addr = 0x0123456789ABCDEF;
+	}
+
+	GUEST_SYNC(1);
+}
+
+static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
+				uint64_t vcpu_memory_bytes)
+{
+	struct kvm_vm *vm;
+	uint64_t pages = DEFAULT_GUEST_PHY_PAGES;
+	uint64_t guest_num_pages;
+
+	/* Account for a few pages per-vCPU for stacks */
+	pages += DEFAULT_STACK_PGS * vcpus;
+
+	/*
+	 * Reserve twice the amount of memory needed to map the test region and
+	 * the page table / stacks region, at 4k, for page tables. Do the
+	 * calculation with 4K page size: the smallest of all archs. (e.g., 64K
+	 * page size guest will need even less memory for page tables).
+	 */
+	pages += (2 * pages) / PTES_PER_4K_PT;
+	pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
+		 PTES_PER_4K_PT;
+	pages = vm_adjust_num_guest_pages(mode, pages);
+
+	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
+
+	vm = _vm_create(mode, pages, O_RDWR);
+	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
+#ifdef __x86_64__
+	vm_create_irqchip(vm);
+#endif
+
+	perf_test_args.vm = vm;
+	perf_test_args.guest_page_size = vm_get_page_size(vm);
+	perf_test_args.host_page_size = getpagesize();
+
+	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0,
+		    "Guest memory size is not guest page size aligned.");
+
+	guest_num_pages = (vcpus * vcpu_memory_bytes) /
+			  perf_test_args.guest_page_size;
+	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
+
+	/*
+	 * If there should be more memory in the guest test region than there
+	 * can be pages in the guest, it will definitely cause problems.
+	 */
+	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
+		    "Requested more guest memory than address space allows.\n"
+		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
+		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
+		    vcpu_memory_bytes);
+
+	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.host_page_size == 0,
+		    "Guest memory size is not host page size aligned.");
+
+	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
+			      perf_test_args.guest_page_size;
+	guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
+
+#ifdef __s390x__
+	/* Align to 1M (segment size) */
+	guest_test_phys_mem &= ~((1 << 20) - 1);
+#endif
+
+	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
+
+	/* Add an extra memory slot for testing */
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+				    guest_test_phys_mem,
+				    TEST_MEM_SLOT_INDEX,
+				    guest_num_pages, 0);
+
+	/* Do mapping for the demand paging memory slot */
+	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages, 0);
+
+	ucall_init(vm, NULL);
+
+	return vm;
+}
+
+static void add_vcpus(struct kvm_vm *vm, int vcpus, uint64_t vcpu_memory_bytes)
+{
+	vm_paddr_t vcpu_gpa;
+	struct vcpu_args *vcpu_args;
+	int vcpu_id;
+
+	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+		vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
+
+		vm_vcpu_add_default(vm, vcpu_id, guest_code);
+
+#ifdef __x86_64__
+		vcpu_set_cpuid(vm, vcpu_id, kvm_get_supported_cpuid());
+#endif
+
+		vcpu_args->vcpu_id = vcpu_id;
+		vcpu_args->gva = guest_test_virt_mem +
+				 (vcpu_id * vcpu_memory_bytes);
+		vcpu_args->pages = vcpu_memory_bytes /
+				   perf_test_args.guest_page_size;
+
+		vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_memory_bytes);
+		pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
+			 vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_memory_bytes);
+	}
+}
+
+#endif /* SELFTEST_KVM_PERF_TEST_UTIL_H */
-- 
2.29.0.rc2.309.g374f81d7ae-goog



* [PATCH 2/5] KVM: selftests: Remove address rounding in guest code
  2020-10-27 23:37 [PATCH 0/5] Add a dirty logging performance test Ben Gardon
  2020-10-27 23:37 ` [PATCH 1/5] KVM: selftests: Factor code out of demand_paging_test Ben Gardon
@ 2020-10-27 23:37 ` Ben Gardon
  2020-11-02 21:25   ` Peter Xu
  2020-10-27 23:37 ` [PATCH 3/5] KVM: selftests: Simplify demand_paging_test with timespec_diff_now Ben Gardon
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Ben Gardon @ 2020-10-27 23:37 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Peter Xu, Andrew Jones, Peter Shier,
	Sean Christopherson, Thomas Huth, Peter Feiner, Ben Gardon

Rounding the address the guest writes down to a host page boundary
will only have an effect if the host page size is larger than the guest
page size, but even in that case the guest write would still land in the
same host page. There's no reason to round the address down, so remove
the rounding to simplify the demand paging test.

This series was tested by running the following invocations on an Intel
Skylake machine:
dirty_log_perf_test -b 20m -i 100 -v 64
dirty_log_perf_test -b 20g -i 5 -v 4
dirty_log_perf_test -b 4g -i 5 -v 32
demand_paging_test -b 20m -v 64
demand_paging_test -b 20g -v 4
demand_paging_test -b 4g -v 32
All behaved as expected.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 tools/testing/selftests/kvm/include/perf_test_util.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index f71f0858a1f29..838f946700f0c 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -72,7 +72,6 @@ static void guest_code(uint32_t vcpu_id)
 	for (i = 0; i < pages; i++) {
 		uint64_t addr = gva + (i * perf_test_args.guest_page_size);
 
-		addr &= ~(perf_test_args.host_page_size - 1);
 		*(uint64_t *)addr = 0x0123456789ABCDEF;
 	}
 
-- 
2.29.0.rc2.309.g374f81d7ae-goog


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 3/5] KVM: selftests: Simplify demand_paging_test with timespec_diff_now
  2020-10-27 23:37 [PATCH 0/5] Add a dirty logging performance test Ben Gardon
  2020-10-27 23:37 ` [PATCH 1/5] KVM: selftests: Factor code out of demand_paging_test Ben Gardon
  2020-10-27 23:37 ` [PATCH 2/5] KVM: selftests: Remove address rounding in guest code Ben Gardon
@ 2020-10-27 23:37 ` Ben Gardon
  2020-11-02 21:27   ` Peter Xu
  2020-10-27 23:37 ` [PATCH 4/5] KVM: selftests: Add wrfract to common guest code Ben Gardon
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Ben Gardon @ 2020-10-27 23:37 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Peter Xu, Andrew Jones, Peter Shier,
	Sean Christopherson, Thomas Huth, Peter Feiner, Ben Gardon

Add a helper function to get the current time and return the time since
a given start time. Use that function to simplify the timekeeping in the
demand paging test.

This series was tested by running the following invocations on an Intel
Skylake machine:
dirty_log_perf_test -b 20m -i 100 -v 64
dirty_log_perf_test -b 20g -i 5 -v 4
dirty_log_perf_test -b 4g -i 5 -v 32
demand_paging_test -b 20m -v 64
demand_paging_test -b 20g -v 4
demand_paging_test -b 4g -v 32
All behaved as expected.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 .../selftests/kvm/demand_paging_test.c        | 26 +++++++++----------
 .../testing/selftests/kvm/include/test_util.h |  1 +
 tools/testing/selftests/kvm/lib/test_util.c   | 15 +++++++++--
 3 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 4251e98ceb69f..7de6feb000760 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -50,7 +50,8 @@ static void *vcpu_worker(void *data)
 	int vcpu_id = vcpu_args->vcpu_id;
 	struct kvm_vm *vm = perf_test_args.vm;
 	struct kvm_run *run;
-	struct timespec start, end, ts_diff;
+	struct timespec start;
+	struct timespec ts_diff;
 
 	vcpu_args_set(vm, vcpu_id, 1, vcpu_id);
 	run = vcpu_state(vm, vcpu_id);
@@ -66,8 +67,7 @@ static void *vcpu_worker(void *data)
 			    exit_reason_str(run->exit_reason));
 	}
 
-	clock_gettime(CLOCK_MONOTONIC, &end);
-	ts_diff = timespec_sub(end, start);
+	ts_diff = timespec_diff_now(start);
 	PER_VCPU_DEBUG("vCPU %d execution time: %ld.%.9lds\n", vcpu_id,
 		       ts_diff.tv_sec, ts_diff.tv_nsec);
 
@@ -78,7 +78,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 {
 	pid_t tid;
 	struct timespec start;
-	struct timespec end;
+	struct timespec ts_diff;
 	struct uffdio_copy copy;
 	int r;
 
@@ -98,10 +98,10 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 		return r;
 	}
 
-	clock_gettime(CLOCK_MONOTONIC, &end);
+	ts_diff = timespec_diff_now(start);
 
 	PER_PAGE_DEBUG("UFFDIO_COPY %d \t%ld ns\n", tid,
-		       timespec_to_ns(timespec_sub(end, start)));
+		       timespec_to_ns(ts_diff));
 	PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n",
 		       perf_test_args.host_page_size, addr, tid);
 
@@ -123,7 +123,8 @@ static void *uffd_handler_thread_fn(void *arg)
 	int pipefd = uffd_args->pipefd;
 	useconds_t delay = uffd_args->delay;
 	int64_t pages = 0;
-	struct timespec start, end, ts_diff;
+	struct timespec start;
+	struct timespec ts_diff;
 
 	clock_gettime(CLOCK_MONOTONIC, &start);
 	while (!quit_uffd_thread) {
@@ -192,8 +193,7 @@ static void *uffd_handler_thread_fn(void *arg)
 		pages++;
 	}
 
-	clock_gettime(CLOCK_MONOTONIC, &end);
-	ts_diff = timespec_sub(end, start);
+	ts_diff = timespec_diff_now(start);
 	PER_VCPU_DEBUG("userfaulted %ld pages over %ld.%.9lds. (%f/sec)\n",
 		       pages, ts_diff.tv_sec, ts_diff.tv_nsec,
 		       pages / ((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0));
@@ -257,7 +257,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	pthread_t *vcpu_threads;
 	pthread_t *uffd_handler_threads = NULL;
 	struct uffd_handler_args *uffd_args = NULL;
-	struct timespec start, end, ts_diff;
+	struct timespec start;
+	struct timespec ts_diff;
 	int *pipefds = NULL;
 	struct kvm_vm *vm;
 	int vcpu_id;
@@ -335,9 +336,9 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 		PER_VCPU_DEBUG("Joined thread for vCPU %d\n", vcpu_id);
 	}
 
-	pr_info("All vCPU threads joined\n");
+	ts_diff = timespec_diff_now(start);
 
-	clock_gettime(CLOCK_MONOTONIC, &end);
+	pr_info("All vCPU threads joined\n");
 
 	if (use_uffd) {
 		char c;
@@ -351,7 +352,6 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 		}
 	}
 
-	ts_diff = timespec_sub(end, start);
 	pr_info("Total guest execution time: %ld.%.9lds\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 	pr_info("Overall demand paging rate: %f pgs/sec\n",
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 5eb01bf51b86f..1cc036ddb0c5e 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -64,5 +64,6 @@ int64_t timespec_to_ns(struct timespec ts);
 struct timespec timespec_add_ns(struct timespec ts, int64_t ns);
 struct timespec timespec_add(struct timespec ts1, struct timespec ts2);
 struct timespec timespec_sub(struct timespec ts1, struct timespec ts2);
+struct timespec timespec_diff_now(struct timespec start);
 
 #endif /* SELFTEST_KVM_TEST_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 689e97c27ee24..1a46c2c48c7cb 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -4,10 +4,13 @@
  *
  * Copyright (C) 2020, Google LLC.
  */
-#include <stdlib.h>
+
+#include <assert.h>
 #include <ctype.h>
 #include <limits.h>
-#include <assert.h>
+#include <stdlib.h>
+#include <time.h>
+
 #include "test_util.h"
 
 /*
@@ -81,6 +84,14 @@ struct timespec timespec_sub(struct timespec ts1, struct timespec ts2)
 	return timespec_add_ns((struct timespec){0}, ns1 - ns2);
 }
 
+struct timespec timespec_diff_now(struct timespec start)
+{
+	struct timespec end;
+
+	clock_gettime(CLOCK_MONOTONIC, &end);
+	return timespec_sub(end, start);
+}
+
 void print_skip(const char *fmt, ...)
 {
 	va_list ap;
-- 
2.29.0.rc2.309.g374f81d7ae-goog



* [PATCH 4/5] KVM: selftests: Add wrfract to common guest code
  2020-10-27 23:37 [PATCH 0/5] Add a dirty logging performance test Ben Gardon
                   ` (2 preceding siblings ...)
  2020-10-27 23:37 ` [PATCH 3/5] KVM: selftests: Simplify demand_paging_test with timespec_diff_now Ben Gardon
@ 2020-10-27 23:37 ` Ben Gardon
  2020-10-27 23:37 ` [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test Ben Gardon
  2020-11-06 12:48 ` [PATCH 0/5] Add a dirty logging performance test Paolo Bonzini
  5 siblings, 0 replies; 18+ messages in thread
From: Ben Gardon @ 2020-10-27 23:37 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Peter Xu, Andrew Jones, Peter Shier,
	Sean Christopherson, Thomas Huth, Peter Feiner, Ben Gardon

Wrfract will be used by the dirty logging perf test introduced later in
this series to dirty memory sparsely.

This series was tested by running the following invocations on an Intel
Skylake machine:
dirty_log_perf_test -b 20m -i 100 -v 64
dirty_log_perf_test -b 20g -i 5 -v 4
dirty_log_perf_test -b 4g -i 5 -v 32
demand_paging_test -b 20m -v 64
demand_paging_test -b 20g -v 4
demand_paging_test -b 4g -v 32
All behaved as expected.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 tools/testing/selftests/kvm/demand_paging_test.c     | 2 ++
 tools/testing/selftests/kvm/include/perf_test_util.h | 6 +++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 7de6feb000760..47defc65aedac 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -266,6 +266,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	vm = create_vm(mode, vcpus, vcpu_memory_bytes);
 
+	perf_test_args.wr_fract = 1;
+
 	guest_data_prototype = malloc(perf_test_args.host_page_size);
 	TEST_ASSERT(guest_data_prototype,
 		    "Failed to allocate buffer for guest data pattern");
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 838f946700f0c..1716300469c04 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -46,6 +46,7 @@ struct perf_test_args {
 	struct kvm_vm *vm;
 	uint64_t host_page_size;
 	uint64_t guest_page_size;
+	int wr_fract;
 
 	struct vcpu_args vcpu_args[MAX_VCPUS];
 };
@@ -72,7 +73,10 @@ static void guest_code(uint32_t vcpu_id)
 	for (i = 0; i < pages; i++) {
 		uint64_t addr = gva + (i * perf_test_args.guest_page_size);
 
-		*(uint64_t *)addr = 0x0123456789ABCDEF;
+		if (i % perf_test_args.wr_fract == 0)
+			*(uint64_t *)addr = 0x0123456789ABCDEF;
+		else
+			READ_ONCE(*(uint64_t *)addr);
 	}
 
 	GUEST_SYNC(1);
-- 
2.29.0.rc2.309.g374f81d7ae-goog



* [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test
  2020-10-27 23:37 [PATCH 0/5] Add a dirty logging performance test Ben Gardon
                   ` (3 preceding siblings ...)
  2020-10-27 23:37 ` [PATCH 4/5] KVM: selftests: Add wrfract to common guest code Ben Gardon
@ 2020-10-27 23:37 ` Ben Gardon
  2020-11-02 22:21   ` Peter Xu
  2020-11-06 12:48 ` [PATCH 0/5] Add a dirty logging performance test Paolo Bonzini
  5 siblings, 1 reply; 18+ messages in thread
From: Ben Gardon @ 2020-10-27 23:37 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Peter Xu, Andrew Jones, Peter Shier,
	Sean Christopherson, Thomas Huth, Peter Feiner, Ben Gardon

The dirty log perf test will time various dirty logging operations
(enabling dirty logging, dirtying memory, getting the dirty log,
clearing the dirty log, and disabling dirty logging) in order to
quantify dirty logging performance. This test can be used to inform
future performance improvements to KVM's dirty logging infrastructure.

This series was tested by running the following invocations on an Intel
Skylake machine:
dirty_log_perf_test -b 20m -i 100 -v 64
dirty_log_perf_test -b 20g -i 5 -v 4
dirty_log_perf_test -b 4g -i 5 -v 32
demand_paging_test -b 20m -v 64
demand_paging_test -b 20g -v 4
demand_paging_test -b 4g -v 32
All behaved as expected.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/dirty_log_perf_test.c       | 382 ++++++++++++++++++
 .../selftests/kvm/include/perf_test_util.h    |  18 +-
 .../testing/selftests/kvm/include/test_util.h |   1 +
 tools/testing/selftests/kvm/lib/test_util.c   |   7 +
 6 files changed, 402 insertions(+), 8 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/dirty_log_perf_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 307ceaadbbb99..d5dac5810d7ab 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -23,6 +23,7 @@
 /clear_dirty_log_test
 /demand_paging_test
 /dirty_log_test
+/dirty_log_perf_test
 /kvm_create_max_vcpus
 /set_memory_region_test
 /steal_time
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 7ebe71fbca534..6889cf5b3e72c 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -60,6 +60,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/user_msr_test
 TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
+TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
 TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
new file mode 100644
index 0000000000000..04604a26e5aea
--- /dev/null
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -0,0 +1,382 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KVM dirty page logging performance test
+ *
+ * Based on dirty_log_test.c
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ * Copyright (C) 2020, Google, Inc.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_name */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <time.h>
+#include <pthread.h>
+#include <linux/bitmap.h>
+#include <linux/bitops.h>
+
+#include "kvm_util.h"
+#include "perf_test_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+/* How many host loops to run by default (one KVM_GET_DIRTY_LOG for each loop) */
+#define TEST_HOST_LOOP_N		2UL
+
+#define DEFAULT_VCPU_MEMORY_BYTES (1UL << 30) /* 1G */
+
+/* Host variables */
+static bool host_quit;
+static uint64_t iteration;
+static uint64_t vcpu_last_completed_iteration[MAX_VCPUS];
+
+static void *vcpu_worker(void *data)
+{
+	int ret;
+	struct kvm_vm *vm = perf_test_args.vm;
+	uint64_t pages_count = 0;
+	struct kvm_run *run;
+	struct timespec start;
+	struct timespec ts_diff;
+	struct timespec total = (struct timespec){0};
+	struct timespec avg;
+	struct vcpu_args *vcpu_args = (struct vcpu_args *)data;
+	int vcpu_id = vcpu_args->vcpu_id;
+
+	vcpu_args_set(vm, vcpu_id, 1, vcpu_id);
+	run = vcpu_state(vm, vcpu_id);
+
+	while (!READ_ONCE(host_quit)) {
+		uint64_t current_iteration = READ_ONCE(iteration);
+
+		clock_gettime(CLOCK_MONOTONIC, &start);
+		ret = _vcpu_run(vm, vcpu_id);
+		ts_diff = timespec_diff_now(start);
+
+		TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
+		TEST_ASSERT(get_ucall(vm, vcpu_id, NULL) == UCALL_SYNC,
+			    "Invalid guest sync status: exit_reason=%s\n",
+			    exit_reason_str(run->exit_reason));
+
+		pr_debug("Got sync event from vCPU %d\n", vcpu_id);
+		vcpu_last_completed_iteration[vcpu_id] = current_iteration;
+		pr_debug("vCPU %d updated last completed iteration to %lu\n",
+			 vcpu_id, vcpu_last_completed_iteration[vcpu_id]);
+
+		if (current_iteration) {
+			pages_count += vcpu_args->pages;
+			total = timespec_add(total, ts_diff);
+			pr_debug("vCPU %d iteration %lu dirty memory time: %ld.%.9lds\n",
+				vcpu_id, current_iteration, ts_diff.tv_sec,
+				ts_diff.tv_nsec);
+		} else {
+			pr_debug("vCPU %d iteration %lu populate memory time: %ld.%.9lds\n",
+				vcpu_id, current_iteration, ts_diff.tv_sec,
+				ts_diff.tv_nsec);
+		}
+
+		while (current_iteration == READ_ONCE(iteration) &&
+		       !READ_ONCE(host_quit)) {}
+	}
+
+	avg = timespec_div(total, vcpu_last_completed_iteration[vcpu_id]);
+	pr_debug("\nvCPU %d dirtied 0x%lx pages over %lu iterations in %ld.%.9lds. (Avg %ld.%.9lds/iteration)\n",
+		vcpu_id, pages_count, vcpu_last_completed_iteration[vcpu_id],
+		total.tv_sec, total.tv_nsec, avg.tv_sec, avg.tv_nsec);
+
+	return NULL;
+}
+
+#ifdef USE_CLEAR_DIRTY_LOG
+static u64 dirty_log_manual_caps;
+#endif
+
+static void run_test(enum vm_guest_mode mode, unsigned long iterations,
+		     uint64_t phys_offset, int vcpus,
+		     uint64_t vcpu_memory_bytes, int wr_fract)
+{
+	pthread_t *vcpu_threads;
+	struct kvm_vm *vm;
+	unsigned long *bmap;
+	uint64_t guest_num_pages;
+	uint64_t host_num_pages;
+	int vcpu_id;
+	struct timespec start;
+	struct timespec ts_diff;
+	struct timespec get_dirty_log_total = (struct timespec){0};
+	struct timespec vcpu_dirty_total = (struct timespec){0};
+	struct timespec avg;
+#ifdef USE_CLEAR_DIRTY_LOG
+	struct kvm_enable_cap cap = {};
+	struct timespec clear_dirty_log_total = (struct timespec){0};
+#endif
+
+	vm = create_vm(mode, vcpus, vcpu_memory_bytes);
+
+	perf_test_args.wr_fract = wr_fract;
+
+	guest_num_pages = (vcpus * vcpu_memory_bytes) >> vm_get_page_shift(vm);
+	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
+	host_num_pages = vm_num_host_pages(mode, guest_num_pages);
+	bmap = bitmap_alloc(host_num_pages);
+
+#ifdef USE_CLEAR_DIRTY_LOG
+	cap.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2;
+	cap.args[0] = dirty_log_manual_caps;
+	vm_enable_cap(vm, &cap);
+#endif
+
+	vcpu_threads = malloc(vcpus * sizeof(*vcpu_threads));
+	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
+
+	add_vcpus(vm, vcpus, vcpu_memory_bytes);
+
+	sync_global_to_guest(vm, perf_test_args);
+
+	/* Start the iterations */
+	iteration = 0;
+	host_quit = false;
+
+	clock_gettime(CLOCK_MONOTONIC, &start);
+	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+		pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
+			       &perf_test_args.vcpu_args[vcpu_id]);
+	}
+
+	/* Allow the vCPU to populate memory */
+	pr_debug("Starting iteration %lu - Populating\n", iteration);
+	while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id]) != iteration)
+		pr_debug("Waiting for vcpu_last_completed_iteration == %lu\n",
+			iteration);
+
+	ts_diff = timespec_diff_now(start);
+	pr_info("Populate memory time: %ld.%.9lds\n",
+		ts_diff.tv_sec, ts_diff.tv_nsec);
+
+	/* Enable dirty logging */
+	clock_gettime(CLOCK_MONOTONIC, &start);
+	vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX,
+				KVM_MEM_LOG_DIRTY_PAGES);
+	ts_diff = timespec_diff_now(start);
+	pr_info("Enabling dirty logging time: %ld.%.9lds\n\n",
+		ts_diff.tv_sec, ts_diff.tv_nsec);
+
+	while (iteration < iterations) {
+		/*
+		 * Incrementing the iteration number will start the vCPUs
+		 * dirtying memory again.
+		 */
+		clock_gettime(CLOCK_MONOTONIC, &start);
+		iteration++;
+
+		pr_debug("Starting iteration %lu\n", iteration);
+		for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+			while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id]) != iteration)
+				pr_debug("Waiting for vCPU %d vcpu_last_completed_iteration == %lu\n",
+					 vcpu_id, iteration);
+		}
+
+		ts_diff = timespec_diff_now(start);
+		vcpu_dirty_total = timespec_add(vcpu_dirty_total, ts_diff);
+		pr_info("Iteration %lu dirty memory time: %ld.%.9lds\n",
+			iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
+
+		clock_gettime(CLOCK_MONOTONIC, &start);
+		kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap);
+
+		ts_diff = timespec_diff_now(start);
+		get_dirty_log_total = timespec_add(get_dirty_log_total,
+						   ts_diff);
+		pr_info("Iteration %lu get dirty log time: %ld.%.9lds\n",
+			iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
+
+#ifdef USE_CLEAR_DIRTY_LOG
+		clock_gettime(CLOCK_MONOTONIC, &start);
+		kvm_vm_clear_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap, 0,
+				       host_num_pages);
+
+		ts_diff = timespec_diff_now(start);
+		clear_dirty_log_total = timespec_add(clear_dirty_log_total,
+						     ts_diff);
+		pr_info("Iteration %lu clear dirty log time: %ld.%.9lds\n",
+			iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
+#endif
+	}
+
+	/* Tell the vCPU threads to quit */
+	host_quit = true;
+	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++)
+		pthread_join(vcpu_threads[vcpu_id], NULL);
+
+	/* Disable dirty logging */
+	clock_gettime(CLOCK_MONOTONIC, &start);
+	vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX, 0);
+	ts_diff = timespec_diff_now(start);
+	pr_info("Disabling dirty logging time: %ld.%.9lds\n",
+		ts_diff.tv_sec, ts_diff.tv_nsec);
+
+	avg = timespec_div(get_dirty_log_total, iterations);
+	pr_info("Get dirty log over %lu iterations took %ld.%.9lds. (Avg %ld.%.9lds/iteration)\n",
+		iterations, get_dirty_log_total.tv_sec,
+		get_dirty_log_total.tv_nsec, avg.tv_sec, avg.tv_nsec);
+
+#ifdef USE_CLEAR_DIRTY_LOG
+	avg = timespec_div(clear_dirty_log_total, iterations);
+	pr_info("Clear dirty log over %lu iterations took %ld.%.9lds. (Avg %ld.%.9lds/iteration)\n",
+		iterations, clear_dirty_log_total.tv_sec,
+		clear_dirty_log_total.tv_nsec, avg.tv_sec, avg.tv_nsec);
+#endif
+
+	free(bmap);
+	free(vcpu_threads);
+	ucall_uninit(vm);
+	kvm_vm_free(vm);
+}
+
+struct guest_mode {
+	bool supported;
+	bool enabled;
+};
+static struct guest_mode guest_modes[NUM_VM_MODES];
+
+#define guest_mode_init(mode, supported, enabled) ({ \
+	guest_modes[mode] = (struct guest_mode){ supported, enabled }; \
+})
+
+static void help(char *name)
+{
+	int i;
+
+	puts("");
+	printf("usage: %s [-h] [-i iterations] [-p offset] "
+	       "[-m mode] [-b vcpu bytes] [-v vcpus]\n", name);
+	puts("");
+	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
+	       TEST_HOST_LOOP_N);
+	printf(" -p: specify guest physical test memory offset\n"
+	       "     Warning: a low offset can conflict with the loaded test code.\n");
+	printf(" -m: specify the guest mode ID to test "
+	       "(default: test all supported modes)\n"
+	       "     This option may be used multiple times.\n"
+	       "     Guest mode IDs:\n");
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
+		       guest_modes[i].supported ? " (supported)" : "");
+	}
+	printf(" -b: specify the size of the memory region which should be\n"
+	       "     dirtied by each vCPU. e.g. 10M or 3G.\n"
+	       "     (default: 1G)\n");
+	printf(" -f: specify the fraction of pages which should be written to\n"
+	       "     as opposed to simply read, in the form\n"
+	       "     1/<fraction of pages to write>.\n"
+	       "     (default: 1 i.e. all pages are written to.)\n");
+	printf(" -v: specify the number of vCPUs to run.\n");
+	puts("");
+	exit(0);
+}
+
+int main(int argc, char *argv[])
+{
+	unsigned long iterations = TEST_HOST_LOOP_N;
+	uint64_t vcpu_memory_bytes = DEFAULT_VCPU_MEMORY_BYTES;
+	bool mode_selected = false;
+	uint64_t phys_offset = 0;
+	unsigned int mode;
+	int opt, i;
+	int wr_fract = 1;
+	int vcpus = 1;
+
+#ifdef USE_CLEAR_DIRTY_LOG
+	dirty_log_manual_caps =
+		kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
+	if (!dirty_log_manual_caps) {
+		print_skip("KVM_CLEAR_DIRTY_LOG not available");
+		exit(KSFT_SKIP);
+	}
+	dirty_log_manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
+				  KVM_DIRTY_LOG_INITIALLY_SET);
+#endif
+
+#ifdef __x86_64__
+	guest_mode_init(VM_MODE_PXXV48_4K, true, true);
+#endif
+#ifdef __aarch64__
+	guest_mode_init(VM_MODE_P40V48_4K, true, true);
+	guest_mode_init(VM_MODE_P40V48_64K, true, true);
+
+	{
+		unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
+
+		if (limit >= 52)
+			guest_mode_init(VM_MODE_P52V48_64K, true, true);
+		if (limit >= 48) {
+			guest_mode_init(VM_MODE_P48V48_4K, true, true);
+			guest_mode_init(VM_MODE_P48V48_64K, true, true);
+		}
+	}
+#endif
+#ifdef __s390x__
+	guest_mode_init(VM_MODE_P40V48_4K, true, true);
+#endif
+
+	while ((opt = getopt(argc, argv, "hi:p:m:b:f:v:")) != -1) {
+		switch (opt) {
+		case 'i':
+			iterations = strtol(optarg, NULL, 10);
+			break;
+		case 'p':
+			phys_offset = strtoull(optarg, NULL, 0);
+			break;
+		case 'm':
+			if (!mode_selected) {
+				for (i = 0; i < NUM_VM_MODES; ++i)
+					guest_modes[i].enabled = false;
+				mode_selected = true;
+			}
+			mode = strtoul(optarg, NULL, 10);
+			TEST_ASSERT(mode < NUM_VM_MODES,
+				    "Guest mode ID %d too big", mode);
+			guest_modes[mode].enabled = true;
+			break;
+		case 'b':
+			vcpu_memory_bytes = parse_size(optarg);
+			break;
+		case 'f':
+			wr_fract = atoi(optarg);
+			TEST_ASSERT(wr_fract >= 1,
+				    "Write fraction cannot be less than one");
+			break;
+		case 'v':
+			vcpus = atoi(optarg);
+			TEST_ASSERT(vcpus > 0,
+				    "Must have a positive number of vCPUs");
+			TEST_ASSERT(vcpus <= MAX_VCPUS,
+				    "This test does not currently support\n"
+				    "more than %d vCPUs.", MAX_VCPUS);
+			break;
+		case 'h':
+		default:
+			help(argv[0]);
+			break;
+		}
+	}
+
+	TEST_ASSERT(iterations > 2, "Iterations must be greater than two");
+
+	pr_info("Test iterations: %"PRIu64"\n",	iterations);
+
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		if (!guest_modes[i].enabled)
+			continue;
+		TEST_ASSERT(guest_modes[i].supported,
+			    "Guest mode ID %d (%s) not supported.",
+			    i, vm_guest_mode_string(i));
+		run_test(i, iterations, phys_offset, vcpus, vcpu_memory_bytes,
+			 wr_fract);
+	}
+
+	return 0;
+}
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 1716300469c04..87c4844a3df32 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -70,16 +70,18 @@ static void guest_code(uint32_t vcpu_id)
 	gva = vcpu_args->gva;
 	pages = vcpu_args->pages;
 
-	for (i = 0; i < pages; i++) {
-		uint64_t addr = gva + (i * perf_test_args.guest_page_size);
+	while (true) {
+		for (i = 0; i < pages; i++) {
+			uint64_t addr = gva + (i * perf_test_args.guest_page_size);
 
-		if (i % perf_test_args.wr_fract == 0)
-			*(uint64_t *)addr = 0x0123456789ABCDEF;
-		else
-			READ_ONCE(*(uint64_t *)addr);
-	}
+			if (i % perf_test_args.wr_fract == 0)
+				*(uint64_t *)addr = 0x0123456789ABCDEF;
+			else
+				READ_ONCE(*(uint64_t *)addr);
+		}
 
-	GUEST_SYNC(1);
+		GUEST_SYNC(1);
+	}
 }
 
 static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 1cc036ddb0c5e..ffffa560436ba 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -65,5 +65,6 @@ struct timespec timespec_add_ns(struct timespec ts, int64_t ns);
 struct timespec timespec_add(struct timespec ts1, struct timespec ts2);
 struct timespec timespec_sub(struct timespec ts1, struct timespec ts2);
 struct timespec timespec_diff_now(struct timespec start);
+struct timespec timespec_div(struct timespec ts, int divisor);
 
 #endif /* SELFTEST_KVM_TEST_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 1a46c2c48c7cb..8e04c0b1608e6 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -92,6 +92,13 @@ struct timespec timespec_diff_now(struct timespec start)
 	return timespec_sub(end, start);
 }
 
+struct timespec timespec_div(struct timespec ts, int divisor)
+{
+	int64_t ns = timespec_to_ns(ts) / divisor;
+
+	return timespec_add_ns((struct timespec){0}, ns);
+}
+
 void print_skip(const char *fmt, ...)
 {
 	va_list ap;
-- 
2.29.0.rc2.309.g374f81d7ae-goog



* Re: [PATCH 1/5] KVM: selftests: Factor code out of demand_paging_test
  2020-10-27 23:37 ` [PATCH 1/5] KVM: selftests: Factor code out of demand_paging_test Ben Gardon
@ 2020-11-02 21:23   ` Peter Xu
  2020-11-02 22:57     ` Ben Gardon
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Xu @ 2020-11-02 21:23 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Tue, Oct 27, 2020 at 04:37:29PM -0700, Ben Gardon wrote:
> Much of the code in demand_paging_test can be reused by other, similar
> multi-vCPU-memory-touching-performance-tests. Factor that common code
> out for reuse.
> 
> No functional change expected.

Is there explicit reason to put the common code in a header rather than
perf_test_util.c?  No strong opinion on this especially this is test code,
just curious.  Since iiuc .c file is still preferred for things like this.

> 
> This series was tested by running the following invocations on an Intel
> Skylake machine:
> dirty_log_perf_test -b 20m -i 100 -v 64
> dirty_log_perf_test -b 20g -i 5 -v 4
> dirty_log_perf_test -b 4g -i 5 -v 32
> demand_paging_test -b 20m -v 64
> demand_paging_test -b 20g -v 4
> demand_paging_test -b 4g -v 32
> All behaved as expected.

May move this chunk to the cover letter to avoid keeping it in every commit
(btw, you mentioned "this series" but I feel like you meant "you verified that
after applying each of the commits").

Thanks,

-- 
Peter Xu



* Re: [PATCH 2/5] KVM: selftests: Remove address rounding in guest code
  2020-10-27 23:37 ` [PATCH 2/5] KVM: selftests: Remove address rounding in guest code Ben Gardon
@ 2020-11-02 21:25   ` Peter Xu
  0 siblings, 0 replies; 18+ messages in thread
From: Peter Xu @ 2020-11-02 21:25 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Tue, Oct 27, 2020 at 04:37:30PM -0700, Ben Gardon wrote:
> Rounding the address the guest writes to a host page boundary
> will only have an effect if the host page size is larger than the guest
> page size, but in that case the guest write would still go to the same
> host page. There's no reason to round the address down, so remove the
> rounding to simplify the demand paging test.
> 
> This series was tested by running the following invocations on an Intel
> Skylake machine:
> dirty_log_perf_test -b 20m -i 100 -v 64
> dirty_log_perf_test -b 20g -i 5 -v 4
> dirty_log_perf_test -b 4g -i 5 -v 32
> demand_paging_test -b 20m -v 64
> demand_paging_test -b 20g -v 4
> demand_paging_test -b 4g -v 32
> All behaved as expected.
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>

Nit: would be better to be before the code movement.  In all cases:

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



* Re: [PATCH 3/5] KVM: selftests: Simplify demand_paging_test with timespec_diff_now
  2020-10-27 23:37 ` [PATCH 3/5] KVM: selftests: Simplify demand_paging_test with timespec_diff_now Ben Gardon
@ 2020-11-02 21:27   ` Peter Xu
  2020-11-02 22:59     ` Ben Gardon
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Xu @ 2020-11-02 21:27 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Tue, Oct 27, 2020 at 04:37:31PM -0700, Ben Gardon wrote:
> Add a helper function to get the current time and return the time since
> a given start time. Use that function to simplify the timekeeping in the
> demand paging test.

Nit: timespec_diff_now() sounds less charming than timespec_elapsed() to
me... "diff_now" is longer, and it also does not show positive/negative of the
results (which in this case should always be end-start). "elapsed" should
always mean something positive.

With/Without the change above:

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



* Re: [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test
  2020-10-27 23:37 ` [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test Ben Gardon
@ 2020-11-02 22:21   ` Peter Xu
  2020-11-02 23:56     ` Ben Gardon
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Xu @ 2020-11-02 22:21 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Tue, Oct 27, 2020 at 04:37:33PM -0700, Ben Gardon wrote:
> > The dirty log perf test will time various dirty logging operations
> (enabling dirty logging, dirtying memory, getting the dirty log,
> clearing the dirty log, and disabling dirty logging) in order to
> quantify dirty logging performance. This test can be used to inform
> future performance improvements to KVM's dirty logging infrastructure.

One thing to mention is that there're a few patches in the kvm dirty ring
series that reworked the dirty log test quite a bit (to add similar test for
dirty ring).  For example:

  https://lore.kernel.org/kvm/20201023183358.50607-11-peterx@redhat.com/

Just a FYI if we're going to use separate test programs.  Merging this tests
should benefit in many ways, of course (e.g., dirty ring may directly runnable
with the perf tests too; so we can manually enable this "perf mode" as a new
parameter in dirty_log_test, if possible?), however I don't know how hard -
maybe there's some good reason to keep them separate...

[...]

> +static void run_test(enum vm_guest_mode mode, unsigned long iterations,
> +		     uint64_t phys_offset, int vcpus,
> +		     uint64_t vcpu_memory_bytes, int wr_fract)
> +{

[...]

> +	/* Start the iterations */
> +	iteration = 0;
> +	host_quit = false;
> +
> +	clock_gettime(CLOCK_MONOTONIC, &start);
> +	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> +		pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
> +			       &perf_test_args.vcpu_args[vcpu_id]);
> +	}
> +
> +	/* Allow the vCPU to populate memory */
> +	pr_debug("Starting iteration %lu - Populating\n", iteration);
> +	while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id]) != iteration)
> +		pr_debug("Waiting for vcpu_last_completed_iteration == %lu\n",
> +			iteration);

Isn't array vcpu_last_completed_iteration[] initialized to all zeros?  If so, I
feel like this "while" won't run as expected to wait for populating mem.

The flooding pr_debug() seems a bit scary too if the mem size is huge..  How
about a pr_debug() after the loop (so if we don't see that it means it hanged)?

(There's another similar pr_debug() after this point too within a loop)

Thanks,

-- 
Peter Xu



* Re: [PATCH 1/5] KVM: selftests: Factor code out of demand_paging_test
  2020-11-02 21:23   ` Peter Xu
@ 2020-11-02 22:57     ` Ben Gardon
  0 siblings, 0 replies; 18+ messages in thread
From: Ben Gardon @ 2020-11-02 22:57 UTC (permalink / raw)
  To: Peter Xu
  Cc: LKML, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Mon, Nov 2, 2020 at 1:24 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Tue, Oct 27, 2020 at 04:37:29PM -0700, Ben Gardon wrote:
> > Much of the code in demand_paging_test can be reused by other, similar
> > multi-vCPU-memory-touching-performance-tests. Factor that common code
> > out for reuse.
> >
> > No functional change expected.
>
> Is there explicit reason to put the common code in a header rather than
> perf_test_util.c?  No strong opinion on this especially this is test code,
> just curious.  Since iiuc .c file is still preferred for things like this.
>

I don't have a compelling reason not to put the common code in a .c
file, and I agree that would probably be a better place for it. I'll
do that in a v2.

> >
> > This series was tested by running the following invocations on an Intel
> > Skylake machine:
> > dirty_log_perf_test -b 20m -i 100 -v 64
> > dirty_log_perf_test -b 20g -i 5 -v 4
> > dirty_log_perf_test -b 4g -i 5 -v 32
> > demand_paging_test -b 20m -v 64
> > demand_paging_test -b 20g -v 4
> > demand_paging_test -b 4g -v 32
> > All behaved as expected.
>
> May move this chunk to the cover letter to avoid keeping it in every commit
> (btw, you mentioned "this series" but I feel like you meant "you verified that
> after applying each of the commits").

I can move the testing description to the cover letter. I can only
definitively say I ran those tests at the last commit in the series,
so I'll amend the description to specify that the commits were tested
all together and not one by one.

>
> Thanks,
>
> --
> Peter Xu
>


* Re: [PATCH 3/5] KVM: selftests: Simplify demand_paging_test with timespec_diff_now
  2020-11-02 21:27   ` Peter Xu
@ 2020-11-02 22:59     ` Ben Gardon
  0 siblings, 0 replies; 18+ messages in thread
From: Ben Gardon @ 2020-11-02 22:59 UTC (permalink / raw)
  To: Peter Xu
  Cc: LKML, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Mon, Nov 2, 2020 at 1:27 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Tue, Oct 27, 2020 at 04:37:31PM -0700, Ben Gardon wrote:
> > Add a helper function to get the current time and return the time since
> > a given start time. Use that function to simplify the timekeeping in the
> > demand paging test.
>
> Nit: timespec_diff_now() sounds less charming than timespec_elapsed() to
> me... "diff_now" is longer, and it also does not show positive/negative of the
> results (which in this case should always be end-start). "elapsed" should
> always mean something positive.

That's a great suggestion and much clearer. I'll make that change in v2.

>
> With/Without the change above:
>
> Reviewed-by: Peter Xu <peterx@redhat.com>
>
> --
> Peter Xu
>


* Re: [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test
  2020-11-02 22:21   ` Peter Xu
@ 2020-11-02 23:56     ` Ben Gardon
  2020-11-03  1:12       ` Peter Xu
  0 siblings, 1 reply; 18+ messages in thread
From: Ben Gardon @ 2020-11-02 23:56 UTC (permalink / raw)
  To: Peter Xu
  Cc: LKML, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Mon, Nov 2, 2020 at 2:21 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Tue, Oct 27, 2020 at 04:37:33PM -0700, Ben Gardon wrote:
> > > The dirty log perf test will time various dirty logging operations
> > (enabling dirty logging, dirtying memory, getting the dirty log,
> > clearing the dirty log, and disabling dirty logging) in order to
> > quantify dirty logging performance. This test can be used to inform
> > future performance improvements to KVM's dirty logging infrastructure.
>
> One thing to mention is that there're a few patches in the kvm dirty ring
> series that reworked the dirty log test quite a bit (to add similar test for
> dirty ring).  For example:
>
>   https://lore.kernel.org/kvm/20201023183358.50607-11-peterx@redhat.com/
>
> Just a FYI if we're going to use separate test programs.  Merging this tests
> should benefit in many ways, of course (e.g., dirty ring may directly runnable
> with the perf tests too; so we can manually enable this "perf mode" as a new
> parameter in dirty_log_test, if possible?), however I don't know how hard -
> maybe there's some good reason to keep them separate...

Absolutely, we definitely need a performance test for both modes. I'll
take a look at the patch you linked and see what it would take to
support dirty ring in this test.
Do you think that should be done in this series, or would it make
sense to add as a follow up?

>
> [...]
>
> > +static void run_test(enum vm_guest_mode mode, unsigned long iterations,
> > +                  uint64_t phys_offset, int vcpus,
> > +                  uint64_t vcpu_memory_bytes, int wr_fract)
> > +{
>
> [...]
>
> > +     /* Start the iterations */
> > +     iteration = 0;
> > +     host_quit = false;
> > +
> > +     clock_gettime(CLOCK_MONOTONIC, &start);
> > +     for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> > +             pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
> > +                            &perf_test_args.vcpu_args[vcpu_id]);
> > +     }
> > +
> > +     /* Allow the vCPU to populate memory */
> > +     pr_debug("Starting iteration %lu - Populating\n", iteration);
> > +     while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id]) != iteration)
> > +             pr_debug("Waiting for vcpu_last_completed_iteration == %lu\n",
> > +                     iteration);
>
> Isn't array vcpu_last_completed_iteration[] initialized to all zeros?  If so, I
> feel like this "while" won't run as expected to wait for populating mem.

I think you are totally right. The array should be initialized to -1,
which I realize isn't representable in a uint without wrapping, and
relying on unsigned overflow is bad, so the array should be converted to
ints too.
I suppose I didn't catch this because it would just make the
populating pass 0 look really short and pass 1 really long. I remember
seeing that behavior but not realizing that it was caused by a test
bug. I will correct this, thank you for pointing that out.

>
> The flooding pr_debug() seems a bit scary too if the mem size is huge..  How
> about a pr_debug() after the loop (so if we don't see that it means it hanged)?

I don't think the number of messages on pr_debug will be proportional
to the size of memory, but rather the product of iterations and vCPUs.
That said, that's still a lot of messages.
My assumption was that if you've gone to the trouble to turn on debug
logging, it's easier to comment log lines out than add them, but I'm
also happy to just move this to a single message after the loop.

>
> (There's another similar pr_debug() after this point too within a loop)
>
> Thanks,
>
> --
> Peter Xu
>


* Re: [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test
  2020-11-02 23:56     ` Ben Gardon
@ 2020-11-03  1:12       ` Peter Xu
  2020-11-03 22:17         ` Ben Gardon
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Xu @ 2020-11-03  1:12 UTC (permalink / raw)
  To: Ben Gardon
  Cc: LKML, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Mon, Nov 02, 2020 at 03:56:05PM -0800, Ben Gardon wrote:
> On Mon, Nov 2, 2020 at 2:21 PM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Tue, Oct 27, 2020 at 04:37:33PM -0700, Ben Gardon wrote:
> > > The dirty log perf test will time various dirty logging operations
> > > (enabling dirty logging, dirtying memory, getting the dirty log,
> > > clearing the dirty log, and disabling dirty logging) in order to
> > > quantify dirty logging performance. This test can be used to inform
> > > future performance improvements to KVM's dirty logging infrastructure.
> >
> > One thing to mention is that there're a few patches in the kvm dirty ring
> > series that reworked the dirty log test quite a bit (to add similar test for
> > dirty ring).  For example:
> >
> >   https://lore.kernel.org/kvm/20201023183358.50607-11-peterx@redhat.com/
> >
> > Just a FYI if we're going to use separate test programs.  Merging this tests
> > should benefit in many ways, of course (e.g., dirty ring may directly runnable
> > with the perf tests too; so we can manually enable this "perf mode" as a new
> > parameter in dirty_log_test, if possible?), however I don't know how hard -
> > maybe there's some good reason to keep them separate...
> 
> Absolutely, we definitely need a performance test for both modes. I'll
> take a look at the patch you linked and see what it would take to
> support dirty ring in this test.

That would be highly appreciated.

> Do you think that should be done in this series, or would it make
> sense to add as a follow up?

To me, I slightly lean toward building upon those patches, since we could
potentially share quite a bit of code there (e.g., the clear dirty log
cleanup seems necessary; otherwise it's not easy to add the dirty ring tests
anyway).  But the current one is still OK to me, at least as an initial
version - we should always be more tolerant with test cases, shouldn't
we? :)

So maybe we can wait for a 3rd opinion before you change the direction.

> 
> >
> > [...]
> >
> > > +static void run_test(enum vm_guest_mode mode, unsigned long iterations,
> > > +                  uint64_t phys_offset, int vcpus,
> > > +                  uint64_t vcpu_memory_bytes, int wr_fract)
> > > +{
> >
> > [...]
> >
> > > +     /* Start the iterations */
> > > +     iteration = 0;
> > > +     host_quit = false;
> > > +
> > > +     clock_gettime(CLOCK_MONOTONIC, &start);
> > > +     for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> > > +             pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
> > > +                            &perf_test_args.vcpu_args[vcpu_id]);
> > > +     }
> > > +
> > > +     /* Allow the vCPU to populate memory */
> > > +     pr_debug("Starting iteration %lu - Populating\n", iteration);
> > > +     while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id]) != iteration)
> > > +             pr_debug("Waiting for vcpu_last_completed_iteration == %lu\n",
> > > +                     iteration);
> >
> > Isn't array vcpu_last_completed_iteration[] initialized to all zeros?  If so, I
> > feel like this "while" won't run as expected to wait for populating mem.
> 
> I think you are totally right. The array should be initialized to -1,
> which I realize isn't a uint and unsigned integer overflow is bad, so
> the array should be converted to ints too.
> I suppose I didn't catch this because it would just make the
> populating pass 0 look really short and pass 1 really long. I remember
> seeing that behavior but not realizing that it was caused by a test
> bug. I will correct this, thank you for pointing that out.
> 
> >
> > The flooding pr_debug() seems a bit scary too if the mem size is huge..  How
> > about a pr_debug() after the loop (so if we don't see that it means it hanged)?
> 
> I don't think the number of messages on pr_debug will be proportional
> to the size of memory, but rather the product of iterations and vCPUs.
> That said, that's still a lot of messages.

The guest code dirties all pages, and that process is proportional to the size
of memory, no?

Btw since you mentioned vcpus - I also feel like the above chunk should be
put into the for loop above...

> My assumption was that if you've gone to the trouble to turn on debug
> logging, it's easier to comment log lines out than add them, but I'm
> also happy to just move this to a single message after the loop.

Yah that's subjective too - feel free to keep whatever you prefer.  In all
cases, hopefully I won't even need to enable pr_debug at all. :)

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test
  2020-11-03  1:12       ` Peter Xu
@ 2020-11-03 22:17         ` Ben Gardon
  2020-11-03 22:27           ` Peter Xu
  0 siblings, 1 reply; 18+ messages in thread
From: Ben Gardon @ 2020-11-03 22:17 UTC (permalink / raw)
  To: Peter Xu
  Cc: LKML, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Mon, Nov 2, 2020 at 5:12 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Mon, Nov 02, 2020 at 03:56:05PM -0800, Ben Gardon wrote:
> > On Mon, Nov 2, 2020 at 2:21 PM Peter Xu <peterx@redhat.com> wrote:
> > >
> > > On Tue, Oct 27, 2020 at 04:37:33PM -0700, Ben Gardon wrote:
> > > > The dirty log perf test will time various dirty logging operations
> > > > (enabling dirty logging, dirtying memory, getting the dirty log,
> > > > clearing the dirty log, and disabling dirty logging) in order to
> > > > quantify dirty logging performance. This test can be used to inform
> > > > future performance improvements to KVM's dirty logging infrastructure.
> > >
> > > One thing to mention is that there are a few patches in the kvm dirty ring
> > > series that reworked the dirty log test quite a bit (to add a similar test
> > > for dirty ring).  For example:
> > >
> > >   https://lore.kernel.org/kvm/20201023183358.50607-11-peterx@redhat.com/
> > >
> > > Just an FYI if we're going to use separate test programs.  Merging these
> > > tests should benefit in many ways, of course (e.g., dirty ring may be
> > > directly runnable with the perf tests too; so we could manually enable
> > > this "perf mode" as a new parameter in dirty_log_test, if possible?) -
> > > however I don't know how hard that would be; maybe there's some good
> > > reason to keep them separate...
> >
> > Absolutely, we definitely need a performance test for both modes. I'll
> > take a look at the patch you linked and see what it would take to
> > support dirty ring in this test.
>
> That would be highly appreciated.
>
> > Do you think that should be done in this series, or would it make
> > sense to add as a follow up?
>
> To me, I slightly lean toward building upon those patches, since we could
> potentially share quite a bit of code there (e.g., the clear dirty log
> cleanup seems necessary; otherwise it's not easy to add the dirty ring
> tests anyway).  But the current one is still OK to me, at least as an
> initial version - we should always be more tolerant with test cases,
> shouldn't we? :)
>
> So maybe we can wait for a 3rd opinion before you change the direction.

I took a look at your patches for dirty ring and dirty logging modes
and thought about this some more.
I think your patch to merge the get and clear dirty log tests is
great, and I can try to include it and build on it in my series as
well if desired. I don't think it would be hard to use the same mode
approach in the dirty log perf test. That said, I think it would be
easier to keep the functional test (dirty_log_test,
clear_dirty_log_test) separate from the performance test because the
dirty log validation is extra time and complexity not needed in the
dirty log perf test. I did try building them in the same test
initially, but it was really ugly. Perhaps a future refactoring could
merge them better.

>
> >
> > >
> > > [...]
> > >
> > > > +static void run_test(enum vm_guest_mode mode, unsigned long iterations,
> > > > +                  uint64_t phys_offset, int vcpus,
> > > > +                  uint64_t vcpu_memory_bytes, int wr_fract)
> > > > +{
> > >
> > > [...]
> > >
> > > > +     /* Start the iterations */
> > > > +     iteration = 0;
> > > > +     host_quit = false;
> > > > +
> > > > +     clock_gettime(CLOCK_MONOTONIC, &start);
> > > > +     for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> > > > +             pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
> > > > +                            &perf_test_args.vcpu_args[vcpu_id]);
> > > > +     }
> > > > +
> > > > +     /* Allow the vCPU to populate memory */
> > > > +     pr_debug("Starting iteration %lu - Populating\n", iteration);
> > > > +     while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id]) != iteration)
> > > > +             pr_debug("Waiting for vcpu_last_completed_iteration == %lu\n",
> > > > +                     iteration);
> > >
> > > Isn't array vcpu_last_completed_iteration[] initialized to all zeros?  If so, I
> > > feel like this "while" won't run as expected to wait for populating mem.
> >
> > I think you are totally right. The array should be initialized to -1,
> > which I realize isn't a uint and unsigned integer overflow is bad, so
> > the array should be converted to ints too.
> > I suppose I didn't catch this because it would just make the
> > populating pass 0 look really short and pass 1 really long. I remember
> > seeing that behavior but not realizing that it was caused by a test
> > bug. I will correct this, thank you for pointing that out.
> >
> > >
> > > The flooding pr_debug() seems a bit scary too if the mem size is huge..  How
> > > about a pr_debug() after the loop (so if we don't see that it means it hanged)?
> >
> > I don't think the number of messages on pr_debug will be proportional
> > to the size of memory, but rather the product of iterations and vCPUs.
> > That said, that's still a lot of messages.
>
> The guest code dirties all pages, and that process is proportional to the size
> of memory, no?
>
> Btw since you mentioned vcpus - I also feel like above chunk should be put into
> the for loop above...

Ooof I misread my code. You're totally right. I'll fix that by
removing the print there.

>
> > My assumption was that if you've gone to the trouble to turn on debug
> > logging, it's easier to comment log lines out than add them, but I'm
> > also happy to just move this to a single message after the loop.
>
> Yah that's subjective too - feel free to keep whatever you prefer.  In all
> cases, hopefully I won't even need to enable pr_debug at all. :)
>
> --
> Peter Xu
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test
  2020-11-03 22:17         ` Ben Gardon
@ 2020-11-03 22:27           ` Peter Xu
  0 siblings, 0 replies; 18+ messages in thread
From: Peter Xu @ 2020-11-03 22:27 UTC (permalink / raw)
  To: Ben Gardon
  Cc: LKML, kvm, linux-kselftest, Paolo Bonzini, Andrew Jones,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Tue, Nov 03, 2020 at 02:17:53PM -0800, Ben Gardon wrote:
> On Mon, Nov 2, 2020 at 5:12 PM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Mon, Nov 02, 2020 at 03:56:05PM -0800, Ben Gardon wrote:
> > > On Mon, Nov 2, 2020 at 2:21 PM Peter Xu <peterx@redhat.com> wrote:
> > > >
> > > > On Tue, Oct 27, 2020 at 04:37:33PM -0700, Ben Gardon wrote:
> > > > > The dirty log perf test will time various dirty logging operations
> > > > > (enabling dirty logging, dirtying memory, getting the dirty log,
> > > > > clearing the dirty log, and disabling dirty logging) in order to
> > > > > quantify dirty logging performance. This test can be used to inform
> > > > > future performance improvements to KVM's dirty logging infrastructure.
> > > >
> > > > One thing to mention is that there are a few patches in the kvm dirty ring
> > > > series that reworked the dirty log test quite a bit (to add a similar test
> > > > for dirty ring).  For example:
> > > >
> > > >   https://lore.kernel.org/kvm/20201023183358.50607-11-peterx@redhat.com/
> > > >
> > > > Just an FYI if we're going to use separate test programs.  Merging these
> > > > tests should benefit in many ways, of course (e.g., dirty ring may be
> > > > directly runnable with the perf tests too; so we could manually enable
> > > > this "perf mode" as a new parameter in dirty_log_test, if possible?) -
> > > > however I don't know how hard that would be; maybe there's some good
> > > > reason to keep them separate...
> > >
> > > Absolutely, we definitely need a performance test for both modes. I'll
> > > take a look at the patch you linked and see what it would take to
> > > support dirty ring in this test.
> >
> > That would be highly appreciated.
> >
> > > Do you think that should be done in this series, or would it make
> > > sense to add as a follow up?
> >
> > To me, I slightly lean toward building upon those patches, since we could
> > potentially share quite a bit of code there (e.g., the clear dirty log
> > cleanup seems necessary; otherwise it's not easy to add the dirty ring
> > tests anyway).  But the current one is still OK to me, at least as an
> > initial version - we should always be more tolerant with test cases,
> > shouldn't we? :)
> >
> > So maybe we can wait for a 3rd opinion before you change the direction.
> 
> I took a look at your patches for dirty ring and dirty logging modes
> and thought about this some more.
> I think your patch to merge the get and clear dirty log tests is
> great, and I can try to include it and build on it in my series as
> well if desired. I don't think it would be hard to use the same mode
> approach in the dirty log perf test. That said, I think it would be
> easier to keep the functional test (dirty_log_test,
> clear_dirty_log_test) separate from the performance test because the
> dirty log validation is extra time and complexity not needed in the
> dirty log perf test. I did try building them in the same test
> initially, but it was really ugly. Perhaps a future refactoring could
> merge them better.

We can conditionally bypass the validation part.  Let's keep it separate for
now - which is totally fine by me.  Actually I also don't want the dirty ring
series to block your series, since I still don't know when it'll land.  That
would be an unnecessary dependency.  Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/5] Add a dirty logging performance test
  2020-10-27 23:37 [PATCH 0/5] Add a dirty logging performance test Ben Gardon
                   ` (4 preceding siblings ...)
  2020-10-27 23:37 ` [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test Ben Gardon
@ 2020-11-06 12:48 ` Paolo Bonzini
  2020-11-09  9:21   ` Andrew Jones
  5 siblings, 1 reply; 18+ messages in thread
From: Paolo Bonzini @ 2020-11-06 12:48 UTC (permalink / raw)
  To: Ben Gardon, linux-kernel, kvm, linux-kselftest
  Cc: Peter Xu, Andrew Jones, Peter Shier, Sean Christopherson,
	Thomas Huth, Peter Feiner

On 28/10/20 00:37, Ben Gardon wrote:
> Currently KVM lacks a simple, userspace agnostic, performance benchmark for
> dirty logging. Such a benchmark will be beneficial for ensuring that dirty
> logging performance does not regress, and to give a common baseline for
> validating performance improvements. The dirty log perf test introduced in
> this series builds on aspects of the existing demand paging perf test and
> provides time-based performance metrics for enabling and disabling dirty
> logging, getting the dirty log, and dirtying memory.
> 
> While the test currently only has a build target for x86, I expect it will
> work on, or be easily modified to support other architectures.
> 
> Ben Gardon (5):
>    KVM: selftests: Factor code out of demand_paging_test
>    KVM: selftests: Remove address rounding in guest code
>    KVM: selftests: Simplify demand_paging_test with timespec_diff_now
>    KVM: selftests: Add wrfract to common guest code
>    KVM: selftests: Introduce the dirty log perf test
> 
>   tools/testing/selftests/kvm/.gitignore        |   1 +
>   tools/testing/selftests/kvm/Makefile          |   1 +
>   .../selftests/kvm/demand_paging_test.c        | 230 ++---------
>   .../selftests/kvm/dirty_log_perf_test.c       | 382 ++++++++++++++++++
>   .../selftests/kvm/include/perf_test_util.h    | 192 +++++++++
>   .../testing/selftests/kvm/include/test_util.h |   2 +
>   tools/testing/selftests/kvm/lib/test_util.c   |  22 +-
>   7 files changed, 635 insertions(+), 195 deletions(-)
>   create mode 100644 tools/testing/selftests/kvm/dirty_log_perf_test.c
>   create mode 100644 tools/testing/selftests/kvm/include/perf_test_util.h
> 

Queued, thanks.

Paolo


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/5] Add a dirty logging performance test
  2020-11-06 12:48 ` [PATCH 0/5] Add a dirty logging performance test Paolo Bonzini
@ 2020-11-09  9:21   ` Andrew Jones
  0 siblings, 0 replies; 18+ messages in thread
From: Andrew Jones @ 2020-11-09  9:21 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Ben Gardon, linux-kernel, kvm, linux-kselftest, Peter Xu,
	Peter Shier, Sean Christopherson, Thomas Huth, Peter Feiner

On Fri, Nov 06, 2020 at 01:48:29PM +0100, Paolo Bonzini wrote:
> On 28/10/20 00:37, Ben Gardon wrote:
> > Currently KVM lacks a simple, userspace agnostic, performance benchmark for
> > dirty logging. Such a benchmark will be beneficial for ensuring that dirty
> > logging performance does not regress, and to give a common baseline for
> > validating performance improvements. The dirty log perf test introduced in
> > this series builds on aspects of the existing demand paging perf test and
> > provides time-based performance metrics for enabling and disabling dirty
> > logging, getting the dirty log, and dirtying memory.
> > 
> > While the test currently only has a build target for x86, I expect it will
> > work on, or be easily modified to support other architectures.
> > 
> > Ben Gardon (5):
> >    KVM: selftests: Factor code out of demand_paging_test
> >    KVM: selftests: Remove address rounding in guest code
> >    KVM: selftests: Simplify demand_paging_test with timespec_diff_now
> >    KVM: selftests: Add wrfract to common guest code
> >    KVM: selftests: Introduce the dirty log perf test
> > 
> >   tools/testing/selftests/kvm/.gitignore        |   1 +
> >   tools/testing/selftests/kvm/Makefile          |   1 +
> >   .../selftests/kvm/demand_paging_test.c        | 230 ++---------
> >   .../selftests/kvm/dirty_log_perf_test.c       | 382 ++++++++++++++++++
> >   .../selftests/kvm/include/perf_test_util.h    | 192 +++++++++
> >   .../testing/selftests/kvm/include/test_util.h |   2 +
> >   tools/testing/selftests/kvm/lib/test_util.c   |  22 +-
> >   7 files changed, 635 insertions(+), 195 deletions(-)
> >   create mode 100644 tools/testing/selftests/kvm/dirty_log_perf_test.c
> >   create mode 100644 tools/testing/selftests/kvm/include/perf_test_util.h
> > 
> 
> Queued, thanks.

Why would you do that? Peter reviewed this, making several comments,
such as not to put non-inline functions in header files. Ben took the
time to respin the series, posting a v2. It makes no sense to pick up
v1 after they put in that additional effort.

drew


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2020-11-09  9:21 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-27 23:37 [PATCH 0/5] Add a dirty logging performance test Ben Gardon
2020-10-27 23:37 ` [PATCH 1/5] KVM: selftests: Factor code out of demand_paging_test Ben Gardon
2020-11-02 21:23   ` Peter Xu
2020-11-02 22:57     ` Ben Gardon
2020-10-27 23:37 ` [PATCH 2/5] KVM: selftests: Remove address rounding in guest code Ben Gardon
2020-11-02 21:25   ` Peter Xu
2020-10-27 23:37 ` [PATCH 3/5] KVM: selftests: Simplify demand_paging_test with timespec_diff_now Ben Gardon
2020-11-02 21:27   ` Peter Xu
2020-11-02 22:59     ` Ben Gardon
2020-10-27 23:37 ` [PATCH 4/5] KVM: selftests: Add wrfract to common guest code Ben Gardon
2020-10-27 23:37 ` [PATCH 5/5] KVM: selftests: Introduce the dirty log perf test Ben Gardon
2020-11-02 22:21   ` Peter Xu
2020-11-02 23:56     ` Ben Gardon
2020-11-03  1:12       ` Peter Xu
2020-11-03 22:17         ` Ben Gardon
2020-11-03 22:27           ` Peter Xu
2020-11-06 12:48 ` [PATCH 0/5] Add a dirty logging performance test Paolo Bonzini
2020-11-09  9:21   ` Andrew Jones
