* [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
@ 2020-11-16 12:19 Andrew Jones
  2020-11-16 12:19 ` [PATCH v3 1/4] KVM: selftests: Factor out guest mode code Andrew Jones
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Andrew Jones @ 2020-11-16 12:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

This series attempts to clean up demand_paging_test, dirty_log_perf_test,
and dirty_log_test by factoring out common code, creating some new API
along the way. It also splits include/perf_test_util.h into a more
conventional header and source pair.
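
As a rough sketch of the resulting pattern (the details are in the
patches below), each test now packs its knobs into a test_params
struct and lets the new guest_modes API drive mode selection:

    struct test_params p = { /* test-specific defaults */ };

    guest_modes_append_default();
    /* in the getopt loop, -m is handled by guest_modes_cmdline(optarg) */
    for_each_guest_mode(run_test, &p);

where run_test() now has the signature
void run_test(enum vm_guest_mode mode, void *arg).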

I've tested on x86 and AArch64 (one config each), but not s390x.

v3:
 - Rebased remaining four patches from v2 onto kvm/queue
 - Picked up r-b's from Peter and Ben

v2: https://www.spinics.net/lists/kvm/msg228711.html

Thanks,
drew


Andrew Jones (4):
  KVM: selftests: Factor out guest mode code
  KVM: selftests: dirty_log_test: Remove create_vm
  KVM: selftests: Use vm_create_with_vcpus in create_vm
  KVM: selftests: Implement perf_test_util more conventionally

 tools/testing/selftests/kvm/Makefile          |   2 +-
 .../selftests/kvm/demand_paging_test.c        | 118 ++++--------
 .../selftests/kvm/dirty_log_perf_test.c       | 145 +++++---------
 tools/testing/selftests/kvm/dirty_log_test.c  | 179 +++++-------------
 .../selftests/kvm/include/guest_modes.h       |  21 ++
 .../testing/selftests/kvm/include/kvm_util.h  |   9 +
 .../selftests/kvm/include/perf_test_util.h    | 167 ++--------------
 tools/testing/selftests/kvm/lib/guest_modes.c |  70 +++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    |   9 +-
 .../selftests/kvm/lib/perf_test_util.c        | 134 +++++++++++++
 10 files changed, 378 insertions(+), 476 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/guest_modes.h
 create mode 100644 tools/testing/selftests/kvm/lib/guest_modes.c
 create mode 100644 tools/testing/selftests/kvm/lib/perf_test_util.c

-- 
2.26.2



* [PATCH v3 1/4] KVM: selftests: Factor out guest mode code
  2020-11-16 12:19 [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Andrew Jones
@ 2020-11-16 12:19 ` Andrew Jones
  2020-11-16 12:19 ` [PATCH v3 2/4] KVM: selftests: dirty_log_test: Remove create_vm Andrew Jones
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: Andrew Jones @ 2020-11-16 12:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

demand_paging_test, dirty_log_test, and dirty_log_perf_test have
redundant guest mode code. Factor it out into the new guest_modes
library.

Also, while adding the new include, remove the includes we no longer
need.
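
For reference, the new guest_modes.h exposes:

    void guest_modes_append_default(void);
    void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *),
                             void *arg);
    void guest_modes_help(void);
    void guest_modes_cmdline(const char *arg);

for_each_guest_mode() runs the given function for each enabled mode,
asserting that the mode is supported on the host.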

Reviewed-by: Ben Gardon <bgardon@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/Makefile          |   2 +-
 .../selftests/kvm/demand_paging_test.c        | 107 ++++-----------
 .../selftests/kvm/dirty_log_perf_test.c       | 121 +++++------------
 tools/testing/selftests/kvm/dirty_log_test.c  | 125 ++++++------------
 .../selftests/kvm/include/guest_modes.h       |  21 +++
 tools/testing/selftests/kvm/lib/guest_modes.c |  70 ++++++++++
 6 files changed, 189 insertions(+), 257 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/guest_modes.h
 create mode 100644 tools/testing/selftests/kvm/lib/guest_modes.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 4febf4d5ead9..ba24691719ef 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -33,7 +33,7 @@ ifeq ($(ARCH),s390)
 	UNAME_M := s390x
 endif
 
-LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c
+LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c
 LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
 LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c
 LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 3d96a7bfaff3..946161a9ce2d 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -7,23 +7,20 @@
  * Copyright (C) 2019, Google, Inc.
  */
 
-#define _GNU_SOURCE /* for program_invocation_name */
+#define _GNU_SOURCE /* for program_invocation_name and pipe2 */
 
 #include <stdio.h>
 #include <stdlib.h>
-#include <sys/syscall.h>
-#include <unistd.h>
-#include <asm/unistd.h>
 #include <time.h>
 #include <poll.h>
 #include <pthread.h>
-#include <linux/bitmap.h>
-#include <linux/bitops.h>
 #include <linux/userfaultfd.h>
+#include <sys/syscall.h>
 
-#include "perf_test_util.h"
-#include "processor.h"
+#include "kvm_util.h"
 #include "test_util.h"
+#include "perf_test_util.h"
+#include "guest_modes.h"
 
 #ifdef __NR_userfaultfd
 
@@ -248,9 +245,14 @@ static int setup_demand_paging(struct kvm_vm *vm,
 	return 0;
 }
 
-static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay)
+struct test_params {
+	bool use_uffd;
+	useconds_t uffd_delay;
+};
+
+static void run_test(enum vm_guest_mode mode, void *arg)
 {
+	struct test_params *p = arg;
 	pthread_t *vcpu_threads;
 	pthread_t *uffd_handler_threads = NULL;
 	struct uffd_handler_args *uffd_args = NULL;
@@ -275,7 +277,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	add_vcpus(vm, nr_vcpus, guest_percpu_mem_size);
 
-	if (use_uffd) {
+	if (p->use_uffd) {
 		uffd_handler_threads =
 			malloc(nr_vcpus * sizeof(*uffd_handler_threads));
 		TEST_ASSERT(uffd_handler_threads, "Memory allocation failed");
@@ -308,7 +310,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 			r = setup_demand_paging(vm,
 						&uffd_handler_threads[vcpu_id],
 						pipefds[vcpu_id * 2],
-						uffd_delay, &uffd_args[vcpu_id],
+						p->uffd_delay, &uffd_args[vcpu_id],
 						vcpu_hva, guest_percpu_mem_size);
 			if (r < 0)
 				exit(-r);
@@ -339,7 +341,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	pr_info("All vCPU threads joined\n");
 
-	if (use_uffd) {
+	if (p->use_uffd) {
 		char c;
 
 		/* Tell the user fault fd handler threads to quit */
@@ -362,38 +364,19 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	free(guest_data_prototype);
 	free(vcpu_threads);
-	if (use_uffd) {
+	if (p->use_uffd) {
 		free(uffd_handler_threads);
 		free(uffd_args);
 		free(pipefds);
 	}
 }
 
-struct guest_mode {
-	bool supported;
-	bool enabled;
-};
-static struct guest_mode guest_modes[NUM_VM_MODES];
-
-#define guest_mode_init(mode, supported, enabled) ({ \
-	guest_modes[mode] = (struct guest_mode){ supported, enabled }; \
-})
-
 static void help(char *name)
 {
-	int i;
-
 	puts("");
 	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
 	       "          [-b memory] [-v vcpus]\n", name);
-	printf(" -m: specify the guest mode ID to test\n"
-	       "     (default: test all supported modes)\n"
-	       "     This option may be used multiple times.\n"
-	       "     Guest mode IDs:\n");
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
-		       guest_modes[i].supported ? " (supported)" : "");
-	}
+	guest_modes_help();
 	printf(" -u: use User Fault FD to handle vCPU page\n"
 	       "     faults.\n");
 	printf(" -d: add a delay in usec to the User Fault\n"
@@ -410,53 +393,22 @@ static void help(char *name)
 int main(int argc, char *argv[])
 {
 	int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
-	bool mode_selected = false;
-	unsigned int mode;
-	int opt, i;
-	bool use_uffd = false;
-	useconds_t uffd_delay = 0;
-
-#ifdef __x86_64__
-	guest_mode_init(VM_MODE_PXXV48_4K, true, true);
-#endif
-#ifdef __aarch64__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-	guest_mode_init(VM_MODE_P40V48_64K, true, true);
-	{
-		unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
-
-		if (limit >= 52)
-			guest_mode_init(VM_MODE_P52V48_64K, true, true);
-		if (limit >= 48) {
-			guest_mode_init(VM_MODE_P48V48_4K, true, true);
-			guest_mode_init(VM_MODE_P48V48_64K, true, true);
-		}
-	}
-#endif
-#ifdef __s390x__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-#endif
+	struct test_params p = {};
+	int opt;
+
+	guest_modes_append_default();
 
 	while ((opt = getopt(argc, argv, "hm:ud:b:v:")) != -1) {
 		switch (opt) {
 		case 'm':
-			if (!mode_selected) {
-				for (i = 0; i < NUM_VM_MODES; ++i)
-					guest_modes[i].enabled = false;
-				mode_selected = true;
-			}
-			mode = strtoul(optarg, NULL, 10);
-			TEST_ASSERT(mode < NUM_VM_MODES,
-				    "Guest mode ID %d too big", mode);
-			guest_modes[mode].enabled = true;
+			guest_modes_cmdline(optarg);
 			break;
 		case 'u':
-			use_uffd = true;
+			p.use_uffd = true;
 			break;
 		case 'd':
-			uffd_delay = strtoul(optarg, NULL, 0);
-			TEST_ASSERT(uffd_delay >= 0,
-				    "A negative UFFD delay is not supported.");
+			p.uffd_delay = strtoul(optarg, NULL, 0);
+			TEST_ASSERT(p.uffd_delay >= 0, "A negative UFFD delay is not supported.");
 			break;
 		case 'b':
 			guest_percpu_mem_size = parse_size(optarg);
@@ -473,14 +425,7 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		if (!guest_modes[i].enabled)
-			continue;
-		TEST_ASSERT(guest_modes[i].supported,
-			    "Guest mode ID %d (%s) not supported.",
-			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd, uffd_delay);
-	}
+	for_each_guest_mode(run_test, &p);
 
 	return 0;
 }
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 9c6a7be31e03..506741eb5d7f 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -12,16 +12,14 @@
 
 #include <stdio.h>
 #include <stdlib.h>
-#include <unistd.h>
 #include <time.h>
 #include <pthread.h>
 #include <linux/bitmap.h>
-#include <linux/bitops.h>
 
 #include "kvm_util.h"
-#include "perf_test_util.h"
-#include "processor.h"
 #include "test_util.h"
+#include "perf_test_util.h"
+#include "guest_modes.h"
 
 /* How many host loops to run by default (one KVM_GET_DIRTY_LOG for each loop)*/
 #define TEST_HOST_LOOP_N		2UL
@@ -89,9 +87,15 @@ static void *vcpu_worker(void *data)
 	return NULL;
 }
 
-static void run_test(enum vm_guest_mode mode, unsigned long iterations,
-		     uint64_t phys_offset, int wr_fract)
+struct test_params {
+	unsigned long iterations;
+	uint64_t phys_offset;
+	int wr_fract;
+};
+
+static void run_test(enum vm_guest_mode mode, void *arg)
 {
+	struct test_params *p = arg;
 	pthread_t *vcpu_threads;
 	struct kvm_vm *vm;
 	unsigned long *bmap;
@@ -108,7 +112,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 
 	vm = create_vm(mode, nr_vcpus, guest_percpu_mem_size);
 
-	perf_test_args.wr_fract = wr_fract;
+	perf_test_args.wr_fract = p->wr_fract;
 
 	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm_get_page_shift(vm);
 	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
@@ -156,7 +160,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	pr_info("Enabling dirty logging time: %ld.%.9lds\n\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 
-	while (iteration < iterations) {
+	while (iteration < p->iterations) {
 		/*
 		 * Incrementing the iteration number will start the vCPUs
 		 * dirtying memory again.
@@ -210,15 +214,15 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	pr_info("Disabling dirty logging time: %ld.%.9lds\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 
-	avg = timespec_div(get_dirty_log_total, iterations);
+	avg = timespec_div(get_dirty_log_total, p->iterations);
 	pr_info("Get dirty log over %lu iterations took %ld.%.9lds. (Avg %ld.%.9lds/iteration)\n",
-		iterations, get_dirty_log_total.tv_sec,
+		p->iterations, get_dirty_log_total.tv_sec,
 		get_dirty_log_total.tv_nsec, avg.tv_sec, avg.tv_nsec);
 
 	if (dirty_log_manual_caps) {
-		avg = timespec_div(clear_dirty_log_total, iterations);
+		avg = timespec_div(clear_dirty_log_total, p->iterations);
 		pr_info("Clear dirty log over %lu iterations took %ld.%.9lds. (Avg %ld.%.9lds/iteration)\n",
-			iterations, clear_dirty_log_total.tv_sec,
+			p->iterations, clear_dirty_log_total.tv_sec,
 			clear_dirty_log_total.tv_nsec, avg.tv_sec, avg.tv_nsec);
 	}
 
@@ -228,20 +232,8 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	kvm_vm_free(vm);
 }
 
-struct guest_mode {
-	bool supported;
-	bool enabled;
-};
-static struct guest_mode guest_modes[NUM_VM_MODES];
-
-#define guest_mode_init(mode, supported, enabled) ({ \
-	guest_modes[mode] = (struct guest_mode){ supported, enabled }; \
-})
-
 static void help(char *name)
 {
-	int i;
-
 	puts("");
 	printf("usage: %s [-h] [-i iterations] [-p offset] "
 	       "[-m mode] [-b vcpu bytes] [-v vcpus]\n", name);
@@ -250,14 +242,7 @@ static void help(char *name)
 	       TEST_HOST_LOOP_N);
 	printf(" -p: specify guest physical test memory offset\n"
 	       "     Warning: a low offset can conflict with the loaded test code.\n");
-	printf(" -m: specify the guest mode ID to test "
-	       "(default: test all supported modes)\n"
-	       "     This option may be used multiple times.\n"
-	       "     Guest mode IDs:\n");
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
-		       guest_modes[i].supported ? " (supported)" : "");
-	}
+	guest_modes_help();
 	printf(" -b: specify the size of the memory region which should be\n"
 	       "     dirtied by each vCPU. e.g. 10M or 3G.\n"
 	       "     (default: 1G)\n");
@@ -272,74 +257,43 @@ static void help(char *name)
 
 int main(int argc, char *argv[])
 {
-	unsigned long iterations = TEST_HOST_LOOP_N;
-	bool mode_selected = false;
-	uint64_t phys_offset = 0;
-	unsigned int mode;
-	int opt, i;
-	int wr_fract = 1;
+	int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
+	struct test_params p = {
+		.iterations = TEST_HOST_LOOP_N,
+		.wr_fract = 1,
+	};
+	int opt;
 
 	dirty_log_manual_caps =
 		kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
 	dirty_log_manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
 				  KVM_DIRTY_LOG_INITIALLY_SET);
 
-#ifdef __x86_64__
-	guest_mode_init(VM_MODE_PXXV48_4K, true, true);
-#endif
-#ifdef __aarch64__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-	guest_mode_init(VM_MODE_P40V48_64K, true, true);
-
-	{
-		unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
-
-		if (limit >= 52)
-			guest_mode_init(VM_MODE_P52V48_64K, true, true);
-		if (limit >= 48) {
-			guest_mode_init(VM_MODE_P48V48_4K, true, true);
-			guest_mode_init(VM_MODE_P48V48_64K, true, true);
-		}
-	}
-#endif
-#ifdef __s390x__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-#endif
+	guest_modes_append_default();
 
 	while ((opt = getopt(argc, argv, "hi:p:m:b:f:v:")) != -1) {
 		switch (opt) {
 		case 'i':
-			iterations = strtol(optarg, NULL, 10);
+			p.iterations = strtol(optarg, NULL, 10);
 			break;
 		case 'p':
-			phys_offset = strtoull(optarg, NULL, 0);
+			p.phys_offset = strtoull(optarg, NULL, 0);
 			break;
 		case 'm':
-			if (!mode_selected) {
-				for (i = 0; i < NUM_VM_MODES; ++i)
-					guest_modes[i].enabled = false;
-				mode_selected = true;
-			}
-			mode = strtoul(optarg, NULL, 10);
-			TEST_ASSERT(mode < NUM_VM_MODES,
-				    "Guest mode ID %d too big", mode);
-			guest_modes[mode].enabled = true;
+			guest_modes_cmdline(optarg);
 			break;
 		case 'b':
 			guest_percpu_mem_size = parse_size(optarg);
 			break;
 		case 'f':
-			wr_fract = atoi(optarg);
-			TEST_ASSERT(wr_fract >= 1,
+			p.wr_fract = atoi(optarg);
+			TEST_ASSERT(p.wr_fract >= 1,
 				    "Write fraction cannot be less than one");
 			break;
 		case 'v':
 			nr_vcpus = atoi(optarg);
-			TEST_ASSERT(nr_vcpus > 0,
-				    "Must have a positive number of vCPUs");
-			TEST_ASSERT(nr_vcpus <= MAX_VCPUS,
-				    "This test does not currently support\n"
-				    "more than %d vCPUs.", MAX_VCPUS);
+			TEST_ASSERT(nr_vcpus > 0 && nr_vcpus <= max_vcpus,
+				    "Invalid number of vcpus, must be between 1 and %d", max_vcpus);
 			break;
 		case 'h':
 		default:
@@ -348,18 +302,11 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	TEST_ASSERT(iterations >= 2, "The test should have at least two iterations");
+	TEST_ASSERT(p.iterations >= 2, "The test should have at least two iterations");
 
-	pr_info("Test iterations: %"PRIu64"\n",	iterations);
+	pr_info("Test iterations: %"PRIu64"\n",	p.iterations);
 
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		if (!guest_modes[i].enabled)
-			continue;
-		TEST_ASSERT(guest_modes[i].supported,
-			    "Guest mode ID %d (%s) not supported.",
-			    i, vm_guest_mode_string(i));
-		run_test(i, iterations, phys_offset, wr_fract);
-	}
+	for_each_guest_mode(run_test, &p);
 
 	return 0;
 }
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 471baecb7772..bb2752d78fe3 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -9,8 +9,6 @@
 
 #include <stdio.h>
 #include <stdlib.h>
-#include <unistd.h>
-#include <time.h>
 #include <pthread.h>
 #include <semaphore.h>
 #include <sys/types.h>
@@ -20,8 +18,9 @@
 #include <linux/bitops.h>
 #include <asm/barrier.h>
 
-#include "test_util.h"
 #include "kvm_util.h"
+#include "test_util.h"
+#include "guest_modes.h"
 #include "processor.h"
 
 #define VCPU_ID				1
@@ -673,9 +672,15 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 #define DIRTY_MEM_BITS 30 /* 1G */
 #define PAGE_SHIFT_4K  12
 
-static void run_test(enum vm_guest_mode mode, unsigned long iterations,
-		     unsigned long interval, uint64_t phys_offset)
+struct test_params {
+	unsigned long iterations;
+	unsigned long interval;
+	uint64_t phys_offset;
+};
+
+static void run_test(enum vm_guest_mode mode, void *arg)
 {
+	struct test_params *p = arg;
 	struct kvm_vm *vm;
 	unsigned long *bmap;
 
@@ -709,12 +714,12 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	host_page_size = getpagesize();
 	host_num_pages = vm_num_host_pages(mode, guest_num_pages);
 
-	if (!phys_offset) {
+	if (!p->phys_offset) {
 		guest_test_phys_mem = (vm_get_max_gfn(vm) -
 				       guest_num_pages) * guest_page_size;
 		guest_test_phys_mem &= ~(host_page_size - 1);
 	} else {
-		guest_test_phys_mem = phys_offset;
+		guest_test_phys_mem = p->phys_offset;
 	}
 
 #ifdef __s390x__
@@ -758,9 +763,9 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 
 	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
 
-	while (iteration < iterations) {
+	while (iteration < p->iterations) {
 		/* Give the vcpu thread some time to dirty some pages */
-		usleep(interval * 1000);
+		usleep(p->interval * 1000);
 		log_mode_collect_dirty_pages(vm, TEST_MEM_SLOT_INDEX,
 					     bmap, host_num_pages);
 		vm_dirty_log_verify(mode, bmap);
@@ -783,20 +788,8 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	kvm_vm_free(vm);
 }
 
-struct guest_mode {
-	bool supported;
-	bool enabled;
-};
-static struct guest_mode guest_modes[NUM_VM_MODES];
-
-#define guest_mode_init(mode, supported, enabled) ({ \
-	guest_modes[mode] = (struct guest_mode){ supported, enabled }; \
-})
-
 static void help(char *name)
 {
-	int i;
-
 	puts("");
 	printf("usage: %s [-h] [-i iterations] [-I interval] "
 	       "[-p offset] [-m mode]\n", name);
@@ -813,51 +806,23 @@ static void help(char *name)
 	printf(" -M: specify the host logging mode "
 	       "(default: run all log modes).  Supported modes: \n\t");
 	log_modes_dump();
-	printf(" -m: specify the guest mode ID to test "
-	       "(default: test all supported modes)\n"
-	       "     This option may be used multiple times.\n"
-	       "     Guest mode IDs:\n");
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
-		       guest_modes[i].supported ? " (supported)" : "");
-	}
+	guest_modes_help();
 	puts("");
 	exit(0);
 }
 
 int main(int argc, char *argv[])
 {
-	unsigned long iterations = TEST_HOST_LOOP_N;
-	unsigned long interval = TEST_HOST_LOOP_INTERVAL;
-	bool mode_selected = false;
-	uint64_t phys_offset = 0;
-	unsigned int mode;
-	int opt, i, j;
+	struct test_params p = {
+		.iterations = TEST_HOST_LOOP_N,
+		.interval = TEST_HOST_LOOP_INTERVAL,
+	};
+	int opt, i;
 
 	sem_init(&dirty_ring_vcpu_stop, 0, 0);
 	sem_init(&dirty_ring_vcpu_cont, 0, 0);
 
-#ifdef __x86_64__
-	guest_mode_init(VM_MODE_PXXV48_4K, true, true);
-#endif
-#ifdef __aarch64__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-	guest_mode_init(VM_MODE_P40V48_64K, true, true);
-
-	{
-		unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
-
-		if (limit >= 52)
-			guest_mode_init(VM_MODE_P52V48_64K, true, true);
-		if (limit >= 48) {
-			guest_mode_init(VM_MODE_P48V48_4K, true, true);
-			guest_mode_init(VM_MODE_P48V48_64K, true, true);
-		}
-	}
-#endif
-#ifdef __s390x__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-#endif
+	guest_modes_append_default();
 
 	while ((opt = getopt(argc, argv, "c:hi:I:p:m:M:")) != -1) {
 		switch (opt) {
@@ -865,24 +830,16 @@ int main(int argc, char *argv[])
 			test_dirty_ring_count = strtol(optarg, NULL, 10);
 			break;
 		case 'i':
-			iterations = strtol(optarg, NULL, 10);
+			p.iterations = strtol(optarg, NULL, 10);
 			break;
 		case 'I':
-			interval = strtol(optarg, NULL, 10);
+			p.interval = strtol(optarg, NULL, 10);
 			break;
 		case 'p':
-			phys_offset = strtoull(optarg, NULL, 0);
+			p.phys_offset = strtoull(optarg, NULL, 0);
 			break;
 		case 'm':
-			if (!mode_selected) {
-				for (i = 0; i < NUM_VM_MODES; ++i)
-					guest_modes[i].enabled = false;
-				mode_selected = true;
-			}
-			mode = strtoul(optarg, NULL, 10);
-			TEST_ASSERT(mode < NUM_VM_MODES,
-				    "Guest mode ID %d too big", mode);
-			guest_modes[mode].enabled = true;
+			guest_modes_cmdline(optarg);
 			break;
 		case 'M':
 			if (!strcmp(optarg, "all")) {
@@ -911,32 +868,24 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	TEST_ASSERT(iterations > 2, "Iterations must be greater than two");
-	TEST_ASSERT(interval > 0, "Interval must be greater than zero");
+	TEST_ASSERT(p.iterations > 2, "Iterations must be greater than two");
+	TEST_ASSERT(p.interval > 0, "Interval must be greater than zero");
 
 	pr_info("Test iterations: %"PRIu64", interval: %"PRIu64" (ms)\n",
-		iterations, interval);
+		p.iterations, p.interval);
 
 	srandom(time(0));
 
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		if (!guest_modes[i].enabled)
-			continue;
-		TEST_ASSERT(guest_modes[i].supported,
-			    "Guest mode ID %d (%s) not supported.",
-			    i, vm_guest_mode_string(i));
-		if (host_log_mode_option == LOG_MODE_ALL) {
-			/* Run each log mode */
-			for (j = 0; j < LOG_MODE_NUM; j++) {
-				pr_info("Testing Log Mode '%s'\n",
-					log_modes[j].name);
-				host_log_mode = j;
-				run_test(i, iterations, interval, phys_offset);
-			}
-		} else {
-			host_log_mode = host_log_mode_option;
-			run_test(i, iterations, interval, phys_offset);
+	if (host_log_mode_option == LOG_MODE_ALL) {
+		/* Run each log mode */
+		for (i = 0; i < LOG_MODE_NUM; i++) {
+			pr_info("Testing Log Mode '%s'\n", log_modes[i].name);
+			host_log_mode = i;
+			for_each_guest_mode(run_test, &p);
 		}
+	} else {
+		host_log_mode = host_log_mode_option;
+		for_each_guest_mode(run_test, &p);
 	}
 
 	return 0;
diff --git a/tools/testing/selftests/kvm/include/guest_modes.h b/tools/testing/selftests/kvm/include/guest_modes.h
new file mode 100644
index 000000000000..b691df33e64e
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/guest_modes.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020, Red Hat, Inc.
+ */
+#include "kvm_util.h"
+
+struct guest_mode {
+	bool supported;
+	bool enabled;
+};
+
+extern struct guest_mode guest_modes[NUM_VM_MODES];
+
+#define guest_mode_append(mode, supported, enabled) ({ \
+	guest_modes[mode] = (struct guest_mode){ supported, enabled }; \
+})
+
+void guest_modes_append_default(void);
+void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *), void *arg);
+void guest_modes_help(void);
+void guest_modes_cmdline(const char *arg);
diff --git a/tools/testing/selftests/kvm/lib/guest_modes.c b/tools/testing/selftests/kvm/lib/guest_modes.c
new file mode 100644
index 000000000000..25bff307c71f
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/guest_modes.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020, Red Hat, Inc.
+ */
+#include "guest_modes.h"
+
+struct guest_mode guest_modes[NUM_VM_MODES];
+
+void guest_modes_append_default(void)
+{
+	guest_mode_append(VM_MODE_DEFAULT, true, true);
+
+#ifdef __aarch64__
+	guest_mode_append(VM_MODE_P40V48_64K, true, true);
+	{
+		unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
+		if (limit >= 52)
+			guest_mode_append(VM_MODE_P52V48_64K, true, true);
+		if (limit >= 48) {
+			guest_mode_append(VM_MODE_P48V48_4K, true, true);
+			guest_mode_append(VM_MODE_P48V48_64K, true, true);
+		}
+	}
+#endif
+}
+
+void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *), void *arg)
+{
+	int i;
+
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		if (!guest_modes[i].enabled)
+			continue;
+		TEST_ASSERT(guest_modes[i].supported,
+			    "Guest mode ID %d (%s) not supported.",
+			    i, vm_guest_mode_string(i));
+		func(i, arg);
+	}
+}
+
+void guest_modes_help(void)
+{
+	int i;
+
+	printf(" -m: specify the guest mode ID to test\n"
+	       "     (default: test all supported modes)\n"
+	       "     This option may be used multiple times.\n"
+	       "     Guest mode IDs:\n");
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
+		       guest_modes[i].supported ? " (supported)" : "");
+	}
+}
+
+void guest_modes_cmdline(const char *arg)
+{
+	static bool mode_selected;
+	unsigned int mode;
+	int i;
+
+	if (!mode_selected) {
+		for (i = 0; i < NUM_VM_MODES; ++i)
+			guest_modes[i].enabled = false;
+		mode_selected = true;
+	}
+
+	mode = strtoul(optarg, NULL, 10);
+	TEST_ASSERT(mode < NUM_VM_MODES, "Guest mode ID %d too big", mode);
+	guest_modes[mode].enabled = true;
+}
-- 
2.26.2



* [PATCH v3 2/4] KVM: selftests: dirty_log_test: Remove create_vm
  2020-11-16 12:19 [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Andrew Jones
  2020-11-16 12:19 ` [PATCH v3 1/4] KVM: selftests: Factor out guest mode code Andrew Jones
@ 2020-11-16 12:19 ` Andrew Jones
  2020-11-16 12:19 ` [PATCH v3 3/4] KVM: selftests: Use vm_create_with_vcpus in create_vm Andrew Jones
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: Andrew Jones @ 2020-11-16 12:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

Use vm_create_with_vcpus instead of create_vm and do
some minor cleanups around it.
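
With vm_create_with_vcpus() the ELF load, irqchip creation, and
default vCPU setup are all handled by the library, so the open-coded
create_vm() reduces to a single call in run_test():

    vm = vm_create_with_vcpus(mode, 1,
            vm_calc_num_guest_pages(mode, DIRTY_MEM_SIZE * 2),
            0, guest_code, (uint32_t []){ VCPU_ID });

    log_mode_create_vm_done(vm);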

Reviewed-by: Ben Gardon <bgardon@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 56 ++++++--------------
 1 file changed, 16 insertions(+), 40 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index bb2752d78fe3..5af730deb86b 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -5,8 +5,6 @@
  * Copyright (C) 2018, Red Hat, Inc.
  */
 
-#define _GNU_SOURCE /* for program_invocation_name */
-
 #include <stdio.h>
 #include <stdlib.h>
 #include <pthread.h>
@@ -25,6 +23,9 @@
 
 #define VCPU_ID				1
 
+#define DIRTY_MEM_BITS			30 /* 1G */
+#define DIRTY_MEM_SIZE			(1UL << DIRTY_MEM_BITS)
+
 /* The memory slot index to track dirty pages */
 #define TEST_MEM_SLOT_INDEX		1
 
@@ -651,27 +652,6 @@ static void vm_dirty_log_verify(enum vm_guest_mode mode, unsigned long *bmap)
 	}
 }
 
-static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
-				uint64_t extra_mem_pages, void *guest_code)
-{
-	struct kvm_vm *vm;
-	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
-
-	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
-
-	vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
-	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
-#ifdef __x86_64__
-	vm_create_irqchip(vm);
-#endif
-	log_mode_create_vm_done(vm);
-	vm_vcpu_add_default(vm, vcpuid, guest_code);
-	return vm;
-}
-
-#define DIRTY_MEM_BITS 30 /* 1G */
-#define PAGE_SHIFT_4K  12
-
 struct test_params {
 	unsigned long iterations;
 	unsigned long interval;
@@ -690,43 +670,39 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		return;
 	}
 
+	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
+
 	/*
 	 * We reserve page table for 2 times of extra dirty mem which
-	 * will definitely cover the original (1G+) test range.  Here
-	 * we do the calculation with 4K page size which is the
-	 * smallest so the page number will be enough for all archs
-	 * (e.g., 64K page size guest will need even less memory for
-	 * page tables).
+	 * will definitely cover the original (1G+) test range.
 	 */
-	vm = create_vm(mode, VCPU_ID,
-		       2ul << (DIRTY_MEM_BITS - PAGE_SHIFT_4K),
-		       guest_code);
+	vm = vm_create_with_vcpus(mode, 1,
+			vm_calc_num_guest_pages(mode, DIRTY_MEM_SIZE * 2),
+			0, guest_code, (uint32_t []){ VCPU_ID });
+
+	log_mode_create_vm_done(vm);
 
 	guest_page_size = vm_get_page_size(vm);
+	host_page_size = getpagesize();
+
 	/*
 	 * A little more than 1G of guest page sized pages.  Cover the
 	 * case where the size is not aligned to 64 pages.
 	 */
-	guest_num_pages = (1ul << (DIRTY_MEM_BITS -
-				   vm_get_page_shift(vm))) + 3;
-	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
-
-	host_page_size = getpagesize();
+	guest_num_pages = vm_adjust_num_guest_pages(mode,
+				(1ul << (DIRTY_MEM_BITS - vm_get_page_shift(vm))) + 3);
 	host_num_pages = vm_num_host_pages(mode, guest_num_pages);
 
 	if (!p->phys_offset) {
-		guest_test_phys_mem = (vm_get_max_gfn(vm) -
-				       guest_num_pages) * guest_page_size;
+		guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * guest_page_size;
 		guest_test_phys_mem &= ~(host_page_size - 1);
 	} else {
 		guest_test_phys_mem = p->phys_offset;
 	}
-
 #ifdef __s390x__
 	/* Align to 1M (segment size) */
 	guest_test_phys_mem &= ~((1 << 20) - 1);
 #endif
-
 	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
 
 	bmap = bitmap_alloc(host_num_pages);
-- 
2.26.2



* [PATCH v3 3/4] KVM: selftests: Use vm_create_with_vcpus in create_vm
  2020-11-16 12:19 [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Andrew Jones
  2020-11-16 12:19 ` [PATCH v3 1/4] KVM: selftests: Factor out guest mode code Andrew Jones
  2020-11-16 12:19 ` [PATCH v3 2/4] KVM: selftests: dirty_log_test: Remove create_vm Andrew Jones
@ 2020-11-16 12:19 ` Andrew Jones
  2020-11-16 12:19 ` [PATCH v3 4/4] KVM: selftests: Implement perf_test_util more conventionally Andrew Jones
  2020-11-16 18:16 ` [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Paolo Bonzini
  4 siblings, 0 replies; 17+ messages in thread
From: Andrew Jones @ 2020-11-16 12:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../selftests/kvm/demand_paging_test.c        |  2 +-
 .../selftests/kvm/dirty_log_perf_test.c       |  2 -
 .../testing/selftests/kvm/include/kvm_util.h  |  8 ++++
 .../selftests/kvm/include/perf_test_util.h    | 47 +++++--------------
 tools/testing/selftests/kvm/lib/kvm_util.c    |  9 +---
 5 files changed, 21 insertions(+), 47 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 946161a9ce2d..b0c41de32e9b 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -7,7 +7,7 @@
  * Copyright (C) 2019, Google, Inc.
  */
 
-#define _GNU_SOURCE /* for program_invocation_name and pipe2 */
+#define _GNU_SOURCE /* for pipe2 */
 
 #include <stdio.h>
 #include <stdlib.h>
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 506741eb5d7f..36bea75a8d6f 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -8,8 +8,6 @@
  * Copyright (C) 2020, Google, Inc.
  */
 
-#define _GNU_SOURCE /* for program_invocation_name */
-
 #include <stdio.h>
 #include <stdlib.h>
 #include <time.h>
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index dfa9d369e8fc..149766ecd68b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -70,6 +70,14 @@ enum vm_guest_mode {
 #define vm_guest_mode_string(m) vm_guest_mode_string[m]
 extern const char * const vm_guest_mode_string[];
 
+struct vm_guest_mode_params {
+	unsigned int pa_bits;
+	unsigned int va_bits;
+	unsigned int page_size;
+	unsigned int page_shift;
+};
+extern const struct vm_guest_mode_params vm_guest_mode_params[];
+
 enum vm_mem_backing_src_type {
 	VM_MEM_SRC_ANONYMOUS,
 	VM_MEM_SRC_ANONYMOUS_THP,
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 239421e4f6b8..cd4c258f458d 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -13,9 +13,6 @@
 
 #define MAX_VCPUS 512
 
-#define PAGE_SHIFT_4K  12
-#define PTES_PER_4K_PT 512
-
 #define TEST_MEM_SLOT_INDEX		1
 
 /* Default guest test virtual memory offset */
@@ -94,41 +91,26 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
 				uint64_t vcpu_memory_bytes)
 {
 	struct kvm_vm *vm;
-	uint64_t pages = DEFAULT_GUEST_PHY_PAGES;
 	uint64_t guest_num_pages;
 
-	/* Account for a few pages per-vCPU for stacks */
-	pages += DEFAULT_STACK_PGS * vcpus;
-
-	/*
-	 * Reserve twice the ammount of memory needed to map the test region and
-	 * the page table / stacks region, at 4k, for page tables. Do the
-	 * calculation with 4K page size: the smallest of all archs. (e.g., 64K
-	 * page size guest will need even less memory for page tables).
-	 */
-	pages += (2 * pages) / PTES_PER_4K_PT;
-	pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
-		 PTES_PER_4K_PT;
-	pages = vm_adjust_num_guest_pages(mode, pages);
-
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
-	vm = vm_create(mode, pages, O_RDWR);
-	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
-#ifdef __x86_64__
-	vm_create_irqchip(vm);
-#endif
-
-	perf_test_args.vm = vm;
-	perf_test_args.guest_page_size = vm_get_page_size(vm);
 	perf_test_args.host_page_size = getpagesize();
+	perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
 
+	guest_num_pages = vm_adjust_num_guest_pages(mode,
+				(vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size);
+
+	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.host_page_size == 0,
+		    "Guest memory size is not host page size aligned.");
 	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
 
-	guest_num_pages = (vcpus * vcpu_memory_bytes) /
-			  perf_test_args.guest_page_size;
-	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
+	vm = vm_create_with_vcpus(mode, vcpus,
+				  (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size,
+				  0, guest_code, NULL);
+
+	perf_test_args.vm = vm;
 
 	/*
 	 * If there should be more memory in the guest test region than there
@@ -140,18 +122,13 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
 		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
 		    vcpu_memory_bytes);
 
-	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.host_page_size == 0,
-		    "Guest memory size is not host page size aligned.");
-
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      perf_test_args.guest_page_size;
 	guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
-
 #ifdef __s390x__
 	/* Align to 1M (segment size) */
 	guest_test_phys_mem &= ~((1 << 20) - 1);
 #endif
-
 	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
 
 	/* Add an extra memory slot for testing */
@@ -177,8 +154,6 @@ static void add_vcpus(struct kvm_vm *vm, int vcpus, uint64_t vcpu_memory_bytes)
 	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
 		vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
 
-		vm_vcpu_add_default(vm, vcpu_id, guest_code);
-
 		vcpu_args->vcpu_id = vcpu_id;
 		vcpu_args->gva = guest_test_virt_mem +
 				 (vcpu_id * vcpu_memory_bytes);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index b2c426adb87f..14fcadb778b5 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -153,14 +153,7 @@ const char * const vm_guest_mode_string[] = {
 _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
 	       "Missing new mode strings?");
 
-struct vm_guest_mode_params {
-	unsigned int pa_bits;
-	unsigned int va_bits;
-	unsigned int page_size;
-	unsigned int page_shift;
-};
-
-static const struct vm_guest_mode_params vm_guest_mode_params[] = {
+const struct vm_guest_mode_params vm_guest_mode_params[] = {
 	{ 52, 48,  0x1000, 12 },
 	{ 52, 48, 0x10000, 16 },
 	{ 48, 48,  0x1000, 12 },
-- 
2.26.2



* [PATCH v3 4/4] KVM: selftests: Implement perf_test_util more conventionally
  2020-11-16 12:19 [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Andrew Jones
                   ` (2 preceding siblings ...)
  2020-11-16 12:19 ` [PATCH v3 3/4] KVM: selftests: Use vm_create_with_vcpus in create_vm Andrew Jones
@ 2020-11-16 12:19 ` Andrew Jones
  2020-11-16 18:16 ` [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Paolo Bonzini
  4 siblings, 0 replies; 17+ messages in thread
From: Andrew Jones @ 2020-11-16 12:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

It's not conventional C to put non-inline functions in header
files. Create a source file for the functions instead. Also
reduce the number of globals and rename the functions to
something less generic.
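
The split follows the usual pattern, declarations in the header and
definitions in the new source file:

    /* include/perf_test_util.h */
    extern struct perf_test_args perf_test_args;
    extern uint64_t guest_test_phys_mem;

    struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
                                       uint64_t vcpu_memory_bytes);
    void perf_test_destroy_vm(struct kvm_vm *vm);
    void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
                               uint64_t vcpu_memory_bytes);

    /* lib/perf_test_util.c */
    struct perf_test_args perf_test_args;
    uint64_t guest_test_phys_mem;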

Reviewed-by: Ben Gardon <bgardon@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/Makefile          |   2 +-
 .../selftests/kvm/demand_paging_test.c        |  11 +-
 .../selftests/kvm/dirty_log_perf_test.c       |  22 +--
 .../testing/selftests/kvm/include/kvm_util.h  |   1 +
 .../selftests/kvm/include/perf_test_util.h    | 142 ++----------------
 .../selftests/kvm/lib/perf_test_util.c        | 134 +++++++++++++++++
 6 files changed, 166 insertions(+), 146 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/lib/perf_test_util.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ba24691719ef..74e99a2b1b49 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -33,7 +33,7 @@ ifeq ($(ARCH),s390)
 	UNAME_M := s390x
 endif
 
-LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c
+LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
 LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
 LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c
 LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index b0c41de32e9b..cdad1eca72f7 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -36,12 +36,14 @@
 #define PER_VCPU_DEBUG(...) _no_printf(__VA_ARGS__)
 #endif
 
+static int nr_vcpus = 1;
+static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
 static char *guest_data_prototype;
 
 static void *vcpu_worker(void *data)
 {
 	int ret;
-	struct vcpu_args *vcpu_args = (struct vcpu_args *)data;
+	struct perf_test_vcpu_args *vcpu_args = (struct perf_test_vcpu_args *)data;
 	int vcpu_id = vcpu_args->vcpu_id;
 	struct kvm_vm *vm = perf_test_args.vm;
 	struct kvm_run *run;
@@ -263,7 +265,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	int vcpu_id;
 	int r;
 
-	vm = create_vm(mode, nr_vcpus, guest_percpu_mem_size);
+	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size);
 
 	perf_test_args.wr_fract = 1;
 
@@ -275,7 +277,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
-	add_vcpus(vm, nr_vcpus, guest_percpu_mem_size);
+	perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size);
 
 	if (p->use_uffd) {
 		uffd_handler_threads =
@@ -359,8 +361,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		perf_test_args.vcpu_args[0].pages * nr_vcpus /
 		((double)ts_diff.tv_sec + (double)ts_diff.tv_nsec / 100000000.0));
 
-	ucall_uninit(vm);
-	kvm_vm_free(vm);
+	perf_test_destroy_vm(vm);
 
 	free(guest_data_prototype);
 	free(vcpu_threads);
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 36bea75a8d6f..2283a0ec74a9 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -22,11 +22,14 @@
 /* How many host loops to run by default (one KVM_GET_DIRTY_LOG for each loop)*/
 #define TEST_HOST_LOOP_N		2UL
 
+static int nr_vcpus = 1;
+static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
+
 /* Host variables */
 static u64 dirty_log_manual_caps;
 static bool host_quit;
 static uint64_t iteration;
-static uint64_t vcpu_last_completed_iteration[MAX_VCPUS];
+static uint64_t vcpu_last_completed_iteration[KVM_MAX_VCPUS];
 
 static void *vcpu_worker(void *data)
 {
@@ -38,7 +41,7 @@ static void *vcpu_worker(void *data)
 	struct timespec ts_diff;
 	struct timespec total = (struct timespec){0};
 	struct timespec avg;
-	struct vcpu_args *vcpu_args = (struct vcpu_args *)data;
+	struct perf_test_vcpu_args *vcpu_args = (struct perf_test_vcpu_args *)data;
 	int vcpu_id = vcpu_args->vcpu_id;
 
 	vcpu_args_set(vm, vcpu_id, 1, vcpu_id);
@@ -108,7 +111,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct kvm_enable_cap cap = {};
 	struct timespec clear_dirty_log_total = (struct timespec){0};
 
-	vm = create_vm(mode, nr_vcpus, guest_percpu_mem_size);
+	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size);
 
 	perf_test_args.wr_fract = p->wr_fract;
 
@@ -126,7 +129,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
-	add_vcpus(vm, nr_vcpus, guest_percpu_mem_size);
+	perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size);
 
 	sync_global_to_guest(vm, perf_test_args);
 
@@ -152,7 +155,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	/* Enable dirty logging */
 	clock_gettime(CLOCK_MONOTONIC, &start);
-	vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX,
+	vm_mem_region_set_flags(vm, PERF_TEST_MEM_SLOT_INDEX,
 				KVM_MEM_LOG_DIRTY_PAGES);
 	ts_diff = timespec_diff_now(start);
 	pr_info("Enabling dirty logging time: %ld.%.9lds\n\n",
@@ -179,7 +182,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 			iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
 
 		clock_gettime(CLOCK_MONOTONIC, &start);
-		kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap);
+		kvm_vm_get_dirty_log(vm, PERF_TEST_MEM_SLOT_INDEX, bmap);
 
 		ts_diff = timespec_diff_now(start);
 		get_dirty_log_total = timespec_add(get_dirty_log_total,
@@ -189,7 +192,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 		if (dirty_log_manual_caps) {
 			clock_gettime(CLOCK_MONOTONIC, &start);
-			kvm_vm_clear_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap, 0,
+			kvm_vm_clear_dirty_log(vm, PERF_TEST_MEM_SLOT_INDEX, bmap, 0,
 					       host_num_pages);
 
 			ts_diff = timespec_diff_now(start);
@@ -207,7 +210,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	/* Disable dirty logging */
 	clock_gettime(CLOCK_MONOTONIC, &start);
-	vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX, 0);
+	vm_mem_region_set_flags(vm, PERF_TEST_MEM_SLOT_INDEX, 0);
 	ts_diff = timespec_diff_now(start);
 	pr_info("Disabling dirty logging time: %ld.%.9lds\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
@@ -226,8 +229,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	free(bmap);
 	free(vcpu_threads);
-	ucall_uninit(vm);
-	kvm_vm_free(vm);
+	perf_test_destroy_vm(vm);
 }
 
 static void help(char *name)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 149766ecd68b..5cbb861525ed 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -16,6 +16,7 @@
 
 #include "sparsebit.h"
 
+#define KVM_MAX_VCPUS 512
 
 /*
  * Callers of kvm_util only have an incomplete/opaque description of the
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index cd4c258f458d..b1188823c31b 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -9,35 +9,15 @@
 #define SELFTEST_KVM_PERF_TEST_UTIL_H
 
 #include "kvm_util.h"
-#include "processor.h"
-
-#define MAX_VCPUS 512
-
-#define TEST_MEM_SLOT_INDEX		1
 
 /* Default guest test virtual memory offset */
 #define DEFAULT_GUEST_TEST_MEM		0xc0000000
 
 #define DEFAULT_PER_VCPU_MEM_SIZE	(1 << 30) /* 1G */
 
-/*
- * Guest physical memory offset of the testing memory slot.
- * This will be set to the topmost valid physical address minus
- * the test memory size.
- */
-static uint64_t guest_test_phys_mem;
-
-/*
- * Guest virtual memory offset of the testing memory slot.
- * Must not conflict with identity mapped test code.
- */
-static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
-static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
-
-/* Number of VCPUs for the test */
-static int nr_vcpus = 1;
+#define PERF_TEST_MEM_SLOT_INDEX	1
 
-struct vcpu_args {
+struct perf_test_vcpu_args {
 	uint64_t gva;
 	uint64_t pages;
 
@@ -51,119 +31,21 @@ struct perf_test_args {
 	uint64_t guest_page_size;
 	int wr_fract;
 
-	struct vcpu_args vcpu_args[MAX_VCPUS];
+	struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS];
 };
 
-static struct perf_test_args perf_test_args;
+extern struct perf_test_args perf_test_args;
 
 /*
- * Continuously write to the first 8 bytes of each page in the
- * specified region.
+ * Guest physical memory offset of the testing memory slot.
+ * This will be set to the topmost valid physical address minus
+ * the test memory size.
  */
-static void guest_code(uint32_t vcpu_id)
-{
-	struct vcpu_args *vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
-	uint64_t gva;
-	uint64_t pages;
-	int i;
-
-	/* Make sure vCPU args data structure is not corrupt. */
-	GUEST_ASSERT(vcpu_args->vcpu_id == vcpu_id);
-
-	gva = vcpu_args->gva;
-	pages = vcpu_args->pages;
-
-	while (true) {
-		for (i = 0; i < pages; i++) {
-			uint64_t addr = gva + (i * perf_test_args.guest_page_size);
-
-			if (i % perf_test_args.wr_fract == 0)
-				*(uint64_t *)addr = 0x0123456789ABCDEF;
-			else
-				READ_ONCE(*(uint64_t *)addr);
-		}
-
-		GUEST_SYNC(1);
-	}
-}
-
-static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
-				uint64_t vcpu_memory_bytes)
-{
-	struct kvm_vm *vm;
-	uint64_t guest_num_pages;
-
-	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
-
-	perf_test_args.host_page_size = getpagesize();
-	perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
-
-	guest_num_pages = vm_adjust_num_guest_pages(mode,
-				(vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size);
-
-	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.host_page_size == 0,
-		    "Guest memory size is not host page size aligned.");
-	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0,
-		    "Guest memory size is not guest page size aligned.");
-
-	vm = vm_create_with_vcpus(mode, vcpus,
-				  (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size,
-				  0, guest_code, NULL);
-
-	perf_test_args.vm = vm;
-
-	/*
-	 * If there should be more memory in the guest test region than there
-	 * can be pages in the guest, it will definitely cause problems.
-	 */
-	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
-		    "Requested more guest memory than address space allows.\n"
-		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
-		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
-		    vcpu_memory_bytes);
-
-	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
-			      perf_test_args.guest_page_size;
-	guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
-#ifdef __s390x__
-	/* Align to 1M (segment size) */
-	guest_test_phys_mem &= ~((1 << 20) - 1);
-#endif
-	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
-
-	/* Add an extra memory slot for testing */
-	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
-				    guest_test_phys_mem,
-				    TEST_MEM_SLOT_INDEX,
-				    guest_num_pages, 0);
-
-	/* Do mapping for the demand paging memory slot */
-	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages, 0);
-
-	ucall_init(vm, NULL);
-
-	return vm;
-}
-
-static void add_vcpus(struct kvm_vm *vm, int vcpus, uint64_t vcpu_memory_bytes)
-{
-	vm_paddr_t vcpu_gpa;
-	struct vcpu_args *vcpu_args;
-	int vcpu_id;
-
-	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
-		vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
-
-		vcpu_args->vcpu_id = vcpu_id;
-		vcpu_args->gva = guest_test_virt_mem +
-				 (vcpu_id * vcpu_memory_bytes);
-		vcpu_args->pages = vcpu_memory_bytes /
-				   perf_test_args.guest_page_size;
+extern uint64_t guest_test_phys_mem;
 
-		vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_memory_bytes);
-		pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
-			 vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_memory_bytes);
-	}
-}
+struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
+				uint64_t vcpu_memory_bytes);
+void perf_test_destroy_vm(struct kvm_vm *vm);
+void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus, uint64_t vcpu_memory_bytes);
 
 #endif /* SELFTEST_KVM_PERF_TEST_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
new file mode 100644
index 000000000000..9be1944c2d1c
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -0,0 +1,134 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020, Google LLC.
+ */
+
+#include "kvm_util.h"
+#include "perf_test_util.h"
+#include "processor.h"
+
+struct perf_test_args perf_test_args;
+
+uint64_t guest_test_phys_mem;
+
+/*
+ * Guest virtual memory offset of the testing memory slot.
+ * Must not conflict with identity mapped test code.
+ */
+static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+
+/*
+ * Continuously write to the first 8 bytes of each page in the
+ * specified region.
+ */
+static void guest_code(uint32_t vcpu_id)
+{
+	struct perf_test_vcpu_args *vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
+	uint64_t gva;
+	uint64_t pages;
+	int i;
+
+	/* Make sure vCPU args data structure is not corrupt. */
+	GUEST_ASSERT(vcpu_args->vcpu_id == vcpu_id);
+
+	gva = vcpu_args->gva;
+	pages = vcpu_args->pages;
+
+	while (true) {
+		for (i = 0; i < pages; i++) {
+			uint64_t addr = gva + (i * perf_test_args.guest_page_size);
+
+			if (i % perf_test_args.wr_fract == 0)
+				*(uint64_t *)addr = 0x0123456789ABCDEF;
+			else
+				READ_ONCE(*(uint64_t *)addr);
+		}
+
+		GUEST_SYNC(1);
+	}
+}
+
+struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
+				   uint64_t vcpu_memory_bytes)
+{
+	struct kvm_vm *vm;
+	uint64_t guest_num_pages;
+
+	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
+
+	perf_test_args.host_page_size = getpagesize();
+	perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
+
+	guest_num_pages = vm_adjust_num_guest_pages(mode,
+				(vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size);
+
+	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.host_page_size == 0,
+		    "Guest memory size is not host page size aligned.");
+	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0,
+		    "Guest memory size is not guest page size aligned.");
+
+	vm = vm_create_with_vcpus(mode, vcpus,
+				  (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size,
+				  0, guest_code, NULL);
+
+	perf_test_args.vm = vm;
+
+	/*
+	 * If there should be more memory in the guest test region than there
+	 * can be pages in the guest, it will definitely cause problems.
+	 */
+	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
+		    "Requested more guest memory than address space allows.\n"
+		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
+		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
+		    vcpu_memory_bytes);
+
+	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
+			      perf_test_args.guest_page_size;
+	guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
+#ifdef __s390x__
+	/* Align to 1M (segment size) */
+	guest_test_phys_mem &= ~((1 << 20) - 1);
+#endif
+	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
+
+	/* Add an extra memory slot for testing */
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+				    guest_test_phys_mem,
+				    PERF_TEST_MEM_SLOT_INDEX,
+				    guest_num_pages, 0);
+
+	/* Do mapping for the demand paging memory slot */
+	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages, 0);
+
+	ucall_init(vm, NULL);
+
+	return vm;
+}
+
+void perf_test_destroy_vm(struct kvm_vm *vm)
+{
+	ucall_uninit(vm);
+	kvm_vm_free(vm);
+}
+
+void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus, uint64_t vcpu_memory_bytes)
+{
+	vm_paddr_t vcpu_gpa;
+	struct perf_test_vcpu_args *vcpu_args;
+	int vcpu_id;
+
+	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+		vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
+
+		vcpu_args->vcpu_id = vcpu_id;
+		vcpu_args->gva = guest_test_virt_mem +
+				 (vcpu_id * vcpu_memory_bytes);
+		vcpu_args->pages = vcpu_memory_bytes /
+				   perf_test_args.guest_page_size;
+
+		vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_memory_bytes);
+		pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
+			 vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_memory_bytes);
+	}
+}
-- 
2.26.2



* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-11-16 12:19 [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Andrew Jones
                   ` (3 preceding siblings ...)
  2020-11-16 12:19 ` [PATCH v3 4/4] KVM: selftests: Implement perf_test_util more conventionally Andrew Jones
@ 2020-11-16 18:16 ` Paolo Bonzini
  2020-11-16 18:40   ` Peter Xu
  2020-11-20  8:05   ` Andrew Jones
  4 siblings, 2 replies; 17+ messages in thread
From: Paolo Bonzini @ 2020-11-16 18:16 UTC (permalink / raw)
  To: Andrew Jones, kvm; +Cc: borntraeger, frankja, bgardon, peterx

On 16/11/20 13:19, Andrew Jones wrote:
> This series attempts to clean up demand_paging_test, dirty_log_perf_test,
> and dirty_log_test by factoring out common code, creating some new API
> along the way. It also splits include/perf_test_util.h into a more
> conventional header and source pair.
> 
> I've tested on x86 and AArch64 (one config each), but not s390x.
> 
> v3:
>   - Rebased remaining four patches from v2 onto kvm/queue
>   - Picked up r-b's from Peter and Ben
> 
> v2: https://www.spinics.net/lists/kvm/msg228711.html

Unfortunately patch 2 is still broken:

$ ./dirty_log_test -M dirty-ring
Setting log mode to: 'dirty-ring'
Test iterations: 32, interval: 10 (ms)
Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
==== Test Assertion Failure ====
   lib/kvm_util.c:85: ret == 0
   pid=2010122 tid=2010122 - Invalid argument
      1	0x0000000000402ee7: vm_enable_cap at kvm_util.c:84
      2	0x0000000000403004: vm_enable_dirty_ring at kvm_util.c:124
      3	0x00000000004021a5: log_mode_create_vm_done at dirty_log_test.c:453
      4	 (inlined by) run_test at dirty_log_test.c:683
      5	0x000000000040b643: for_each_guest_mode at guest_modes.c:37
      6	0x00000000004019c2: main at dirty_log_test.c:864
      7	0x00007fe3f48207b2: ?? ??:0
      8	0x0000000000401aad: _start at ??:?
   KVM_ENABLE_CAP IOCTL failed,
   rc: -1 errno: 22

(Also fails without -M).

Paolo

> 
> Thanks,
> drew
> 
> 
> Andrew Jones (4):
>    KVM: selftests: Factor out guest mode code
>    KVM: selftests: dirty_log_test: Remove create_vm
>    KVM: selftests: Use vm_create_with_vcpus in create_vm
>    KVM: selftests: Implement perf_test_util more conventionally
> 
>   tools/testing/selftests/kvm/Makefile          |   2 +-
>   .../selftests/kvm/demand_paging_test.c        | 118 ++++--------
>   .../selftests/kvm/dirty_log_perf_test.c       | 145 +++++---------
>   tools/testing/selftests/kvm/dirty_log_test.c  | 179 +++++-------------
>   .../selftests/kvm/include/guest_modes.h       |  21 ++
>   .../testing/selftests/kvm/include/kvm_util.h  |   9 +
>   .../selftests/kvm/include/perf_test_util.h    | 167 ++--------------
>   tools/testing/selftests/kvm/lib/guest_modes.c |  70 +++++++
>   tools/testing/selftests/kvm/lib/kvm_util.c    |   9 +-
>   .../selftests/kvm/lib/perf_test_util.c        | 134 +++++++++++++
>   10 files changed, 378 insertions(+), 476 deletions(-)
>   create mode 100644 tools/testing/selftests/kvm/include/guest_modes.h
>   create mode 100644 tools/testing/selftests/kvm/lib/guest_modes.c
>   create mode 100644 tools/testing/selftests/kvm/lib/perf_test_util.c
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-11-16 18:16 ` [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Paolo Bonzini
@ 2020-11-16 18:40   ` Peter Xu
  2020-11-18  8:38     ` Andrew Jones
  2020-11-20  8:05   ` Andrew Jones
  1 sibling, 1 reply; 17+ messages in thread
From: Peter Xu @ 2020-11-16 18:40 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Andrew Jones, kvm, borntraeger, frankja, bgardon

On Mon, Nov 16, 2020 at 07:16:50PM +0100, Paolo Bonzini wrote:
> On 16/11/20 13:19, Andrew Jones wrote:
> > This series attempts to clean up demand_paging_test, dirty_log_perf_test,
> > and dirty_log_test by factoring out common code, creating some new API
> > along the way. It also splits include/perf_test_util.h into a more
> > conventional header and source pair.
> > 
> > I've tested on x86 and AArch64 (one config each), but not s390x.
> > 
> > v3:
> >   - Rebased remaining four patches from v2 onto kvm/queue
> >   - Picked up r-b's from Peter and Ben
> > 
> > v2: https://www.spinics.net/lists/kvm/msg228711.html
> 
> Unfortunately patch 2 is still broken:
> 
> $ ./dirty_log_test -M dirty-ring
> Setting log mode to: 'dirty-ring'
> Test iterations: 32, interval: 10 (ms)
> Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
> ==== Test Assertion Failure ====
>   lib/kvm_util.c:85: ret == 0
>   pid=2010122 tid=2010122 - Invalid argument
>      1	0x0000000000402ee7: vm_enable_cap at kvm_util.c:84
>      2	0x0000000000403004: vm_enable_dirty_ring at kvm_util.c:124
>      3	0x00000000004021a5: log_mode_create_vm_done at dirty_log_test.c:453
>      4	 (inlined by) run_test at dirty_log_test.c:683
>      5	0x000000000040b643: for_each_guest_mode at guest_modes.c:37
>      6	0x00000000004019c2: main at dirty_log_test.c:864
>      7	0x00007fe3f48207b2: ?? ??:0
>      8	0x0000000000401aad: _start at ??:?
>   KVM_ENABLE_CAP IOCTL failed,
>   rc: -1 errno: 22
> 
> (Also fails without -M).

It should be because of the ordering of vcpu creation and dirty ring
enabling, since currently, for simplicity, the dirty ring can only be
enabled before any vcpus have been created:

+       if (kvm->created_vcpus) {
+               /* We don't allow to change this value after vcpu created */
+               r = -EINVAL;
+       } else {
+               kvm->dirty_ring_size = size;
+               r = 0;
+       }

We may need to call log_mode_create_vm_done() before creating any vcpus
somehow.  Sorry to not have noticed that when reviewing it.
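
In selftest terms the fix has to land in the window between
KVM_CREATE_VM and the first KVM_CREATE_VCPU, e.g. (just a sketch using
the existing helpers; ring_size is whatever the test picks):

	vm = vm_create(mode, pages, O_RDWR);
	vm_enable_dirty_ring(vm, ring_size);	/* rejected once vcpus exist */
	vm_vcpu_add_default(vm, VCPU_ID, guest_code);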

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-11-16 18:40   ` Peter Xu
@ 2020-11-18  8:38     ` Andrew Jones
  2020-11-18  9:05       ` Andrew Jones
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Jones @ 2020-11-18  8:38 UTC (permalink / raw)
  To: Peter Xu; +Cc: Paolo Bonzini, kvm, borntraeger, frankja, bgardon

On Mon, Nov 16, 2020 at 01:40:11PM -0500, Peter Xu wrote:
> On Mon, Nov 16, 2020 at 07:16:50PM +0100, Paolo Bonzini wrote:
> > On 16/11/20 13:19, Andrew Jones wrote:
> > > This series attempts to clean up demand_paging_test, dirty_log_perf_test,
> > > and dirty_log_test by factoring out common code, creating some new API
> > > along the way. It also splits include/perf_test_util.h into a more
> > > conventional header and source pair.
> > > 
> > > I've tested on x86 and AArch64 (one config each), but not s390x.
> > > 
> > > v3:
> > >   - Rebased remaining four patches from v2 onto kvm/queue
> > >   - Picked up r-b's from Peter and Ben
> > > 
> > > v2: https://www.spinics.net/lists/kvm/msg228711.html
> > 
> > Unfortunately patch 2 is still broken:
> > 
> > $ ./dirty_log_test -M dirty-ring
> > Setting log mode to: 'dirty-ring'
> > Test iterations: 32, interval: 10 (ms)
> > Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
> > ==== Test Assertion Failure ====
> >   lib/kvm_util.c:85: ret == 0
> >   pid=2010122 tid=2010122 - Invalid argument
> >      1	0x0000000000402ee7: vm_enable_cap at kvm_util.c:84
> >      2	0x0000000000403004: vm_enable_dirty_ring at kvm_util.c:124
> >      3	0x00000000004021a5: log_mode_create_vm_done at dirty_log_test.c:453
> >      4	 (inlined by) run_test at dirty_log_test.c:683
> >      5	0x000000000040b643: for_each_guest_mode at guest_modes.c:37
> >      6	0x00000000004019c2: main at dirty_log_test.c:864
> >      7	0x00007fe3f48207b2: ?? ??:0
> >      8	0x0000000000401aad: _start at ??:?
> >   KVM_ENABLE_CAP IOCTL failed,
> >   rc: -1 errno: 22
> > 
> > (Also fails without -M).
> 
> It should be because of the ordering of vcpu creation and dirty ring
> enabling, since currently, for simplicity, the dirty ring can only be
> enabled before any vcpus have been created:
> 
> +       if (kvm->created_vcpus) {
> +               /* We don't allow to change this value after vcpu created */
> +               r = -EINVAL;
> +       } else {
> +               kvm->dirty_ring_size = size;
> +               r = 0;
> +       }
> 
> We may need to call log_mode_create_vm_done() before creating any vcpus
> somehow.  Sorry to not have noticed that when reviewing it.
>

And sorry for not having tested with '-M dirty-ring'. I thought we were
trying to ensure each unique test type had its own test file (even if we
have to do the weird inclusion of C files). Doing that, the command line
options are then only used to change stuff like verbosity or to experiment
with tweaked configurations.

If we're not doing that, then I think we should. We don't want to try and
explain to all the CI people how each test should be run. It's much easier
to say "run all the binaries, no parameters necessary". Each binary with
no parameters should run the test(s) using a good default or by executing
all possible configurations.
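
In dirty_log_test terms that would mean main() looping over all log
modes when no -M is given, something like (sketch):

	for (i = 0; i < LOG_MODE_NUM; i++) {
		pr_info("Testing Log Mode '%s'\n", log_modes[i].name);
		host_log_mode = i;
		for_each_guest_mode(run_test, &p);
	}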

Thanks,
drew


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-11-18  8:38     ` Andrew Jones
@ 2020-11-18  9:05       ` Andrew Jones
  2020-11-18  9:08         ` Andrew Jones
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Jones @ 2020-11-18  9:05 UTC (permalink / raw)
  To: Peter Xu; +Cc: Paolo Bonzini, kvm, borntraeger, frankja, bgardon

On Wed, Nov 18, 2020 at 09:38:31AM +0100, Andrew Jones wrote:
> On Mon, Nov 16, 2020 at 01:40:11PM -0500, Peter Xu wrote:
> > On Mon, Nov 16, 2020 at 07:16:50PM +0100, Paolo Bonzini wrote:
> > > On 16/11/20 13:19, Andrew Jones wrote:
> > > > This series attempts to clean up demand_paging_test, dirty_log_perf_test,
> > > > and dirty_log_test by factoring out common code, creating some new API
> > > > along the way. It also splits include/perf_test_util.h into a more
> > > > conventional header and source pair.
> > > > 
> > > > I've tested on x86 and AArch64 (one config each), but not s390x.
> > > > 
> > > > v3:
> > > >   - Rebased remaining four patches from v2 onto kvm/queue
> > > >   - Picked up r-b's from Peter and Ben
> > > > 
> > > > v2: https://www.spinics.net/lists/kvm/msg228711.html
> > > 
> > > Unfortunately patch 2 is still broken:
> > > 
> > > $ ./dirty_log_test -M dirty-ring
> > > Setting log mode to: 'dirty-ring'
> > > Test iterations: 32, interval: 10 (ms)
> > > Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
> > > ==== Test Assertion Failure ====
> > >   lib/kvm_util.c:85: ret == 0
> > >   pid=2010122 tid=2010122 - Invalid argument
> > >      1	0x0000000000402ee7: vm_enable_cap at kvm_util.c:84
> > >      2	0x0000000000403004: vm_enable_dirty_ring at kvm_util.c:124
> > >      3	0x00000000004021a5: log_mode_create_vm_done at dirty_log_test.c:453
> > >      4	 (inlined by) run_test at dirty_log_test.c:683
> > >      5	0x000000000040b643: for_each_guest_mode at guest_modes.c:37
> > >      6	0x00000000004019c2: main at dirty_log_test.c:864
> > >      7	0x00007fe3f48207b2: ?? ??:0
> > >      8	0x0000000000401aad: _start at ??:?
> > >   KVM_ENABLE_CAP IOCTL failed,
> > >   rc: -1 errno: 22
> > > 
> > > (Also fails without -M).
> > 
> > It should be because of the ordering of vcpu creation and dirty ring
> > enabling, since currently, for simplicity, the dirty ring can only be
> > enabled before any vcpus have been created:
> > 
> > +       if (kvm->created_vcpus) {
> > +               /* We don't allow to change this value after vcpu created */
> > +               r = -EINVAL;
> > +       } else {
> > +               kvm->dirty_ring_size = size;
> > +               r = 0;
> > +       }
> > 
> > We may need to call log_mode_create_vm_done() before creating any vcpus
> > somehow.  Sorry to not have noticed that when reviewing it.
> >
> 
> And sorry for not having tested with '-M dirty-ring'. I thought we were
> trying to ensure each unique test type had its own test file (even if we
> have to do the weird inclusion of C files). Doing that, the command line
> options are then only used to change stuff like verbosity or to experiment
> with tweaked configurations.
> 
> If we're not doing that, then I think we should. We don't want to try and
> explain to all the CI people how each test should be run. It's much easier
> to say "run all the binaries, no parameters necessary". Each binary with
> no parameters should run the test(s) using a good default or by executing
> all possible configurations.
>

I just double checked and we are running all modes by default. This is
the output I get

Test iterations: 32, interval: 10 (ms)
Testing Log Mode 'dirty-log'
Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
guest physical test memory offset: 0x7fbfffc000
Dirtied 1024 pages
Total bits checked: dirty (209373), clear (7917184), track_next (38548)
Testing Log Mode 'clear-log'
Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
guest physical test memory offset: 0x7fbfffc000
Dirtied 1024 pages
Total bits checked: dirty (464547), clear (7662010), track_next (37553)
Testing Log Mode 'dirty-ring'
Log mode 'dirty-ring' not supported, skipping test

which matches the output before this patch, except for minor differences
in the numbers.

I'm not sure how this is failing in your environment and not mine.

Peter?

Thanks,
drew


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-11-18  9:05       ` Andrew Jones
@ 2020-11-18  9:08         ` Andrew Jones
  0 siblings, 0 replies; 17+ messages in thread
From: Andrew Jones @ 2020-11-18  9:08 UTC (permalink / raw)
  To: Peter Xu; +Cc: Paolo Bonzini, kvm, borntraeger, frankja, bgardon

On Wed, Nov 18, 2020 at 10:05:59AM +0100, Andrew Jones wrote:
> On Wed, Nov 18, 2020 at 09:38:31AM +0100, Andrew Jones wrote:
> > On Mon, Nov 16, 2020 at 01:40:11PM -0500, Peter Xu wrote:
> > > On Mon, Nov 16, 2020 at 07:16:50PM +0100, Paolo Bonzini wrote:
> > > > On 16/11/20 13:19, Andrew Jones wrote:
> > > > > This series attempts to clean up demand_paging_test, dirty_log_perf_test,
> > > > > and dirty_log_test by factoring out common code, creating some new API
> > > > > along the way. It also splits include/perf_test_util.h into a more
> > > > > conventional header and source pair.
> > > > > 
> > > > > I've tested on x86 and AArch64 (one config each), but not s390x.
> > > > > 
> > > > > v3:
> > > > >   - Rebased remaining four patches from v2 onto kvm/queue
> > > > >   - Picked up r-b's from Peter and Ben
> > > > > 
> > > > > v2: https://www.spinics.net/lists/kvm/msg228711.html
> > > > 
> > > > Unfortunately patch 2 is still broken:
> > > > 
> > > > $ ./dirty_log_test -M dirty-ring
> > > > Setting log mode to: 'dirty-ring'
> > > > Test iterations: 32, interval: 10 (ms)
> > > > Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
> > > > ==== Test Assertion Failure ====
> > > >   lib/kvm_util.c:85: ret == 0
> > > >   pid=2010122 tid=2010122 - Invalid argument
> > > >      1	0x0000000000402ee7: vm_enable_cap at kvm_util.c:84
> > > >      2	0x0000000000403004: vm_enable_dirty_ring at kvm_util.c:124
> > > >      3	0x00000000004021a5: log_mode_create_vm_done at dirty_log_test.c:453
> > > >      4	 (inlined by) run_test at dirty_log_test.c:683
> > > >      5	0x000000000040b643: for_each_guest_mode at guest_modes.c:37
> > > >      6	0x00000000004019c2: main at dirty_log_test.c:864
> > > >      7	0x00007fe3f48207b2: ?? ??:0
> > > >      8	0x0000000000401aad: _start at ??:?
> > > >   KVM_ENABLE_CAP IOCTL failed,
> > > >   rc: -1 errno: 22
> > > > 
> > > > (Also fails without -M).
> > > 
> > > It should be because of the ordering of vcpu creation and dirty ring
> > > enabling, since currently, for simplicity, the dirty ring can only be
> > > enabled before any vcpus have been created:
> > > 
> > > +       if (kvm->created_vcpus) {
> > > +               /* We don't allow to change this value after vcpu created */
> > > +               r = -EINVAL;
> > > +       } else {
> > > +               kvm->dirty_ring_size = size;
> > > +               r = 0;
> > > +       }
> > > 
> > > We may need to call log_mode_create_vm_done() before creating any vcpus
> > > somehow.  Sorry to not have noticed that when reviewing it.
> > >
> > 
> > And sorry for not having tested with '-M dirty-ring'. I thought we were
> > trying to ensure each unique test type had its own test file (even if we
> > have to do the weird inclusion of C files). Doing that, the command line
> > options are then only used to change stuff like verbosity or to experiment
> > with tweaked configurations.
> > 
> > If we're not doing that, then I think we should. We don't want to try and
> > explain to all the CI people how each test should be run. It's much easier
> > to say "run all the binaries, no parameters necessary". Each binary with
> > no parameters should run the test(s) using a good default or by executing
> > all possible configurations.
> >
> 
> I just double checked and we are running all modes by default. This is
> the output I get
> 
> Test iterations: 32, interval: 10 (ms)
> Testing Log Mode 'dirty-log'
> Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
> guest physical test memory offset: 0x7fbfffc000
> Dirtied 1024 pages
> Total bits checked: dirty (209373), clear (7917184), track_next (38548)
> Testing Log Mode 'clear-log'
> Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
> guest physical test memory offset: 0x7fbfffc000
> Dirtied 1024 pages
> Total bits checked: dirty (464547), clear (7662010), track_next (37553)
> Testing Log Mode 'dirty-ring'
> Log mode 'dirty-ring' not supported, skipping test
> 
> which matches the output before this patch, except for minor differences
> in the numbers.
> 
> I'm not sure how this is failing in your environment and not mine.
>

Doh, sorry. I didn't actually read the output. I was only looking for an
assert and the string 'dirty-ring'. It looks like the test is being
skipped for me, which would explain why it "works"...
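
The skip presumably just means my host kernel doesn't advertise the
capability, i.e. a probe along these lines returns 0 (sketch; exact
helper names may differ):

	/* KVM_CHECK_EXTENSION returns the max ring size in bytes, or 0. */
	if (kvm_check_cap(KVM_CAP_DIRTY_LOG_RING) <= 0) {
		print_skip("Log mode 'dirty-ring' not supported");
		return;
	}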

drew 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-11-16 18:16 ` [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Paolo Bonzini
  2020-11-16 18:40   ` Peter Xu
@ 2020-11-20  8:05   ` Andrew Jones
  2020-11-20  8:48     ` Paolo Bonzini
  1 sibling, 1 reply; 17+ messages in thread
From: Andrew Jones @ 2020-11-20  8:05 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, borntraeger, frankja, bgardon, peterx

On Mon, Nov 16, 2020 at 07:16:50PM +0100, Paolo Bonzini wrote:
> On 16/11/20 13:19, Andrew Jones wrote:
> > This series attempts to clean up demand_paging_test, dirty_log_perf_test,
> > and dirty_log_test by factoring out common code, creating some new API
> > along the way. It also splits include/perf_test_util.h into a more
> > conventional header and source pair.
> > 
> > I've tested on x86 and AArch64 (one config each), but not s390x.
> > 
> > v3:
> >   - Rebased remaining four patches from v2 onto kvm/queue
> >   - Picked up r-b's from Peter and Ben
> > 
> > v2: https://www.spinics.net/lists/kvm/msg228711.html
> 
> Unfortunately patch 2 is still broken:
> 
> $ ./dirty_log_test -M dirty-ring
> Setting log mode to: 'dirty-ring'
> Test iterations: 32, interval: 10 (ms)
> Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
> ==== Test Assertion Failure ====
>   lib/kvm_util.c:85: ret == 0
>   pid=2010122 tid=2010122 - Invalid argument
>      1	0x0000000000402ee7: vm_enable_cap at kvm_util.c:84
>      2	0x0000000000403004: vm_enable_dirty_ring at kvm_util.c:124
>      3	0x00000000004021a5: log_mode_create_vm_done at dirty_log_test.c:453
>      4	 (inlined by) run_test at dirty_log_test.c:683
>      5	0x000000000040b643: for_each_guest_mode at guest_modes.c:37
>      6	0x00000000004019c2: main at dirty_log_test.c:864
>      7	0x00007fe3f48207b2: ?? ??:0
>      8	0x0000000000401aad: _start at ??:?
>   KVM_ENABLE_CAP IOCTL failed,
>   rc: -1 errno: 22
> 

So I finally looked closely enough at the dirty-ring stuff to see that
patch 2 was always a dumb idea. dirty_ring_create_vm_done() has a comment
that says "Switch to dirty ring mode after VM creation but before any of
the vcpu creation". I'd argue that that comment would be better served at
the log_mode_create_vm_done() call, but that doesn't excuse my sloppiness
here. Maybe someday we can add a patch that adds that comment and also
tries to use common code for the number of pages calculation for the VM,
but not today.
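
I.e., roughly the shape create_vm() already has, with the comment moved
to the generic call (sketch from memory; extra_pg_pages is the test's
page-table overhead estimate):

	vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);

	/*
	 * Switch to dirty ring mode after VM creation but before any
	 * of the vcpu creation.
	 */
	log_mode_create_vm_done(vm);

	vm_vcpu_add_default(vm, VCPU_ID, guest_code);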

Regarding this series, if the other three patches look good, then we
can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.

Thanks,
drew


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-11-20  8:05   ` Andrew Jones
@ 2020-11-20  8:48     ` Paolo Bonzini
  2020-12-16 12:46       ` Andrew Jones
  0 siblings, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2020-11-20  8:48 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, borntraeger, frankja, bgardon, peterx

On 20/11/20 09:05, Andrew Jones wrote:
> So I finally looked closely enough at the dirty-ring stuff to see that
> patch 2 was always a dumb idea. dirty_ring_create_vm_done() has a comment
> that says "Switch to dirty ring mode after VM creation but before any of
> the vcpu creation". I'd argue that that comment would be better served at
> the log_mode_create_vm_done() call, but that doesn't excuse my sloppiness
> here. Maybe someday we can add a patch that adds that comment and also
> tries to use common code for the number of pages calculation for the VM,
> but not today.
> 
> Regarding this series, if the other three patches look good, then we
> can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.

Yes, the rest is good.

Paolo


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-11-20  8:48     ` Paolo Bonzini
@ 2020-12-16 12:46       ` Andrew Jones
  2020-12-17 22:12         ` Ben Gardon
  2020-12-18 10:32         ` Paolo Bonzini
  0 siblings, 2 replies; 17+ messages in thread
From: Andrew Jones @ 2020-12-16 12:46 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, borntraeger, frankja, bgardon, peterx

On Fri, Nov 20, 2020 at 09:48:26AM +0100, Paolo Bonzini wrote:
> On 20/11/20 09:05, Andrew Jones wrote:
> > So I finally looked closely enough at the dirty-ring stuff to see that
> > patch 2 was always a dumb idea. dirty_ring_create_vm_done() has a comment
> > that says "Switch to dirty ring mode after VM creation but before any of
> > the vcpu creation". I'd argue that that comment would be better served at
> > the log_mode_create_vm_done() call, but that doesn't excuse my sloppiness
> > here. Maybe someday we can add a patch that adds that comment and also
> > tries to use common code for the number of pages calculation for the VM,
> > but not today.
> > 
> > Regarding this series, if the other three patches look good, then we
> > can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.
> 
> Yes, the rest is good.
>

Ping?

Thanks,
drew 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-12-16 12:46       ` Andrew Jones
@ 2020-12-17 22:12         ` Ben Gardon
  2020-12-18 10:32         ` Paolo Bonzini
  1 sibling, 0 replies; 17+ messages in thread
From: Ben Gardon @ 2020-12-17 22:12 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Paolo Bonzini, kvm, Christian Borntraeger, Janosch Frank, Peter Xu

On Wed, Dec 16, 2020 at 4:47 AM Andrew Jones <drjones@redhat.com> wrote:
>
> On Fri, Nov 20, 2020 at 09:48:26AM +0100, Paolo Bonzini wrote:
> > On 20/11/20 09:05, Andrew Jones wrote:
> > > So I finally looked closely enough at the dirty-ring stuff to see that
> > > patch 2 was always a dumb idea. dirty_ring_create_vm_done() has a comment
> > > that says "Switch to dirty ring mode after VM creation but before any of
> > > the vcpu creation". I'd argue that that comment would be better served at
> > > the log_mode_create_vm_done() call, but that doesn't excuse my sloppiness
> > > here. Maybe someday we can add a patch that adds that comment and also
> > > tries to use common code for the number of pages calculation for the VM,
> > > but not today.
> > >
> > > Regarding this series, if the other three patches look good, then we
> > > can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.
> >
> > Yes, the rest is good.
> >
>
> Ping?
>
> Thanks,
> drew

FWIW, patches 1, 3, and 4 look good to me too. Once these are merged,
I've got another new test ready to go which builds on these.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-12-16 12:46       ` Andrew Jones
  2020-12-17 22:12         ` Ben Gardon
@ 2020-12-18 10:32         ` Paolo Bonzini
  2020-12-18 11:13           ` Andrew Jones
  1 sibling, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2020-12-18 10:32 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, borntraeger, frankja, bgardon, peterx

On 16/12/20 13:46, Andrew Jones wrote:
> On Fri, Nov 20, 2020 at 09:48:26AM +0100, Paolo Bonzini wrote:
>> On 20/11/20 09:05, Andrew Jones wrote:
>>> So I finally looked closely enough at the dirty-ring stuff to see that
>>> patch 2 was always a dumb idea. dirty_ring_create_vm_done() has a comment
>>> that says "Switch to dirty ring mode after VM creation but before any of
>>> the vcpu creation". I'd argue that that comment would be better served at
>>> the log_mode_create_vm_done() call, but that doesn't excuse my sloppiness
>>> here. Maybe someday we can add a patch that adds that comment and also
>>> tries to use common code for the number of pages calculation for the VM,
>>> but not today.
>>>
>>> Regarding this series, if the other three patches look good, then we
>>> can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.
>>
>> Yes, the rest is good.
>>
> 
> Ping?

Sorry, I was waiting for a resend.

Paolo


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-12-18 10:32         ` Paolo Bonzini
@ 2020-12-18 11:13           ` Andrew Jones
  2020-12-18 13:08             ` Paolo Bonzini
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Jones @ 2020-12-18 11:13 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, borntraeger, frankja, bgardon, peterx

On Fri, Dec 18, 2020 at 11:32:02AM +0100, Paolo Bonzini wrote:
> On 16/12/20 13:46, Andrew Jones wrote:
> > On Fri, Nov 20, 2020 at 09:48:26AM +0100, Paolo Bonzini wrote:
> > > On 20/11/20 09:05, Andrew Jones wrote:
> > > > So I finally looked closely enough at the dirty-ring stuff to see that
> > > > patch 2 was always a dumb idea. dirty_ring_create_vm_done() has a comment
> > > > that says "Switch to dirty ring mode after VM creation but before any of
> > > > the vcpu creation". I'd argue that that comment would be better served at
> > > > the log_mode_create_vm_done() call, but that doesn't excuse my sloppiness
> > > > here. Maybe someday we can add a patch that adds that comment and also
> > > > tries to use common code for the number of pages calculation for the VM,
> > > > but not today.
> > > > 
> > > > Regarding this series, if the other three patches look good, then we
> > > > can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.
> > > 
> > > Yes, the rest is good.
> > > 
> > 
> > Ping?
> 
> Sorry, I was waiting for a resend.
>

Oops, I understood that we'd just drop 2/4 while applying. Should I resend
now?

Thanks,
drew 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 0/4] KVM: selftests: Cleanups, take 2
  2020-12-18 11:13           ` Andrew Jones
@ 2020-12-18 13:08             ` Paolo Bonzini
  0 siblings, 0 replies; 17+ messages in thread
From: Paolo Bonzini @ 2020-12-18 13:08 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, borntraeger, frankja, bgardon, peterx

On 18/12/20 12:13, Andrew Jones wrote:
> On Fri, Dec 18, 2020 at 11:32:02AM +0100, Paolo Bonzini wrote:
>> On 16/12/20 13:46, Andrew Jones wrote:
>>> On Fri, Nov 20, 2020 at 09:48:26AM +0100, Paolo Bonzini wrote:
>>>> On 20/11/20 09:05, Andrew Jones wrote:
>>>>> So I finally looked closely enough at the dirty-ring stuff to see that
>>>>> patch 2 was always a dumb idea. dirty_ring_create_vm_done() has a comment
>>>>> that says "Switch to dirty ring mode after VM creation but before any of
>>>>> the vcpu creation". I'd argue that that comment would be better served at
>>>>> the log_mode_create_vm_done() call, but that doesn't excuse my sloppiness
>>>>> here. Maybe someday we can add a patch that adds that comment and also
>>>>> tries to use common code for the number of pages calculation for the VM,
>>>>> but not today.
>>>>>
>>>>> Regarding this series, if the other three patches look good, then we
>>>>> can just drop 2/4. 3/4 and 4/4 should still apply cleanly and work.
>>>>
>>>> Yes, the rest is good.
>>>>
>>>
>>> Ping?
>>
>> Sorry, I was waiting for a resend.
>>
> 
> Oops, I understood that we'd just drop 2/4 while applying. Should I resend
> now?

Yes, please.  Maybe there were conflicts; I don't remember why I dropped 
the whole series on the floor.

Paolo


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2020-12-18 13:09 UTC | newest]

Thread overview: 17+ messages
2020-11-16 12:19 [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Andrew Jones
2020-11-16 12:19 ` [PATCH v3 1/4] KVM: selftests: Factor out guest mode code Andrew Jones
2020-11-16 12:19 ` [PATCH v3 2/4] KVM: selftests: dirty_log_test: Remove create_vm Andrew Jones
2020-11-16 12:19 ` [PATCH v3 3/4] KVM: selftests: Use vm_create_with_vcpus in create_vm Andrew Jones
2020-11-16 12:19 ` [PATCH v3 4/4] KVM: selftests: Implement perf_test_util more conventionally Andrew Jones
2020-11-16 18:16 ` [PATCH v3 0/4] KVM: selftests: Cleanups, take 2 Paolo Bonzini
2020-11-16 18:40   ` Peter Xu
2020-11-18  8:38     ` Andrew Jones
2020-11-18  9:05       ` Andrew Jones
2020-11-18  9:08         ` Andrew Jones
2020-11-20  8:05   ` Andrew Jones
2020-11-20  8:48     ` Paolo Bonzini
2020-12-16 12:46       ` Andrew Jones
2020-12-17 22:12         ` Ben Gardon
2020-12-18 10:32         ` Paolo Bonzini
2020-12-18 11:13           ` Andrew Jones
2020-12-18 13:08             ` Paolo Bonzini
