kvm.vger.kernel.org archive mirror
* [PATCH 00/11] KVM: selftests: Cleanups
@ 2020-11-04 21:23 Andrew Jones
  2020-11-04 21:23 ` [PATCH 01/11] KVM: selftests: Add x86_64/tsc_msrs_test to .gitignore Andrew Jones
                   ` (13 more replies)
  0 siblings, 14 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

This series attempts to clean up demand_paging_test and dirty_log_test
by factoring out common code, creating some new API along the way. Its
main goal is to prepare for even more factoring that Ben and Peter want
to do. The series would have a nice negative diff stat, but it also
picks up a few of Peter's patches for his new dirty log test. So, the
+/- diff stat is close to equal. It's not as close as an electoral vote
count, but it's close.

I've tested on x86 and AArch64 (one config each), but not s390x.

Thanks,
drew


Andrew Jones (8):
  KVM: selftests: Add x86_64/tsc_msrs_test to .gitignore
  KVM: selftests: Drop pointless vm_create wrapper
  KVM: selftests: Make the per vcpu memory size global
  KVM: selftests: Make the number of vcpus global
  KVM: selftests: Factor out guest mode code
  KVM: selftests: Make vm_create_default common
  KVM: selftests: Introduce vm_create_[default_]vcpus
  KVM: selftests: Remove create_vm

Peter Xu (3):
  KVM: selftests: Always clear dirty bitmap after iteration
  KVM: selftests: Use a single binary for dirty/clear log test
  KVM: selftests: Introduce after_vcpu_run hook for dirty log test

 tools/testing/selftests/kvm/.gitignore        |   2 +-
 tools/testing/selftests/kvm/Makefile          |   4 +-
 .../selftests/kvm/clear_dirty_log_test.c      |   6 -
 .../selftests/kvm/demand_paging_test.c        | 213 +++-------
 tools/testing/selftests/kvm/dirty_log_test.c  | 372 ++++++++++--------
 .../selftests/kvm/include/aarch64/processor.h |   3 +
 .../selftests/kvm/include/guest_modes.h       |  21 +
 .../testing/selftests/kvm/include/kvm_util.h  |  20 +-
 .../selftests/kvm/include/s390x/processor.h   |   4 +
 .../selftests/kvm/include/x86_64/processor.h  |   4 +
 .../selftests/kvm/lib/aarch64/processor.c     |  17 -
 tools/testing/selftests/kvm/lib/guest_modes.c |  70 ++++
 tools/testing/selftests/kvm/lib/kvm_util.c    |  62 ++-
 .../selftests/kvm/lib/s390x/processor.c       |  22 --
 .../selftests/kvm/lib/x86_64/processor.c      |  32 --
 15 files changed, 445 insertions(+), 407 deletions(-)
 delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c
 create mode 100644 tools/testing/selftests/kvm/include/guest_modes.h
 create mode 100644 tools/testing/selftests/kvm/lib/guest_modes.c

-- 
2.26.2



* [PATCH 01/11] KVM: selftests: Add x86_64/tsc_msrs_test to .gitignore
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 02/11] KVM: selftests: Drop pointless vm_create wrapper Andrew Jones
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/.gitignore | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index d2c2d6205008..06c71dc2f7e2 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -15,6 +15,7 @@
 /x86_64/vmx_preemption_timer_test
 /x86_64/svm_vmcall_test
 /x86_64/sync_regs_test
+/x86_64/tsc_msrs_test
 /x86_64/vmx_apic_access_test
 /x86_64/vmx_close_while_nested_test
 /x86_64/vmx_dirty_log_test
-- 
2.26.2



* [PATCH 02/11] KVM: selftests: Drop pointless vm_create wrapper
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
  2020-11-04 21:23 ` [PATCH 01/11] KVM: selftests: Add x86_64/tsc_msrs_test to .gitignore Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 03/11] KVM: selftests: Always clear dirty bitmap after iteration Andrew Jones
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/demand_paging_test.c | 2 +-
 tools/testing/selftests/kvm/dirty_log_test.c     | 2 +-
 tools/testing/selftests/kvm/include/kvm_util.h   | 1 -
 tools/testing/selftests/kvm/lib/kvm_util.c       | 7 +------
 4 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 360cd3ea4cd6..d1fe6c8e595b 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -166,7 +166,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
 
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
-	vm = _vm_create(mode, pages, O_RDWR);
+	vm = vm_create(mode, pages, O_RDWR);
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 #ifdef __x86_64__
 	vm_create_irqchip(vm);
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 752ec158ac59..37445aa4876f 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -252,7 +252,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
-	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 #ifdef __x86_64__
 	vm_create_irqchip(vm);
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 919e161dd289..ebf7f87d72df 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -65,7 +65,6 @@ int kvm_check_cap(long cap);
 int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
 
 struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm);
-struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm);
 void kvm_vm_free(struct kvm_vm *vmp);
 void kvm_vm_restart(struct kvm_vm *vmp, int perm);
 void kvm_vm_release(struct kvm_vm *vmp);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 3327cebc1095..b9943e935dc7 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -152,7 +152,7 @@ _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params)
  * descriptor to control the created VM is created with the permissions
  * given by perm (e.g. O_RDWR).
  */
-struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
+struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 {
 	struct kvm_vm *vm;
 
@@ -243,11 +243,6 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	return vm;
 }
 
-struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
-{
-	return _vm_create(mode, phy_pages, perm);
-}
-
 /*
  * VM Restart
  *
-- 
2.26.2



* [PATCH 03/11] KVM: selftests: Always clear dirty bitmap after iteration
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
  2020-11-04 21:23 ` [PATCH 01/11] KVM: selftests: Add x86_64/tsc_msrs_test to .gitignore Andrew Jones
  2020-11-04 21:23 ` [PATCH 02/11] KVM: selftests: Drop pointless vm_create wrapper Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 04/11] KVM: selftests: Use a single binary for dirty/clear log test Andrew Jones
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

From: Peter Xu <peterx@redhat.com>

We don't clear the dirty bitmap beforehand because KVM_GET_DIRTY_LOG
will clear it for us before copying the dirty log onto it.  However,
it's still better to clear it explicitly instead of assuming the
kernel will always do it for us.

More importantly, in the upcoming dirty ring tests we'll start to
fetch dirty pages from a ring buffer, so no one is going to clear the
dirty bitmap for us.
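
For illustration, here is a minimal standalone sketch of the semantics
the test now relies on.  The helper below is hand-rolled for the
example; the selftest itself uses test_and_clear_bit_le() from
tools/include:

  #include <stdio.h>

  /* Hand-rolled test-and-clear, for illustration only. */
  static int test_and_clear_bit(unsigned long *bmap, unsigned int bit)
  {
          unsigned long mask = 1UL << (bit % (8 * sizeof(*bmap)));
          unsigned long *word = &bmap[bit / (8 * sizeof(*bmap))];
          int old = !!(*word & mask);

          *word &= ~mask;
          return old;
  }

  int main(void)
  {
          unsigned long bmap[1] = { (1UL << 3) | (1UL << 7) };
          unsigned int page;

          /* First pass reports pages 3 and 7 and clears their bits. */
          for (page = 0; page < 32; page++)
                  if (test_and_clear_bit(bmap, page))
                          printf("page %u was dirty\n", page);

          /* Second pass sees a clean bitmap, whoever cleared it. */
          for (page = 0; page < 32; page++)
                  if (test_and_clear_bit(bmap, page))
                          printf("bug: page %u still dirty\n", page);
          return 0;
  }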

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 37445aa4876f..954331c2fda2 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -195,7 +195,7 @@ static void vm_dirty_log_verify(enum vm_guest_mode mode, unsigned long *bmap)
 				    page);
 		}
 
-		if (test_bit_le(page, bmap)) {
+		if (test_and_clear_bit_le(page, bmap)) {
 			host_dirty_count++;
 			/*
 			 * If the bit is set, the value written onto
-- 
2.26.2



* [PATCH 04/11] KVM: selftests: Use a single binary for dirty/clear log test
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (2 preceding siblings ...)
  2020-11-04 21:23 ` [PATCH 03/11] KVM: selftests: Always clear dirty bitmap after iteration Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 05/11] KVM: selftests: Introduce after_vcpu_run hook for dirty " Andrew Jones
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

From: Peter Xu <peterx@redhat.com>

Remove the clear_dirty_log test and merge it into the existing
dirty_log_test.  It is cleaner to use a single binary for both tests,
and this also prepares for the upcoming dirty ring test.

The default behavior is to run all supported modes in sequence.
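
The merge works by dispatching through a small table of per-mode
function pointers.  A self-contained sketch of the pattern follows;
the names below are stand-ins, not the test's real helpers:

  #include <stdio.h>
  #include <stdbool.h>

  /* Stand-ins for the real supported()/collect helpers. */
  static void get_log(void)           { puts("KVM_GET_DIRTY_LOG"); }
  static void get_and_clear_log(void) { puts("KVM_GET+CLEAR_DIRTY_LOG"); }
  static bool clear_supported(void)   { return false; /* kvm_check_cap() in the test */ }

  static struct mode {
          const char *name;
          bool (*supported)(void);        /* NULL means always supported */
          void (*collect)(void);
  } modes[] = {
          { "dirty-log", NULL,            get_log },
          { "clear-log", clear_supported, get_and_clear_log },
  };

  int main(void)
  {
          unsigned int i;

          /* The default: run every mode in sequence, skipping unsupported ones. */
          for (i = 0; i < sizeof(modes) / sizeof(modes[0]); i++) {
                  if (modes[i].supported && !modes[i].supported()) {
                          printf("skipping %s\n", modes[i].name);
                          continue;
                  }
                  printf("running %s\n", modes[i].name);
                  modes[i].collect();
          }
          return 0;
  }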

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 -
 tools/testing/selftests/kvm/Makefile          |   2 -
 .../selftests/kvm/clear_dirty_log_test.c      |   6 -
 tools/testing/selftests/kvm/dirty_log_test.c  | 187 +++++++++++++++---
 4 files changed, 156 insertions(+), 40 deletions(-)
 delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 06c71dc2f7e2..f44eea60c788 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -22,7 +22,6 @@
 /x86_64/vmx_set_nested_state_test
 /x86_64/vmx_tsc_adjust_test
 /x86_64/xss_msr_test
-/clear_dirty_log_test
 /demand_paging_test
 /dirty_log_test
 /kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 30afbad36cd5..795bd8e6623b 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -58,14 +58,12 @@ TEST_GEN_PROGS_x86_64 += x86_64/xss_msr_test
 TEST_GEN_PROGS_x86_64 += x86_64/debug_regs
 TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/user_msr_test
-TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
 TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
 
-TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/clear_dirty_log_test.c b/tools/testing/selftests/kvm/clear_dirty_log_test.c
deleted file mode 100644
index 11672ec6f74e..000000000000
--- a/tools/testing/selftests/kvm/clear_dirty_log_test.c
+++ /dev/null
@@ -1,6 +0,0 @@
-#define USE_CLEAR_DIRTY_LOG
-#define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0)
-#define KVM_DIRTY_LOG_INITIALLY_SET         (1 << 1)
-#define KVM_DIRTY_LOG_MANUAL_CAPS   (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
-		KVM_DIRTY_LOG_INITIALLY_SET)
-#include "dirty_log_test.c"
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 954331c2fda2..229648565980 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -128,6 +128,78 @@ static uint64_t host_dirty_count;
 static uint64_t host_clear_count;
 static uint64_t host_track_next_count;
 
+enum log_mode_t {
+	/* Only use KVM_GET_DIRTY_LOG for logging */
+	LOG_MODE_DIRTY_LOG = 0,
+
+	/* Use both KVM_[GET|CLEAR]_DIRTY_LOG for logging */
+	LOG_MODE_CLEAR_LOG = 1,
+
+	LOG_MODE_NUM,
+
+	/* Run all supported modes */
+	LOG_MODE_ALL = LOG_MODE_NUM,
+};
+
+/* Mode of logging to test.  Default is to run all supported modes */
+static enum log_mode_t host_log_mode_option = LOG_MODE_ALL;
+/* Logging mode for current run */
+static enum log_mode_t host_log_mode;
+
+static bool clear_log_supported(void)
+{
+	return kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
+}
+
+static void clear_log_create_vm_done(struct kvm_vm *vm)
+{
+	struct kvm_enable_cap cap = {};
+	u64 manual_caps;
+
+	manual_caps = kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
+	TEST_ASSERT(manual_caps, "MANUAL_CAPS is zero!");
+	manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
+			KVM_DIRTY_LOG_INITIALLY_SET);
+	cap.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2;
+	cap.args[0] = manual_caps;
+	vm_enable_cap(vm, &cap);
+}
+
+static void dirty_log_collect_dirty_pages(struct kvm_vm *vm, int slot,
+					  void *bitmap, uint32_t num_pages)
+{
+	kvm_vm_get_dirty_log(vm, slot, bitmap);
+}
+
+static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot,
+					  void *bitmap, uint32_t num_pages)
+{
+	kvm_vm_get_dirty_log(vm, slot, bitmap);
+	kvm_vm_clear_dirty_log(vm, slot, bitmap, 0, num_pages);
+}
+
+struct log_mode {
+	const char *name;
+	/* Return true if this mode is supported, otherwise false */
+	bool (*supported)(void);
+	/* Hook when the vm creation is done (before vcpu creation) */
+	void (*create_vm_done)(struct kvm_vm *vm);
+	/* Hook to collect the dirty pages into the bitmap provided */
+	void (*collect_dirty_pages)(struct kvm_vm *vm, int slot,
+				    void *bitmap, uint32_t num_pages);
+} log_modes[LOG_MODE_NUM] = {
+	{
+		.name = "dirty-log",
+		.collect_dirty_pages = dirty_log_collect_dirty_pages,
+	},
+	{
+		.name = "clear-log",
+		.supported = clear_log_supported,
+		.create_vm_done = clear_log_create_vm_done,
+		.collect_dirty_pages = clear_log_collect_dirty_pages,
+	},
+};
+
 /*
  * We use this bitmap to track some pages that should have its dirty
  * bit set in the _next_ iteration.  For example, if we detected the
@@ -137,6 +209,44 @@ static uint64_t host_track_next_count;
  */
 static unsigned long *host_bmap_track;
 
+static void log_modes_dump(void)
+{
+	int i;
+
+	printf("all");
+	for (i = 0; i < LOG_MODE_NUM; i++)
+		printf(", %s", log_modes[i].name);
+	printf("\n");
+}
+
+static bool log_mode_supported(void)
+{
+	struct log_mode *mode = &log_modes[host_log_mode];
+
+	if (mode->supported)
+		return mode->supported();
+
+	return true;
+}
+
+static void log_mode_create_vm_done(struct kvm_vm *vm)
+{
+	struct log_mode *mode = &log_modes[host_log_mode];
+
+	if (mode->create_vm_done)
+		mode->create_vm_done(vm);
+}
+
+static void log_mode_collect_dirty_pages(struct kvm_vm *vm, int slot,
+					 void *bitmap, uint32_t num_pages)
+{
+	struct log_mode *mode = &log_modes[host_log_mode];
+
+	TEST_ASSERT(mode->collect_dirty_pages != NULL,
+		    "collect_dirty_pages() is required for any log mode!");
+	mode->collect_dirty_pages(vm, slot, bitmap, num_pages);
+}
+
 static void generate_random_array(uint64_t *guest_array, uint64_t size)
 {
 	uint64_t i;
@@ -257,6 +367,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 #ifdef __x86_64__
 	vm_create_irqchip(vm);
 #endif
+	log_mode_create_vm_done(vm);
 	vm_vcpu_add_default(vm, vcpuid, guest_code);
 	return vm;
 }
@@ -264,10 +375,6 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 #define DIRTY_MEM_BITS 30 /* 1G */
 #define PAGE_SHIFT_4K  12
 
-#ifdef USE_CLEAR_DIRTY_LOG
-static u64 dirty_log_manual_caps;
-#endif
-
 static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 		     unsigned long interval, uint64_t phys_offset)
 {
@@ -275,6 +382,12 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	struct kvm_vm *vm;
 	unsigned long *bmap;
 
+	if (!log_mode_supported()) {
+		print_skip("Log mode '%s' not supported",
+			   log_modes[host_log_mode].name);
+		return;
+	}
+
 	/*
 	 * We reserve page table for 2 times of extra dirty mem which
 	 * will definitely cover the original (1G+) test range.  Here
@@ -317,14 +430,6 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	bmap = bitmap_alloc(host_num_pages);
 	host_bmap_track = bitmap_alloc(host_num_pages);
 
-#ifdef USE_CLEAR_DIRTY_LOG
-	struct kvm_enable_cap cap = {};
-
-	cap.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2;
-	cap.args[0] = dirty_log_manual_caps;
-	vm_enable_cap(vm, &cap);
-#endif
-
 	/* Add an extra memory slot for testing dirty logging */
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
 				    guest_test_phys_mem,
@@ -362,11 +467,8 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	while (iteration < iterations) {
 		/* Give the vcpu thread some time to dirty some pages */
 		usleep(interval * 1000);
-		kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap);
-#ifdef USE_CLEAR_DIRTY_LOG
-		kvm_vm_clear_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap, 0,
-				       host_num_pages);
-#endif
+		log_mode_collect_dirty_pages(vm, TEST_MEM_SLOT_INDEX,
+					     bmap, host_num_pages);
 		vm_dirty_log_verify(mode, bmap);
 		iteration++;
 		sync_global_to_guest(vm, iteration);
@@ -410,6 +512,9 @@ static void help(char *name)
 	       TEST_HOST_LOOP_INTERVAL);
 	printf(" -p: specify guest physical test memory offset\n"
 	       "     Warning: a low offset can conflict with the loaded test code.\n");
+	printf(" -M: specify the host logging mode "
+	       "(default: run all log modes).  Supported modes:\n\t");
+	log_modes_dump();
 	printf(" -m: specify the guest mode ID to test "
 	       "(default: test all supported modes)\n"
 	       "     This option may be used multiple times.\n"
@@ -429,18 +534,7 @@ int main(int argc, char *argv[])
 	bool mode_selected = false;
 	uint64_t phys_offset = 0;
 	unsigned int mode;
-	int opt, i;
-
-#ifdef USE_CLEAR_DIRTY_LOG
-	dirty_log_manual_caps =
-		kvm_check_cap(KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
-	if (!dirty_log_manual_caps) {
-		print_skip("KVM_CLEAR_DIRTY_LOG not available");
-		exit(KSFT_SKIP);
-	}
-	dirty_log_manual_caps &= (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
-				  KVM_DIRTY_LOG_INITIALLY_SET);
-#endif
+	int opt, i, j;
 
 #ifdef __x86_64__
 	guest_mode_init(VM_MODE_PXXV48_4K, true, true);
@@ -464,7 +558,7 @@ int main(int argc, char *argv[])
 	guest_mode_init(VM_MODE_P40V48_4K, true, true);
 #endif
 
-	while ((opt = getopt(argc, argv, "hi:I:p:m:")) != -1) {
+	while ((opt = getopt(argc, argv, "hi:I:p:m:M:")) != -1) {
 		switch (opt) {
 		case 'i':
 			iterations = strtol(optarg, NULL, 10);
@@ -486,6 +580,26 @@ int main(int argc, char *argv[])
 				    "Guest mode ID %d too big", mode);
 			guest_modes[mode].enabled = true;
 			break;
+		case 'M':
+			if (!strcmp(optarg, "all")) {
+				host_log_mode_option = LOG_MODE_ALL;
+				break;
+			}
+			for (i = 0; i < LOG_MODE_NUM; i++) {
+				if (!strcmp(optarg, log_modes[i].name)) {
+					pr_info("Setting log mode to: '%s'\n",
+						optarg);
+					host_log_mode_option = i;
+					break;
+				}
+			}
+			if (i == LOG_MODE_NUM) {
+				printf("Log mode '%s' invalid. Please choose "
+				       "from: ", optarg);
+				log_modes_dump();
+				exit(1);
+			}
+			break;
 		case 'h':
 		default:
 			help(argv[0]);
@@ -507,7 +621,18 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(guest_modes[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, iterations, interval, phys_offset);
+		if (host_log_mode_option == LOG_MODE_ALL) {
+			/* Run each log mode */
+			for (j = 0; j < LOG_MODE_NUM; j++) {
+				pr_info("Testing Log Mode '%s'\n",
+					log_modes[j].name);
+				host_log_mode = j;
+				run_test(i, iterations, interval, phys_offset);
+			}
+		} else {
+			host_log_mode = host_log_mode_option;
+			run_test(i, iterations, interval, phys_offset);
+		}
 	}
 
 	return 0;
-- 
2.26.2



* [PATCH 05/11] KVM: selftests: Introduce after_vcpu_run hook for dirty log test
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (3 preceding siblings ...)
  2020-11-04 21:23 ` [PATCH 04/11] KVM: selftests: Use a single binary for dirty/clear log test Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 06/11] KVM: selftests: Make the per vcpu memory size global Andrew Jones
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

From: Peter Xu <peterx@redhat.com>

Provide a hook for the checks done after vcpu_run() completes.  This
is preparation for the dirty ring test, which will need to handle an
additional exit reason.
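
A self-contained sketch of why an optional per-mode hook helps; the
ring-full case below is hypothetical at this point and only arrives
with the later dirty ring series:

  #include <assert.h>
  #include <stdio.h>

  enum exit_kind { EXIT_UCALL_SYNC, EXIT_RING_FULL };

  /* Default hook: anything other than a sync ucall fails the test. */
  static void default_after_vcpu_run(enum exit_kind e)
  {
          assert(e == EXIT_UCALL_SYNC);
  }

  /* A later mode can install a more permissive hook instead. */
  static void ring_after_vcpu_run(enum exit_kind e)
  {
          if (e == EXIT_RING_FULL)
                  puts("collect the dirty ring, then continue");
  }

  int main(void)
  {
          void (*after_vcpu_run)(enum exit_kind) = default_after_vcpu_run;

          after_vcpu_run(EXIT_UCALL_SYNC);        /* dirty-log/clear-log */
          after_vcpu_run = ring_after_vcpu_run;   /* dirty-ring, later */
          after_vcpu_run(EXIT_RING_FULL);
          return 0;
  }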

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 36 +++++++++++++-------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 229648565980..5c7aac88b3fa 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -178,6 +178,15 @@ static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot,
 	kvm_vm_clear_dirty_log(vm, slot, bitmap, 0, num_pages);
 }
 
+static void default_after_vcpu_run(struct kvm_vm *vm)
+{
+	struct kvm_run *run = vcpu_state(vm, VCPU_ID);
+
+	TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC,
+		    "Invalid guest sync status: exit_reason=%s\n",
+		    exit_reason_str(run->exit_reason));
+}
+
 struct log_mode {
 	const char *name;
 	/* Return true if this mode is supported, otherwise false */
@@ -187,16 +196,20 @@ struct log_mode {
 	/* Hook to collect the dirty pages into the bitmap provided */
 	void (*collect_dirty_pages)(struct kvm_vm *vm, int slot,
 				    void *bitmap, uint32_t num_pages);
+	/* Hook to call after each vcpu run */
+	void (*after_vcpu_run)(struct kvm_vm *vm);
 } log_modes[LOG_MODE_NUM] = {
 	{
 		.name = "dirty-log",
 		.collect_dirty_pages = dirty_log_collect_dirty_pages,
+		.after_vcpu_run = default_after_vcpu_run,
 	},
 	{
 		.name = "clear-log",
 		.supported = clear_log_supported,
 		.create_vm_done = clear_log_create_vm_done,
 		.collect_dirty_pages = clear_log_collect_dirty_pages,
+		.after_vcpu_run = default_after_vcpu_run,
 	},
 };
 
@@ -247,6 +260,14 @@ static void log_mode_collect_dirty_pages(struct kvm_vm *vm, int slot,
 	mode->collect_dirty_pages(vm, slot, bitmap, num_pages);
 }
 
+static void log_mode_after_vcpu_run(struct kvm_vm *vm)
+{
+	struct log_mode *mode = &log_modes[host_log_mode];
+
+	if (mode->after_vcpu_run)
+		mode->after_vcpu_run(vm);
+}
+
 static void generate_random_array(uint64_t *guest_array, uint64_t size)
 {
 	uint64_t i;
@@ -261,25 +282,16 @@ static void *vcpu_worker(void *data)
 	struct kvm_vm *vm = data;
 	uint64_t *guest_array;
 	uint64_t pages_count = 0;
-	struct kvm_run *run;
-
-	run = vcpu_state(vm, VCPU_ID);
 
 	guest_array = addr_gva2hva(vm, (vm_vaddr_t)random_array);
-	generate_random_array(guest_array, TEST_PAGES_PER_LOOP);
 
 	while (!READ_ONCE(host_quit)) {
+		generate_random_array(guest_array, TEST_PAGES_PER_LOOP);
+		pages_count += TEST_PAGES_PER_LOOP;
 		/* Let the guest dirty the random pages */
 		ret = _vcpu_run(vm, VCPU_ID);
 		TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
-		if (get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC) {
-			pages_count += TEST_PAGES_PER_LOOP;
-			generate_random_array(guest_array, TEST_PAGES_PER_LOOP);
-		} else {
-			TEST_FAIL("Invalid guest sync status: "
-				  "exit_reason=%s\n",
-				  exit_reason_str(run->exit_reason));
-		}
+		log_mode_after_vcpu_run(vm);
 	}
 
 	pr_info("Dirtied %"PRIu64" pages\n", pages_count);
-- 
2.26.2



* [PATCH 06/11] KVM: selftests: Make the per vcpu memory size global
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (4 preceding siblings ...)
  2020-11-04 21:23 ` [PATCH 05/11] KVM: selftests: Introduce after_vcpu_run hook for dirty " Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 07/11] KVM: selftests: Make the number of vcpus global Andrew Jones
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

Rename vcpu_memory_bytes to guest_percpu_mem_size to make it less
ambiguous, and make it global to simplify the function interfaces.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../selftests/kvm/demand_paging_test.c        | 36 +++++++++----------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index d1fe6c8e595b..89699652c34d 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -32,8 +32,7 @@
 
 /* Default guest test virtual memory offset */
 #define DEFAULT_GUEST_TEST_MEM		0xc0000000
-
-#define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
+#define DEFAULT_PER_VCPU_MEM_SIZE	(1 << 30) /* 1G */
 
 #ifdef PRINT_PER_PAGE_UPDATES
 #define PER_PAGE_DEBUG(...) printf(__VA_ARGS__)
@@ -72,6 +71,7 @@ static uint64_t guest_test_phys_mem;
  * Must not conflict with identity mapped test code.
  */
 static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
 
 struct vcpu_args {
 	uint64_t gva;
@@ -145,7 +145,7 @@ static void *vcpu_worker(void *data)
 #define PTES_PER_4K_PT 512
 
 static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
-				uint64_t vcpu_memory_bytes)
+				uint64_t guest_percpu_mem_size)
 {
 	struct kvm_vm *vm;
 	uint64_t pages = DEFAULT_GUEST_PHY_PAGES;
@@ -160,7 +160,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
 	 * page size guest will need even less memory for page tables).
 	 */
 	pages += (2 * pages) / PTES_PER_4K_PT;
-	pages += ((2 * vcpus * vcpu_memory_bytes) >> PAGE_SHIFT_4K) /
+	pages += ((2 * vcpus * guest_percpu_mem_size) >> PAGE_SHIFT_4K) /
 		 PTES_PER_4K_PT;
 	pages = vm_adjust_num_guest_pages(mode, pages);
 
@@ -351,8 +351,7 @@ static int setup_demand_paging(struct kvm_vm *vm,
 }
 
 static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay, int vcpus,
-		     uint64_t vcpu_memory_bytes)
+		     useconds_t uffd_delay, int vcpus)
 {
 	pthread_t *vcpu_threads;
 	pthread_t *uffd_handler_threads = NULL;
@@ -364,14 +363,14 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	int vcpu_id;
 	int r;
 
-	vm = create_vm(mode, vcpus, vcpu_memory_bytes);
+	vm = create_vm(mode, vcpus, guest_percpu_mem_size);
 
 	guest_page_size = vm_get_page_size(vm);
 
-	TEST_ASSERT(vcpu_memory_bytes % guest_page_size == 0,
+	TEST_ASSERT(guest_percpu_mem_size % guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
 
-	guest_num_pages = (vcpus * vcpu_memory_bytes) / guest_page_size;
+	guest_num_pages = (vcpus * guest_percpu_mem_size) / guest_page_size;
 	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
 
 	/*
@@ -382,10 +381,10 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 		    "Requested more guest memory than address space allows.\n"
 		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
 		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
-		    vcpu_memory_bytes);
+		    guest_percpu_mem_size);
 
 	host_page_size = getpagesize();
-	TEST_ASSERT(vcpu_memory_bytes % host_page_size == 0,
+	TEST_ASSERT(guest_percpu_mem_size % host_page_size == 0,
 		    "Guest memory size is not host page size aligned.");
 
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
@@ -436,9 +435,9 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 		vm_vcpu_add_default(vm, vcpu_id, guest_code);
 
-		vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_memory_bytes);
+		vcpu_gpa = guest_test_phys_mem + (vcpu_id * guest_percpu_mem_size);
 		PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
-			       vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_memory_bytes);
+			       vcpu_id, vcpu_gpa, vcpu_gpa + guest_percpu_mem_size);
 
 		/* Cache the HVA pointer of the region */
 		vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
@@ -456,7 +455,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 						&uffd_handler_threads[vcpu_id],
 						pipefds[vcpu_id * 2],
 						uffd_delay, &uffd_args[vcpu_id],
-						vcpu_hva, vcpu_memory_bytes);
+						vcpu_hva, guest_percpu_mem_size);
 			if (r < 0)
 				exit(-r);
 		}
@@ -468,8 +467,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 		vcpu_args[vcpu_id].vm = vm;
 		vcpu_args[vcpu_id].vcpu_id = vcpu_id;
 		vcpu_args[vcpu_id].gva = guest_test_virt_mem +
-					 (vcpu_id * vcpu_memory_bytes);
-		vcpu_args[vcpu_id].pages = vcpu_memory_bytes / guest_page_size;
+					 (vcpu_id * guest_percpu_mem_size);
+		vcpu_args[vcpu_id].pages = guest_percpu_mem_size / guest_page_size;
 	}
 
 	/* Export the shared variables to the guest */
@@ -569,7 +568,6 @@ static void help(char *name)
 int main(int argc, char *argv[])
 {
 	bool mode_selected = false;
-	uint64_t vcpu_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE;
 	int vcpus = 1;
 	unsigned int mode;
 	int opt, i;
@@ -619,7 +617,7 @@ int main(int argc, char *argv[])
 				    "A negative UFFD delay is not supported.");
 			break;
 		case 'b':
-			vcpu_memory_bytes = parse_size(optarg);
+			guest_percpu_mem_size = parse_size(optarg);
 			break;
 		case 'v':
 			vcpus = atoi(optarg);
@@ -642,7 +640,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(guest_modes[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd, uffd_delay, vcpus, vcpu_memory_bytes);
+		run_test(i, use_uffd, uffd_delay, vcpus);
 	}
 
 	return 0;
-- 
2.26.2



* [PATCH 07/11] KVM: selftests: Make the number of vcpus global
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (5 preceding siblings ...)
  2020-11-04 21:23 ` [PATCH 06/11] KVM: selftests: Make the per vcpu memory size global Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 08/11] KVM: selftests: Factor out guest mode code Andrew Jones
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

Make the number of vcpus global so it no longer needs to be passed
around, and also validate the requested number of vcpus against the
maximum supported (KVM_CAP_MAX_VCPUS).
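
A small sketch of that validation; kvm_check_cap() is stubbed out
here, whereas the test queries KVM_CAP_MAX_VCPUS through the real
helper:

  #include <stdio.h>
  #include <stdlib.h>

  /* Stub; the test issues a real KVM_CHECK_EXTENSION ioctl instead. */
  static int kvm_check_cap_stub(void)
  {
          return 64;      /* placeholder value, for illustration only */
  }

  int main(int argc, char *argv[])
  {
          int max_vcpus = kvm_check_cap_stub();
          int nr_vcpus = argc > 1 ? atoi(argv[1]) : 1;

          if (nr_vcpus <= 0 || nr_vcpus > max_vcpus) {
                  fprintf(stderr, "Invalid number of vcpus, must be "
                          "between 1 and %d\n", max_vcpus);
                  return 1;
          }
          printf("running with %d vcpu(s)\n", nr_vcpus);
          return 0;
  }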

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../selftests/kvm/demand_paging_test.c        | 40 +++++++++----------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 89699652c34d..27c613f06ade 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -73,6 +73,9 @@ static uint64_t guest_test_phys_mem;
 static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
 static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
 
+/* Number of VCPUs for the test */
+static int nr_vcpus = 1;
+
 struct vcpu_args {
 	uint64_t gva;
 	uint64_t pages;
@@ -351,7 +354,7 @@ static int setup_demand_paging(struct kvm_vm *vm,
 }
 
 static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay, int vcpus)
+		     useconds_t uffd_delay)
 {
 	pthread_t *vcpu_threads;
 	pthread_t *uffd_handler_threads = NULL;
@@ -363,14 +366,14 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	int vcpu_id;
 	int r;
 
-	vm = create_vm(mode, vcpus, guest_percpu_mem_size);
+	vm = create_vm(mode, nr_vcpus, guest_percpu_mem_size);
 
 	guest_page_size = vm_get_page_size(vm);
 
 	TEST_ASSERT(guest_percpu_mem_size % guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
 
-	guest_num_pages = (vcpus * guest_percpu_mem_size) / guest_page_size;
+	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) / guest_page_size;
 	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
 
 	/*
@@ -380,7 +383,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
 		    "Requested more guest memory than address space allows.\n"
 		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
-		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
+		    guest_num_pages, vm_get_max_gfn(vm), nr_vcpus,
 		    guest_percpu_mem_size);
 
 	host_page_size = getpagesize();
@@ -414,22 +417,22 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 		    "Failed to allocate buffer for guest data pattern");
 	memset(guest_data_prototype, 0xAB, host_page_size);
 
-	vcpu_threads = malloc(vcpus * sizeof(*vcpu_threads));
+	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
 	if (use_uffd) {
 		uffd_handler_threads =
-			malloc(vcpus * sizeof(*uffd_handler_threads));
+			malloc(nr_vcpus * sizeof(*uffd_handler_threads));
 		TEST_ASSERT(uffd_handler_threads, "Memory allocation failed");
 
-		uffd_args = malloc(vcpus * sizeof(*uffd_args));
+		uffd_args = malloc(nr_vcpus * sizeof(*uffd_args));
 		TEST_ASSERT(uffd_args, "Memory allocation failed");
 
-		pipefds = malloc(sizeof(int) * vcpus * 2);
+		pipefds = malloc(sizeof(int) * nr_vcpus * 2);
 		TEST_ASSERT(pipefds, "Unable to allocate memory for pipefd");
 	}
 
-	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+	for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
 		vm_paddr_t vcpu_gpa;
 		void *vcpu_hva;
 
@@ -480,7 +483,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	clock_gettime(CLOCK_MONOTONIC, &start);
 
-	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+	for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
 		pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
 			       &vcpu_args[vcpu_id]);
 	}
@@ -488,7 +491,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	pr_info("Started all vCPUs\n");
 
 	/* Wait for the vcpu threads to quit */
-	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+	for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
 		pthread_join(vcpu_threads[vcpu_id], NULL);
 		PER_VCPU_DEBUG("Joined thread for vCPU %d\n", vcpu_id);
 	}
@@ -501,7 +504,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 		char c;
 
 		/* Tell the user fault fd handler threads to quit */
-		for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+		for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
 			r = write(pipefds[vcpu_id * 2 + 1], &c, 1);
 			TEST_ASSERT(r == 1, "Unable to write to pipefd");
 
@@ -567,8 +570,8 @@ static void help(char *name)
 
 int main(int argc, char *argv[])
 {
+	int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
 	bool mode_selected = false;
-	int vcpus = 1;
 	unsigned int mode;
 	int opt, i;
 	bool use_uffd = false;
@@ -620,12 +623,9 @@ int main(int argc, char *argv[])
 			guest_percpu_mem_size = parse_size(optarg);
 			break;
 		case 'v':
-			vcpus = atoi(optarg);
-			TEST_ASSERT(vcpus > 0,
-				    "Must have a positive number of vCPUs");
-			TEST_ASSERT(vcpus <= MAX_VCPUS,
-				    "This test does not currently support\n"
-				    "more than %d vCPUs.", MAX_VCPUS);
+			nr_vcpus = atoi(optarg);
+			TEST_ASSERT(nr_vcpus > 0 && nr_vcpus <= max_vcpus,
+				    "Invalid number of vcpus, must be between 1 and %d", max_vcpus);
 			break;
 		case 'h':
 		default:
@@ -640,7 +640,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(guest_modes[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd, uffd_delay, vcpus);
+		run_test(i, use_uffd, uffd_delay);
 	}
 
 	return 0;
-- 
2.26.2



* [PATCH 08/11] KVM: selftests: Factor out guest mode code
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (6 preceding siblings ...)
  2020-11-04 21:23 ` [PATCH 07/11] KVM: selftests: Make the number of vcpus global Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 09/11] KVM: selftests: Make vm_create_default common Andrew Jones
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

demand_paging_test and dirty_log_test have redundant guest mode
code. Factor it out.
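
With the factored-out code, each test's main() reduces to roughly the
shape below.  This is a sketch only, assuming the selftests build
environment; test_params and run_test are placeholders for the
per-test bits:

  #include <unistd.h>
  #include "guest_modes.h"        /* the new header from this patch */

  struct test_params { int unused; };

  static void run_test(enum vm_guest_mode mode, void *arg)
  {
          struct test_params *p = arg;

          (void)p;        /* per-mode test body goes here */
  }

  int main(int argc, char *argv[])
  {
          struct test_params p = {};
          int opt;

          guest_modes_append_default();

          while ((opt = getopt(argc, argv, "m:")) != -1) {
                  switch (opt) {
                  case 'm':
                          guest_modes_cmdline(optarg);
                          break;
                  }
          }

          for_each_guest_mode(run_test, &p);
          return 0;
  }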

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/Makefile          |   2 +-
 .../selftests/kvm/demand_paging_test.c        |  95 ++++----------
 tools/testing/selftests/kvm/dirty_log_test.c  | 121 ++++++------------
 .../selftests/kvm/include/guest_modes.h       |  21 +++
 tools/testing/selftests/kvm/lib/guest_modes.c |  70 ++++++++++
 5 files changed, 150 insertions(+), 159 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/guest_modes.h
 create mode 100644 tools/testing/selftests/kvm/lib/guest_modes.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 795bd8e6623b..211ba87530f3 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -33,7 +33,7 @@ ifeq ($(ARCH),s390)
 	UNAME_M := s390x
 endif
 
-LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c
+LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c
 LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c
 LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c
 LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 27c613f06ade..693e2c810f15 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -21,6 +21,7 @@
 #include <linux/bitops.h>
 #include <linux/userfaultfd.h>
 
+#include "guest_modes.h"
 #include "test_util.h"
 #include "kvm_util.h"
 #include "processor.h"
@@ -353,9 +354,14 @@ static int setup_demand_paging(struct kvm_vm *vm,
 	return 0;
 }
 
-static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay)
+struct test_params {
+	bool use_uffd;
+	useconds_t uffd_delay;
+};
+
+static void run_test(enum vm_guest_mode mode, void *arg)
 {
+	struct test_params *p = arg;
 	pthread_t *vcpu_threads;
 	pthread_t *uffd_handler_threads = NULL;
 	struct uffd_handler_args *uffd_args = NULL;
@@ -420,7 +426,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
-	if (use_uffd) {
+	if (p->use_uffd) {
 		uffd_handler_threads =
 			malloc(nr_vcpus * sizeof(*uffd_handler_threads));
 		TEST_ASSERT(uffd_handler_threads, "Memory allocation failed");
@@ -445,7 +451,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 		/* Cache the HVA pointer of the region */
 		vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
 
-		if (use_uffd) {
+		if (p->use_uffd) {
 			/*
 			 * Set up user fault fd to handle demand paging
 			 * requests.
@@ -457,7 +463,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 			r = setup_demand_paging(vm,
 						&uffd_handler_threads[vcpu_id],
 						pipefds[vcpu_id * 2],
-						uffd_delay, &uffd_args[vcpu_id],
+						p->uffd_delay, &uffd_args[vcpu_id],
 						vcpu_hva, guest_percpu_mem_size);
 			if (r < 0)
 				exit(-r);
@@ -500,7 +506,7 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	clock_gettime(CLOCK_MONOTONIC, &end);
 
-	if (use_uffd) {
+	if (p->use_uffd) {
 		char c;
 
 		/* Tell the user fault fd handler threads to quit */
@@ -523,38 +529,19 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	free(guest_data_prototype);
 	free(vcpu_threads);
-	if (use_uffd) {
+	if (p->use_uffd) {
 		free(uffd_handler_threads);
 		free(uffd_args);
 		free(pipefds);
 	}
 }
 
-struct guest_mode {
-	bool supported;
-	bool enabled;
-};
-static struct guest_mode guest_modes[NUM_VM_MODES];
-
-#define guest_mode_init(mode, supported, enabled) ({ \
-	guest_modes[mode] = (struct guest_mode){ supported, enabled }; \
-})
-
 static void help(char *name)
 {
-	int i;
-
 	puts("");
 	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
 	       "          [-b memory] [-v vcpus]\n", name);
-	printf(" -m: specify the guest mode ID to test\n"
-	       "     (default: test all supported modes)\n"
-	       "     This option may be used multiple times.\n"
-	       "     Guest mode IDs:\n");
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
-		       guest_modes[i].supported ? " (supported)" : "");
-	}
+	guest_modes_help();
 	printf(" -u: use User Fault FD to handle vCPU page\n"
 	       "     faults.\n");
 	printf(" -d: add a delay in usec to the User Fault\n"
@@ -571,53 +558,22 @@ static void help(char *name)
 int main(int argc, char *argv[])
 {
 	int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
-	bool mode_selected = false;
-	unsigned int mode;
-	int opt, i;
-	bool use_uffd = false;
-	useconds_t uffd_delay = 0;
+	struct test_params p = {};
+	int opt;
 
-#ifdef __x86_64__
-	guest_mode_init(VM_MODE_PXXV48_4K, true, true);
-#endif
-#ifdef __aarch64__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-	guest_mode_init(VM_MODE_P40V48_64K, true, true);
-	{
-		unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
-
-		if (limit >= 52)
-			guest_mode_init(VM_MODE_P52V48_64K, true, true);
-		if (limit >= 48) {
-			guest_mode_init(VM_MODE_P48V48_4K, true, true);
-			guest_mode_init(VM_MODE_P48V48_64K, true, true);
-		}
-	}
-#endif
-#ifdef __s390x__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-#endif
+	guest_modes_append_default();
 
 	while ((opt = getopt(argc, argv, "hm:ud:b:v:")) != -1) {
 		switch (opt) {
 		case 'm':
-			if (!mode_selected) {
-				for (i = 0; i < NUM_VM_MODES; ++i)
-					guest_modes[i].enabled = false;
-				mode_selected = true;
-			}
-			mode = strtoul(optarg, NULL, 10);
-			TEST_ASSERT(mode < NUM_VM_MODES,
-				    "Guest mode ID %d too big", mode);
-			guest_modes[mode].enabled = true;
+			guest_modes_cmdline(optarg);
 			break;
 		case 'u':
-			use_uffd = true;
+			p.use_uffd = true;
 			break;
 		case 'd':
-			uffd_delay = strtoul(optarg, NULL, 0);
-			TEST_ASSERT(uffd_delay >= 0,
-				    "A negative UFFD delay is not supported.");
+			p.uffd_delay = strtoul(optarg, NULL, 0);
+			TEST_ASSERT(p.uffd_delay >= 0, "A negative UFFD delay is not supported.");
 			break;
 		case 'b':
 			guest_percpu_mem_size = parse_size(optarg);
@@ -634,14 +590,7 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		if (!guest_modes[i].enabled)
-			continue;
-		TEST_ASSERT(guest_modes[i].supported,
-			    "Guest mode ID %d (%s) not supported.",
-			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd, uffd_delay);
-	}
+	for_each_guest_mode(run_test, &p);
 
 	return 0;
 }
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 5c7aac88b3fa..f2710c6a60bf 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -15,6 +15,7 @@
 #include <linux/bitmap.h>
 #include <linux/bitops.h>
 
+#include "guest_modes.h"
 #include "test_util.h"
 #include "kvm_util.h"
 #include "processor.h"
@@ -387,9 +388,15 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 #define DIRTY_MEM_BITS 30 /* 1G */
 #define PAGE_SHIFT_4K  12
 
-static void run_test(enum vm_guest_mode mode, unsigned long iterations,
-		     unsigned long interval, uint64_t phys_offset)
+struct test_params {
+	unsigned long iterations;
+	unsigned long interval;
+	uint64_t phys_offset;
+};
+
+static void run_test(enum vm_guest_mode mode, void *arg)
 {
+	struct test_params *p = arg;
 	pthread_t vcpu_thread;
 	struct kvm_vm *vm;
 	unsigned long *bmap;
@@ -424,12 +431,12 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	host_page_size = getpagesize();
 	host_num_pages = vm_num_host_pages(mode, guest_num_pages);
 
-	if (!phys_offset) {
+	if (!p->phys_offset) {
 		guest_test_phys_mem = (vm_get_max_gfn(vm) -
 				       guest_num_pages) * guest_page_size;
 		guest_test_phys_mem &= ~(host_page_size - 1);
 	} else {
-		guest_test_phys_mem = phys_offset;
+		guest_test_phys_mem = p->phys_offset;
 	}
 
 #ifdef __s390x__
@@ -476,9 +483,9 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 
 	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
 
-	while (iteration < iterations) {
+	while (iteration < p->iterations) {
 		/* Give the vcpu thread some time to dirty some pages */
-		usleep(interval * 1000);
+		usleep(p->interval * 1000);
 		log_mode_collect_dirty_pages(vm, TEST_MEM_SLOT_INDEX,
 					     bmap, host_num_pages);
 		vm_dirty_log_verify(mode, bmap);
@@ -500,20 +507,8 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	kvm_vm_free(vm);
 }
 
-struct guest_mode {
-	bool supported;
-	bool enabled;
-};
-static struct guest_mode guest_modes[NUM_VM_MODES];
-
-#define guest_mode_init(mode, supported, enabled) ({ \
-	guest_modes[mode] = (struct guest_mode){ supported, enabled }; \
-})
-
 static void help(char *name)
 {
-	int i;
-
 	puts("");
 	printf("usage: %s [-h] [-i iterations] [-I interval] "
 	       "[-p offset] [-m mode]\n", name);
@@ -527,70 +522,34 @@ static void help(char *name)
 	printf(" -M: specify the host logging mode "
 	       "(default: run all log modes).  Supported modes:\n\t");
 	log_modes_dump();
-	printf(" -m: specify the guest mode ID to test "
-	       "(default: test all supported modes)\n"
-	       "     This option may be used multiple times.\n"
-	       "     Guest mode IDs:\n");
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
-		       guest_modes[i].supported ? " (supported)" : "");
-	}
+	guest_modes_help();
 	puts("");
 	exit(0);
 }
 
 int main(int argc, char *argv[])
 {
-	unsigned long iterations = TEST_HOST_LOOP_N;
-	unsigned long interval = TEST_HOST_LOOP_INTERVAL;
-	bool mode_selected = false;
-	uint64_t phys_offset = 0;
-	unsigned int mode;
-	int opt, i, j;
-
-#ifdef __x86_64__
-	guest_mode_init(VM_MODE_PXXV48_4K, true, true);
-#endif
-#ifdef __aarch64__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-	guest_mode_init(VM_MODE_P40V48_64K, true, true);
+	struct test_params p = {
+		.iterations = TEST_HOST_LOOP_N,
+		.interval = TEST_HOST_LOOP_INTERVAL,
+	};
+	int opt, i;
 
-	{
-		unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
-
-		if (limit >= 52)
-			guest_mode_init(VM_MODE_P52V48_64K, true, true);
-		if (limit >= 48) {
-			guest_mode_init(VM_MODE_P48V48_4K, true, true);
-			guest_mode_init(VM_MODE_P48V48_64K, true, true);
-		}
-	}
-#endif
-#ifdef __s390x__
-	guest_mode_init(VM_MODE_P40V48_4K, true, true);
-#endif
+	guest_modes_append_default();
 
 	while ((opt = getopt(argc, argv, "hi:I:p:m:M:")) != -1) {
 		switch (opt) {
 		case 'i':
-			iterations = strtol(optarg, NULL, 10);
+			p.iterations = strtol(optarg, NULL, 10);
 			break;
 		case 'I':
-			interval = strtol(optarg, NULL, 10);
+			p.interval = strtol(optarg, NULL, 10);
 			break;
 		case 'p':
-			phys_offset = strtoull(optarg, NULL, 0);
+			p.phys_offset = strtoull(optarg, NULL, 0);
 			break;
 		case 'm':
-			if (!mode_selected) {
-				for (i = 0; i < NUM_VM_MODES; ++i)
-					guest_modes[i].enabled = false;
-				mode_selected = true;
-			}
-			mode = strtoul(optarg, NULL, 10);
-			TEST_ASSERT(mode < NUM_VM_MODES,
-				    "Guest mode ID %d too big", mode);
-			guest_modes[mode].enabled = true;
+			guest_modes_cmdline(optarg);
 			break;
 		case 'M':
 			if (!strcmp(optarg, "all")) {
@@ -619,32 +578,24 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	TEST_ASSERT(iterations > 2, "Iterations must be greater than two");
-	TEST_ASSERT(interval > 0, "Interval must be greater than zero");
+	TEST_ASSERT(p.iterations > 2, "Iterations must be greater than two");
+	TEST_ASSERT(p.interval > 0, "Interval must be greater than zero");
 
 	pr_info("Test iterations: %"PRIu64", interval: %"PRIu64" (ms)\n",
-		iterations, interval);
+		p.iterations, p.interval);
 
 	srandom(time(0));
 
-	for (i = 0; i < NUM_VM_MODES; ++i) {
-		if (!guest_modes[i].enabled)
-			continue;
-		TEST_ASSERT(guest_modes[i].supported,
-			    "Guest mode ID %d (%s) not supported.",
-			    i, vm_guest_mode_string(i));
-		if (host_log_mode_option == LOG_MODE_ALL) {
-			/* Run each log mode */
-			for (j = 0; j < LOG_MODE_NUM; j++) {
-				pr_info("Testing Log Mode '%s'\n",
-					log_modes[j].name);
-				host_log_mode = j;
-				run_test(i, iterations, interval, phys_offset);
-			}
-		} else {
-			host_log_mode = host_log_mode_option;
-			run_test(i, iterations, interval, phys_offset);
+	if (host_log_mode_option == LOG_MODE_ALL) {
+		/* Run each log mode */
+		for (i = 0; i < LOG_MODE_NUM; i++) {
+			pr_info("Testing Log Mode '%s'\n", log_modes[i].name);
+			host_log_mode = i;
+			for_each_guest_mode(run_test, &p);
 		}
+	} else {
+		host_log_mode = host_log_mode_option;
+		for_each_guest_mode(run_test, &p);
 	}
 
 	return 0;
diff --git a/tools/testing/selftests/kvm/include/guest_modes.h b/tools/testing/selftests/kvm/include/guest_modes.h
new file mode 100644
index 000000000000..b691df33e64e
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/guest_modes.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020, Red Hat, Inc.
+ */
+#include "kvm_util.h"
+
+struct guest_mode {
+	bool supported;
+	bool enabled;
+};
+
+extern struct guest_mode guest_modes[NUM_VM_MODES];
+
+#define guest_mode_append(mode, supported, enabled) ({ \
+	guest_modes[mode] = (struct guest_mode){ supported, enabled }; \
+})
+
+void guest_modes_append_default(void);
+void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *), void *arg);
+void guest_modes_help(void);
+void guest_modes_cmdline(const char *arg);
diff --git a/tools/testing/selftests/kvm/lib/guest_modes.c b/tools/testing/selftests/kvm/lib/guest_modes.c
new file mode 100644
index 000000000000..25bff307c71f
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/guest_modes.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020, Red Hat, Inc.
+ */
+#include "guest_modes.h"
+
+struct guest_mode guest_modes[NUM_VM_MODES];
+
+void guest_modes_append_default(void)
+{
+	guest_mode_append(VM_MODE_DEFAULT, true, true);
+
+#ifdef __aarch64__
+	guest_mode_append(VM_MODE_P40V48_64K, true, true);
+	{
+		unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
+		if (limit >= 52)
+			guest_mode_append(VM_MODE_P52V48_64K, true, true);
+		if (limit >= 48) {
+			guest_mode_append(VM_MODE_P48V48_4K, true, true);
+			guest_mode_append(VM_MODE_P48V48_64K, true, true);
+		}
+	}
+#endif
+}
+
+void for_each_guest_mode(void (*func)(enum vm_guest_mode, void *), void *arg)
+{
+	int i;
+
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		if (!guest_modes[i].enabled)
+			continue;
+		TEST_ASSERT(guest_modes[i].supported,
+			    "Guest mode ID %d (%s) not supported.",
+			    i, vm_guest_mode_string(i));
+		func(i, arg);
+	}
+}
+
+void guest_modes_help(void)
+{
+	int i;
+
+	printf(" -m: specify the guest mode ID to test\n"
+	       "     (default: test all supported modes)\n"
+	       "     This option may be used multiple times.\n"
+	       "     Guest mode IDs:\n");
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
+		       guest_modes[i].supported ? " (supported)" : "");
+	}
+}
+
+void guest_modes_cmdline(const char *arg)
+{
+	static bool mode_selected;
+	unsigned int mode;
+	int i;
+
+	if (!mode_selected) {
+		for (i = 0; i < NUM_VM_MODES; ++i)
+			guest_modes[i].enabled = false;
+		mode_selected = true;
+	}
+
+	mode = strtoul(arg, NULL, 10);
+	TEST_ASSERT(mode < NUM_VM_MODES, "Guest mode ID %d too big", mode);
+	guest_modes[mode].enabled = true;
+}
-- 
2.26.2



* [PATCH 09/11] KVM: selftests: Make vm_create_default common
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (7 preceding siblings ...)
  2020-11-04 21:23 ` [PATCH 08/11] KVM: selftests: Factor out guest mode code Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:36   ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 10/11] KVM: selftests: Introduce vm_create_[default_]vcpus Andrew Jones
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

The code is almost 100% the same anyway. Just move it to common
and add a few arch-specific helpers.
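
The one subtle bit is the page table estimate that moves into the
common code (see the comment added in kvm_util.c below): for N extra
pages and x PTEs per page-table page, the levels sum to at most
N/x + N/x^2 + ... = N/(x-1), which N/x*2 over-estimates for any
x >= 2.  A self-contained check of the bound:

  #include <assert.h>
  #include <inttypes.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t n = 1ULL << 18;        /* 1GiB of extra memory in 4K pages */
          uint64_t x = 4096 / 8;          /* PTRS_PER_PAGE(4096) */
          uint64_t exact = 0, level = n;

          while (level) {                 /* N/x + N/x^2 + N/x^3 + ... */
                  level /= x;
                  exact += level;
          }

          printf("exact = %" PRIu64 ", bound = %" PRIu64 "\n", exact, n / x * 2);
          assert(exact <= n / x * 2);
          return 0;
  }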

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../selftests/kvm/include/aarch64/processor.h |  3 ++
 .../selftests/kvm/include/s390x/processor.h   |  4 +++
 .../selftests/kvm/include/x86_64/processor.h  |  4 +++
 .../selftests/kvm/lib/aarch64/processor.c     | 17 ----------
 tools/testing/selftests/kvm/lib/kvm_util.c    | 26 +++++++++++++++
 .../selftests/kvm/lib/s390x/processor.c       | 22 -------------
 .../selftests/kvm/lib/x86_64/processor.c      | 32 -------------------
 7 files changed, 37 insertions(+), 71 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index b7fa0c8551db..5e5849cdd115 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -9,6 +9,9 @@
 
 #include "kvm_util.h"
 
+#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
+#define min_page_size()			(4096)
+#define min_page_shift()		(12)
 
 #define ARM64_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
 			   KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
diff --git a/tools/testing/selftests/kvm/include/s390x/processor.h b/tools/testing/selftests/kvm/include/s390x/processor.h
index e0e96a5f608c..0952f53c538b 100644
--- a/tools/testing/selftests/kvm/include/s390x/processor.h
+++ b/tools/testing/selftests/kvm/include/s390x/processor.h
@@ -5,6 +5,10 @@
 #ifndef SELFTEST_KVM_PROCESSOR_H
 #define SELFTEST_KVM_PROCESSOR_H
 
+#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
+#define min_page_size()			(4096)
+#define min_page_shift()		(12)
+
 /* Bits in the region/segment table entry */
 #define REGION_ENTRY_ORIGIN	~0xfffUL /* region/segment table origin	   */
 #define REGION_ENTRY_PROTECT	0x200	 /* region protection bit	   */
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 82b7fe16a824..7f1fc597ed54 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -13,6 +13,10 @@
 
 #include <asm/msr-index.h>
 
+#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
+#define min_page_size()			(4096)
+#define min_page_shift()		(12)
+
 #define X86_EFLAGS_FIXED	 (1u << 1)
 
 #define X86_CR4_VME		(1ul << 0)
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 2afa6618b396..da90e5a17d3a 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -5,8 +5,6 @@
  * Copyright (C) 2018, Red Hat, Inc.
  */
 
-#define _GNU_SOURCE /* for program_invocation_name */
-
 #include <linux/compiler.h>
 
 #include "kvm_util.h"
@@ -219,21 +217,6 @@ void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 	}
 }
 
-struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
-				 void *guest_code)
-{
-	uint64_t ptrs_per_4k_pte = 512;
-	uint64_t extra_pg_pages = (extra_mem_pages / ptrs_per_4k_pte) * 2;
-	struct kvm_vm *vm;
-
-	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
-
-	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
-	vm_vcpu_add_default(vm, vcpuid, guest_code);
-
-	return vm;
-}
-
 void aarch64_vcpu_setup(struct kvm_vm *vm, int vcpuid, struct kvm_vcpu_init *init)
 {
 	struct kvm_vcpu_init default_init = { .target = -1, };
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index b9943e935dc7..0d9d2242af2e 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -5,6 +5,7 @@
  * Copyright (C) 2018, Google LLC.
  */
 
+#define _GNU_SOURCE /* for program_invocation_name */
 #include "test_util.h"
 #include "kvm_util.h"
 #include "kvm_util_internal.h"
@@ -243,6 +244,31 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	return vm;
 }
 
+struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
+				 void *guest_code)
+{
+	/* The maximum page table size for a memory region will be when the
+	 * smallest pages are used. Considering each page contains x page
+	 * table descriptors, the total extra size for page tables (for extra
+	 * N pages) will be: N/x+N/x^2+N/x^3+... which is definitely smaller
+	 * than N/x*2.
+	 */
+	uint64_t extra_pg_pages = (extra_mem_pages / PTRS_PER_PAGE(min_page_size())) * 2;
+	struct kvm_vm *vm;
+
+	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+
+	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
+
+#ifdef __x86_64__
+	vm_create_irqchip(vm);
+#endif
+
+	vm_vcpu_add_default(vm, vcpuid, guest_code);
+
+	return vm;
+}
+
 /*
  * VM Restart
  *
diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c
index a88c5d665725..1c28e0ee75f2 100644
--- a/tools/testing/selftests/kvm/lib/s390x/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390x/processor.c
@@ -5,8 +5,6 @@
  * Copyright (C) 2019, Red Hat, Inc.
  */
 
-#define _GNU_SOURCE /* for program_invocation_name */
-
 #include "processor.h"
 #include "kvm_util.h"
 #include "../kvm_util_internal.h"
@@ -160,26 +158,6 @@ void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 	virt_dump_region(stream, vm, indent, vm->pgd);
 }
 
-struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
-				 void *guest_code)
-{
-	/*
-	 * The additional amount of pages required for the page tables is:
-	 * 1 * n / 256 + 4 * (n / 256) / 2048 + 4 * (n / 256) / 2048^2 + ...
-	 * which is definitely smaller than (n / 256) * 2.
-	 */
-	uint64_t extra_pg_pages = extra_mem_pages / 256 * 2;
-	struct kvm_vm *vm;
-
-	vm = vm_create(VM_MODE_DEFAULT,
-		       DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
-
-	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
-	vm_vcpu_add_default(vm, vcpuid, guest_code);
-
-	return vm;
-}
-
 void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
 {
 	size_t stack_size =  DEFAULT_STACK_PGS * getpagesize();
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index f6eb34eaa0d2..835909b6038e 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -5,8 +5,6 @@
  * Copyright (C) 2018, Google LLC.
  */
 
-#define _GNU_SOURCE /* for program_invocation_name */
-
 #include "test_util.h"
 #include "kvm_util.h"
 #include "../kvm_util_internal.h"
@@ -721,36 +719,6 @@ void vcpu_set_cpuid(struct kvm_vm *vm,
 
 }
 
-struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
-				 void *guest_code)
-{
-	struct kvm_vm *vm;
-	/*
-	 * For x86 the maximum page table size for a memory region
-	 * will be when only 4K pages are used.  In that case the
-	 * total extra size for page tables (for extra N pages) will
-	 * be: N/512+N/512^2+N/512^3+... which is definitely smaller
-	 * than N/512*2.
-	 */
-	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
-
-	/* Create VM */
-	vm = vm_create(VM_MODE_DEFAULT,
-		       DEFAULT_GUEST_PHY_PAGES + extra_pg_pages,
-		       O_RDWR);
-
-	/* Setup guest code */
-	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
-
-	/* Setup IRQ Chip */
-	vm_create_irqchip(vm);
-
-	/* Add the first vCPU. */
-	vm_vcpu_add_default(vm, vcpuid, guest_code);
-
-	return vm;
-}
-
 /*
  * VCPU Get MSR
  *
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread
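
As a sanity check of the bound in the new comment: with 4 KiB minimum pages
and 8-byte descriptors, PTRS_PER_PAGE(4096) = 512. Mapping N = 2^20 extra
pages (4 GiB of guest memory) then needs at most

	N/512 + N/512^2 + N/512^3 + ... = 2048 + 4 + 1 = 2053

page-table pages (rounding each level up to at least one table), comfortably
below the reserved N/512 * 2 = 4096.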

* [PATCH 10/11] KVM: selftests: Introduce vm_create_[default_]vcpus
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (8 preceding siblings ...)
  2020-11-04 21:23 ` [PATCH 09/11] KVM: selftests: Make vm_create_default common Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-04 21:23 ` [PATCH 11/11] KVM: selftests: Remove create_vm Andrew Jones
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

Introduce new vm_create variants that also take a number of vcpus,
an amount of per-vcpu pages, and optionally a list of vcpuids. These
variants will create default VMs with enough additional pages to
cover the vcpu stacks, per-vcpu pages, and pagetable pages for all.
The new 'default' variant uses VM_MODE_DEFAULT, whereas the other
new variant accepts the mode as a parameter.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  | 10 ++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 36 ++++++++++++++++---
 2 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index ebf7f87d72df..0d652de57e6e 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -242,6 +242,16 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
 				 void *guest_code);
 
+/* Same as vm_create_default, but can be used for more than one vcpu */
+struct kvm_vm *vm_create_default_with_vcpus(uint32_t nr_vcpus, uint64_t extra_mem_pages,
+					    uint32_t num_percpu_pages, void *guest_code,
+					    uint32_t vcpuids[]);
+
+/* Like vm_create_default_with_vcpus, but accepts mode as a parameter */
+struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
+				    uint64_t extra_mem_pages, uint32_t num_percpu_pages,
+				    void *guest_code, uint32_t vcpuids[]);
+
 /*
  * Adds a vCPU with reasonable defaults (e.g. a stack)
  *
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 0d9d2242af2e..e56ed247b71e 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -244,8 +244,9 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	return vm;
 }
 
-struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
-				 void *guest_code)
+struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus,
+				    uint64_t extra_mem_pages, uint32_t num_percpu_pages,
+				    void *guest_code, uint32_t vcpuids[])
 {
 	/* The maximum page table size for a memory region will be when the
 	 * smallest pages are used. Considering each page contains x page
@@ -253,10 +254,19 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
 	 * N pages) will be: N/x+N/x^2+N/x^3+... which is definitely smaller
 	 * than N/x*2.
 	 */
-	uint64_t extra_pg_pages = (extra_mem_pages / PTRS_PER_PAGE(min_page_size())) * 2;
+	uint64_t vcpu_pages = (DEFAULT_STACK_PGS + num_percpu_pages) * nr_vcpus;
+	uint64_t extra_pg_pages = (extra_mem_pages + vcpu_pages) /
+				  PTRS_PER_PAGE(min_page_size()) * 2;
+	uint64_t pages = vm_adjust_num_guest_pages(mode,
+				DEFAULT_GUEST_PHY_PAGES + vcpu_pages + extra_pg_pages);
 	struct kvm_vm *vm;
+	int i;
+
+	TEST_ASSERT(nr_vcpus <= kvm_check_cap(KVM_CAP_MAX_VCPUS),
+		    "nr_vcpus = %d too large for host, max-vcpus = %d",
+		    nr_vcpus, kvm_check_cap(KVM_CAP_MAX_VCPUS));
 
-	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	vm = vm_create(mode, pages, O_RDWR);
 
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 
@@ -264,11 +274,27 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
 	vm_create_irqchip(vm);
 #endif
 
-	vm_vcpu_add_default(vm, vcpuid, guest_code);
+	for (i = 0; i < nr_vcpus; ++i)
+		vm_vcpu_add_default(vm, vcpuids ? vcpuids[i] : i, guest_code);
 
 	return vm;
 }
 
+struct kvm_vm *vm_create_default_with_vcpus(uint32_t nr_vcpus, uint64_t extra_mem_pages,
+					    uint32_t num_percpu_pages, void *guest_code,
+					    uint32_t vcpuids[])
+{
+	return vm_create_with_vcpus(VM_MODE_DEFAULT, nr_vcpus, extra_mem_pages,
+				    num_percpu_pages, guest_code, vcpuids);
+}
+
+struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
+				 void *guest_code)
+{
+	return vm_create_default_with_vcpus(1, extra_mem_pages, 0, guest_code,
+					    (uint32_t []){ vcpuid });
+}
+
 /*
  * VM Restart
  *
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread
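
A caller-side sketch of the new API (the values and the guest_code function
are hypothetical; only the signatures come from the patch):

	/* Four vCPUs with default IDs 0..3, 16 extra data pages per vCPU. */
	struct kvm_vm *vm = vm_create_default_with_vcpus(4, 0, 16,
							 guest_code, NULL);

	/* Same, but with explicit vCPU IDs and an explicit guest mode. */
	uint32_t ids[] = { 10, 20, 30, 40 };
	vm = vm_create_with_vcpus(VM_MODE_DEFAULT, 4, 0, 16, guest_code, ids);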

* [PATCH 11/11] KVM: selftests: Remove create_vm
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (9 preceding siblings ...)
  2020-11-04 21:23 ` [PATCH 10/11] KVM: selftests: Introduce vm_create_[default_]vcpus Andrew Jones
@ 2020-11-04 21:23 ` Andrew Jones
  2020-11-05  7:08 ` [PATCH 00/11] KVM: selftests: Cleanups Christian Borntraeger
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:23 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

Remove create_vm() from demand_paging_test and dirty_log_test by
replacing it with vm_create_with_vcpus(). We also make
vm_guest_mode_params[] global, allowing us to clean up some
page calculations and remove redundant asserts.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../selftests/kvm/demand_paging_test.c        | 70 ++++---------------
 tools/testing/selftests/kvm/dirty_log_test.c  | 62 ++++------------
 .../testing/selftests/kvm/include/kvm_util.h  |  9 +++
 tools/testing/selftests/kvm/lib/kvm_util.c    |  9 +--
 4 files changed, 37 insertions(+), 113 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 693e2c810f15..49a4ee8ca78a 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -7,7 +7,7 @@
  * Copyright (C) 2019, Google, Inc.
  */
 
-#define _GNU_SOURCE /* for program_invocation_name */
+#define _GNU_SOURCE /* for pipe2 */
 
 #include <stdio.h>
 #include <stdlib.h>
@@ -145,39 +145,6 @@ static void *vcpu_worker(void *data)
 	return NULL;
 }
 
-#define PAGE_SHIFT_4K  12
-#define PTES_PER_4K_PT 512
-
-static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
-				uint64_t guest_percpu_mem_size)
-{
-	struct kvm_vm *vm;
-	uint64_t pages = DEFAULT_GUEST_PHY_PAGES;
-
-	/* Account for a few pages per-vCPU for stacks */
-	pages += DEFAULT_STACK_PGS * vcpus;
-
-	/*
-	 * Reserve twice the ammount of memory needed to map the test region and
-	 * the page table / stacks region, at 4k, for page tables. Do the
-	 * calculation with 4K page size: the smallest of all archs. (e.g., 64K
-	 * page size guest will need even less memory for page tables).
-	 */
-	pages += (2 * pages) / PTES_PER_4K_PT;
-	pages += ((2 * vcpus * guest_percpu_mem_size) >> PAGE_SHIFT_4K) /
-		 PTES_PER_4K_PT;
-	pages = vm_adjust_num_guest_pages(mode, pages);
-
-	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
-
-	vm = vm_create(mode, pages, O_RDWR);
-	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
-#ifdef __x86_64__
-	vm_create_irqchip(vm);
-#endif
-	return vm;
-}
-
 static int handle_uffd_page_request(int uffd, uint64_t addr)
 {
 	pid_t tid;
@@ -368,43 +335,32 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct timespec start, end, ts_diff;
 	int *pipefds = NULL;
 	struct kvm_vm *vm;
+	uint64_t guest_percpu_pages;
 	uint64_t guest_num_pages;
 	int vcpu_id;
 	int r;
 
-	vm = create_vm(mode, nr_vcpus, guest_percpu_mem_size);
+	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
-	guest_page_size = vm_get_page_size(vm);
+	host_page_size = getpagesize();
+	guest_page_size = vm_guest_mode_params[mode].page_size;
 
-	TEST_ASSERT(guest_percpu_mem_size % guest_page_size == 0,
-		    "Guest memory size is not guest page size aligned.");
+	guest_percpu_pages = vm_calc_num_guest_pages(mode, guest_percpu_mem_size);
 
-	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) / guest_page_size;
-	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
+	TEST_ASSERT(guest_percpu_pages == guest_percpu_mem_size / guest_page_size,
+		    "Guest memory size is not aligned for guest pages size %ld, try %ld",
+		    guest_page_size, guest_percpu_pages * guest_page_size);
 
-	/*
-	 * If there should be more memory in the guest test region than there
-	 * can be pages in the guest, it will definitely cause problems.
-	 */
-	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
-		    "Requested more guest memory than address space allows.\n"
-		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
-		    guest_num_pages, vm_get_max_gfn(vm), nr_vcpus,
-		    guest_percpu_mem_size);
+	guest_num_pages = nr_vcpus * guest_percpu_pages;
 
-	host_page_size = getpagesize();
-	TEST_ASSERT(guest_percpu_mem_size % host_page_size == 0,
-		    "Guest memory size is not host page size aligned.");
+	vm = vm_create_with_vcpus(mode, nr_vcpus, 0, guest_num_pages, guest_code, NULL);
 
-	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
-			      guest_page_size;
+	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * guest_page_size;
 	guest_test_phys_mem &= ~(host_page_size - 1);
-
 #ifdef __s390x__
 	/* Align to 1M (segment size) */
 	guest_test_phys_mem &= ~((1 << 20) - 1);
 #endif
-
 	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
 
 	/* Add an extra memory slot for testing demand paging */
@@ -442,8 +398,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		vm_paddr_t vcpu_gpa;
 		void *vcpu_hva;
 
-		vm_vcpu_add_default(vm, vcpu_id, guest_code);
-
 		vcpu_gpa = guest_test_phys_mem + (vcpu_id * guest_percpu_mem_size);
 		PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
 			       vcpu_id, vcpu_gpa, vcpu_gpa + guest_percpu_mem_size);
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index f2710c6a60bf..4e7121ee160e 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -5,8 +5,6 @@
  * Copyright (C) 2018, Red Hat, Inc.
  */
 
-#define _GNU_SOURCE /* for program_invocation_name */
-
 #include <stdio.h>
 #include <stdlib.h>
 #include <unistd.h>
@@ -20,8 +18,6 @@
 #include "kvm_util.h"
 #include "processor.h"
 
-#define VCPU_ID				1
-
 /* The memory slot index to track dirty pages */
 #define TEST_MEM_SLOT_INDEX		1
 
@@ -181,9 +177,9 @@ static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot,
 
 static void default_after_vcpu_run(struct kvm_vm *vm)
 {
-	struct kvm_run *run = vcpu_state(vm, VCPU_ID);
+	struct kvm_run *run = vcpu_state(vm, 0);
 
-	TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC,
+	TEST_ASSERT(get_ucall(vm, 0, NULL) == UCALL_SYNC,
 		    "Invalid guest sync status: exit_reason=%s\n",
 		    exit_reason_str(run->exit_reason));
 }
@@ -290,7 +286,7 @@ static void *vcpu_worker(void *data)
 		generate_random_array(guest_array, TEST_PAGES_PER_LOOP);
 		pages_count += TEST_PAGES_PER_LOOP;
 		/* Let the guest dirty the random pages */
-		ret = _vcpu_run(vm, VCPU_ID);
+		ret = _vcpu_run(vm, 0);
 		TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
 		log_mode_after_vcpu_run(vm);
 	}
@@ -367,26 +363,7 @@ static void vm_dirty_log_verify(enum vm_guest_mode mode, unsigned long *bmap)
 	}
 }
 
-static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
-				uint64_t extra_mem_pages, void *guest_code)
-{
-	struct kvm_vm *vm;
-	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
-
-	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
-
-	vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
-	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
-#ifdef __x86_64__
-	vm_create_irqchip(vm);
-#endif
-	log_mode_create_vm_done(vm);
-	vm_vcpu_add_default(vm, vcpuid, guest_code);
-	return vm;
-}
-
 #define DIRTY_MEM_BITS 30 /* 1G */
-#define PAGE_SHIFT_4K  12
 
 struct test_params {
 	unsigned long iterations;
@@ -407,43 +384,34 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		return;
 	}
 
-	/*
-	 * We reserve page table for 2 times of extra dirty mem which
-	 * will definitely cover the original (1G+) test range.  Here
-	 * we do the calculation with 4K page size which is the
-	 * smallest so the page number will be enough for all archs
-	 * (e.g., 64K page size guest will need even less memory for
-	 * page tables).
-	 */
-	vm = create_vm(mode, VCPU_ID,
-		       2ul << (DIRTY_MEM_BITS - PAGE_SHIFT_4K),
-		       guest_code);
+	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
+
+	host_page_size = getpagesize();
+	guest_page_size = vm_guest_mode_params[mode].page_size;
 
-	guest_page_size = vm_get_page_size(vm);
 	/*
 	 * A little more than 1G of guest page sized pages.  Cover the
 	 * case where the size is not aligned to 64 pages.
 	 */
-	guest_num_pages = (1ul << (DIRTY_MEM_BITS -
-				   vm_get_page_shift(vm))) + 3;
-	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
+	guest_num_pages = vm_adjust_num_guest_pages(mode,
+		(1ul << (DIRTY_MEM_BITS - vm_guest_mode_params[mode].page_shift)) + 3);
 
-	host_page_size = getpagesize();
 	host_num_pages = vm_num_host_pages(mode, guest_num_pages);
 
+	vm = vm_create_with_vcpus(mode, 1, guest_num_pages, 0, guest_code, NULL);
+
+	log_mode_create_vm_done(vm);
+
 	if (!p->phys_offset) {
-		guest_test_phys_mem = (vm_get_max_gfn(vm) -
-				       guest_num_pages) * guest_page_size;
+		guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * guest_page_size;
 		guest_test_phys_mem &= ~(host_page_size - 1);
 	} else {
 		guest_test_phys_mem = p->phys_offset;
 	}
-
 #ifdef __s390x__
 	/* Align to 1M (segment size) */
 	guest_test_phys_mem &= ~((1 << 20) - 1);
 #endif
-
 	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
 
 	bmap = bitmap_alloc(host_num_pages);
@@ -463,7 +431,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
 
 #ifdef __x86_64__
-	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
+	vcpu_set_cpuid(vm, 0, kvm_get_supported_cpuid());
 #endif
 	ucall_init(vm, NULL);
 
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 0d652de57e6e..7ee4f7a85ef7 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -55,6 +55,15 @@ enum vm_guest_mode {
 #define vm_guest_mode_string(m) vm_guest_mode_string[m]
 extern const char * const vm_guest_mode_string[];
 
+struct vm_guest_mode_params {
+	unsigned int pa_bits;
+	unsigned int va_bits;
+	unsigned int page_size;
+	unsigned int page_shift;
+};
+
+extern const struct vm_guest_mode_params vm_guest_mode_params[];
+
 enum vm_mem_backing_src_type {
 	VM_MEM_SRC_ANONYMOUS,
 	VM_MEM_SRC_ANONYMOUS_THP,
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e56ed247b71e..0660f583b683 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -115,14 +115,7 @@ const char * const vm_guest_mode_string[] = {
 _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
 	       "Missing new mode strings?");
 
-struct vm_guest_mode_params {
-	unsigned int pa_bits;
-	unsigned int va_bits;
-	unsigned int page_size;
-	unsigned int page_shift;
-};
-
-static const struct vm_guest_mode_params vm_guest_mode_params[] = {
+const struct vm_guest_mode_params vm_guest_mode_params[] = {
 	{ 52, 48,  0x1000, 12 },
 	{ 52, 48, 0x10000, 16 },
 	{ 48, 48,  0x1000, 12 },
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread
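
The payoff of exporting vm_guest_mode_params[] is that a test can derive
per-mode geometry before the VM exists. Condensing the dirty_log_test hunk
above into a sketch:

	uint64_t page_size  = vm_guest_mode_params[mode].page_size;
	uint64_t page_shift = vm_guest_mode_params[mode].page_shift;
	uint64_t guest_num_pages = vm_adjust_num_guest_pages(mode,
			(1ul << (DIRTY_MEM_BITS - page_shift)) + 3);

	struct kvm_vm *vm = vm_create_with_vcpus(mode, 1, guest_num_pages, 0,
						 guest_code, NULL);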

* Re: [PATCH 09/11] KVM: selftests: Make vm_create_default common
  2020-11-04 21:23 ` [PATCH 09/11] KVM: selftests: Make vm_create_default common Andrew Jones
@ 2020-11-04 21:36   ` Andrew Jones
  2020-11-05  7:18     ` Christian Borntraeger
  0 siblings, 1 reply; 24+ messages in thread
From: Andrew Jones @ 2020-11-04 21:36 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, borntraeger, frankja, bgardon, peterx

On Wed, Nov 04, 2020 at 10:23:55PM +0100, Andrew Jones wrote:
> The code is almost 100% the same anyway. Just move it to common
> and add a few arch-specific helpers.
> 
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
>  .../selftests/kvm/include/aarch64/processor.h |  3 ++
>  .../selftests/kvm/include/s390x/processor.h   |  4 +++
>  .../selftests/kvm/include/x86_64/processor.h  |  4 +++
>  .../selftests/kvm/lib/aarch64/processor.c     | 17 ----------
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 26 +++++++++++++++
>  .../selftests/kvm/lib/s390x/processor.c       | 22 -------------
>  .../selftests/kvm/lib/x86_64/processor.c      | 32 -------------------
>  7 files changed, 37 insertions(+), 71 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> index b7fa0c8551db..5e5849cdd115 100644
> --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> @@ -9,6 +9,9 @@
>  
>  #include "kvm_util.h"
>  
> +#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
> +#define min_page_size()			(4096)
> +#define min_page_shift()		(12)
>  
>  #define ARM64_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
>  			   KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
> diff --git a/tools/testing/selftests/kvm/include/s390x/processor.h b/tools/testing/selftests/kvm/include/s390x/processor.h
> index e0e96a5f608c..0952f53c538b 100644
> --- a/tools/testing/selftests/kvm/include/s390x/processor.h
> +++ b/tools/testing/selftests/kvm/include/s390x/processor.h
> @@ -5,6 +5,10 @@
>  #ifndef SELFTEST_KVM_PROCESSOR_H
>  #define SELFTEST_KVM_PROCESSOR_H
>  
> +#define PTRS_PER_PAGE(page_size)	((page_size) / 8)

Doh. I think this 8 is supposed to be a 16 for s390x, considering it
was dividing by 256 in its version of vm_create_default. I need
guidance from s390x gurus as to whether or not I should respin though.

Thanks,
drew

> +#define min_page_size()			(4096)
> +#define min_page_shift()		(12)
> +
>  /* Bits in the region/segment table entry */
>  #define REGION_ENTRY_ORIGIN	~0xfffUL /* region/segment table origin	   */
>  #define REGION_ENTRY_PROTECT	0x200	 /* region protection bit	   */
> diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> index 82b7fe16a824..7f1fc597ed54 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> @@ -13,6 +13,10 @@
>  
>  #include <asm/msr-index.h>
>  
> +#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
> +#define min_page_size()			(4096)
> +#define min_page_shift()		(12)
> +
>  #define X86_EFLAGS_FIXED	 (1u << 1)
>  
>  #define X86_CR4_VME		(1ul << 0)
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> index 2afa6618b396..da90e5a17d3a 100644
> --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> @@ -5,8 +5,6 @@
>   * Copyright (C) 2018, Red Hat, Inc.
>   */
>  
> -#define _GNU_SOURCE /* for program_invocation_name */
> -
>  #include <linux/compiler.h>
>  
>  #include "kvm_util.h"
> @@ -219,21 +217,6 @@ void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
>  	}
>  }
>  
> -struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
> -				 void *guest_code)
> -{
> -	uint64_t ptrs_per_4k_pte = 512;
> -	uint64_t extra_pg_pages = (extra_mem_pages / ptrs_per_4k_pte) * 2;
> -	struct kvm_vm *vm;
> -
> -	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
> -
> -	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
> -	vm_vcpu_add_default(vm, vcpuid, guest_code);
> -
> -	return vm;
> -}
> -
>  void aarch64_vcpu_setup(struct kvm_vm *vm, int vcpuid, struct kvm_vcpu_init *init)
>  {
>  	struct kvm_vcpu_init default_init = { .target = -1, };
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index b9943e935dc7..0d9d2242af2e 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -5,6 +5,7 @@
>   * Copyright (C) 2018, Google LLC.
>   */
>  
> +#define _GNU_SOURCE /* for program_invocation_name */
>  #include "test_util.h"
>  #include "kvm_util.h"
>  #include "kvm_util_internal.h"
> @@ -243,6 +244,31 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
>  	return vm;
>  }
>  
> +struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
> +				 void *guest_code)
> +{
> +	/* The maximum page table size for a memory region will be when the
> +	 * smallest pages are used. Considering each page contains x page
> +	 * table descriptors, the total extra size for page tables (for extra
> +	 * N pages) will be: N/x+N/x^2+N/x^3+... which is definitely smaller
> +	 * than N/x*2.
> +	 */
> +	uint64_t extra_pg_pages = (extra_mem_pages / PTRS_PER_PAGE(min_page_size())) * 2;
> +	struct kvm_vm *vm;
> +
> +	vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
> +
> +	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
> +
> +#ifdef __x86_64__
> +	vm_create_irqchip(vm);
> +#endif
> +
> +	vm_vcpu_add_default(vm, vcpuid, guest_code);
> +
> +	return vm;
> +}
> +
>  /*
>   * VM Restart
>   *
> diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c
> index a88c5d665725..1c28e0ee75f2 100644
> --- a/tools/testing/selftests/kvm/lib/s390x/processor.c
> +++ b/tools/testing/selftests/kvm/lib/s390x/processor.c
> @@ -5,8 +5,6 @@
>   * Copyright (C) 2019, Red Hat, Inc.
>   */
>  
> -#define _GNU_SOURCE /* for program_invocation_name */
> -
>  #include "processor.h"
>  #include "kvm_util.h"
>  #include "../kvm_util_internal.h"
> @@ -160,26 +158,6 @@ void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
>  	virt_dump_region(stream, vm, indent, vm->pgd);
>  }
>  
> -struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
> -				 void *guest_code)
> -{
> -	/*
> -	 * The additional amount of pages required for the page tables is:
> -	 * 1 * n / 256 + 4 * (n / 256) / 2048 + 4 * (n / 256) / 2048^2 + ...
> -	 * which is definitely smaller than (n / 256) * 2.
> -	 */
> -	uint64_t extra_pg_pages = extra_mem_pages / 256 * 2;
> -	struct kvm_vm *vm;
> -
> -	vm = vm_create(VM_MODE_DEFAULT,
> -		       DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
> -
> -	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
> -	vm_vcpu_add_default(vm, vcpuid, guest_code);
> -
> -	return vm;
> -}
> -
>  void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
>  {
>  	size_t stack_size =  DEFAULT_STACK_PGS * getpagesize();
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index f6eb34eaa0d2..835909b6038e 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -5,8 +5,6 @@
>   * Copyright (C) 2018, Google LLC.
>   */
>  
> -#define _GNU_SOURCE /* for program_invocation_name */
> -
>  #include "test_util.h"
>  #include "kvm_util.h"
>  #include "../kvm_util_internal.h"
> @@ -721,36 +719,6 @@ void vcpu_set_cpuid(struct kvm_vm *vm,
>  
>  }
>  
> -struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
> -				 void *guest_code)
> -{
> -	struct kvm_vm *vm;
> -	/*
> -	 * For x86 the maximum page table size for a memory region
> -	 * will be when only 4K pages are used.  In that case the
> -	 * total extra size for page tables (for extra N pages) will
> -	 * be: N/512+N/512^2+N/512^3+... which is definitely smaller
> -	 * than N/512*2.
> -	 */
> -	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
> -
> -	/* Create VM */
> -	vm = vm_create(VM_MODE_DEFAULT,
> -		       DEFAULT_GUEST_PHY_PAGES + extra_pg_pages,
> -		       O_RDWR);
> -
> -	/* Setup guest code */
> -	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
> -
> -	/* Setup IRQ Chip */
> -	vm_create_irqchip(vm);
> -
> -	/* Add the first vCPU. */
> -	vm_vcpu_add_default(vm, vcpuid, guest_code);
> -
> -	return vm;
> -}
> -
>  /*
>   * VCPU Get MSR
>   *
> -- 
> 2.26.2
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread
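
The arithmetic behind the concern: the old s390x vm_create_default reserved
extra_mem_pages / 256 * 2 pages for page tables, while the new common code
reserves extra_mem_pages / PTRS_PER_PAGE(4096) * 2 = extra_mem_pages / 512 * 2,
i.e. only half as much. A divisor of page_size / 16 (4096 / 16 = 256) would
restore the original s390x bound, which is what the 16 above refers to.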

* Re: [PATCH 00/11] KVM: selftests: Cleanups
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (10 preceding siblings ...)
  2020-11-04 21:23 ` [PATCH 11/11] KVM: selftests: Remove create_vm Andrew Jones
@ 2020-11-05  7:08 ` Christian Borntraeger
  2020-11-05 18:55 ` Peter Xu
  2020-11-06 13:01 ` Paolo Bonzini
  13 siblings, 0 replies; 24+ messages in thread
From: Christian Borntraeger @ 2020-11-05  7:08 UTC (permalink / raw)
  To: Andrew Jones, kvm; +Cc: pbonzini, frankja, bgardon, peterx

On 04.11.20 22:23, Andrew Jones wrote:
> This series attempts to clean up demand_paging_test and dirty_log_test
> by factoring out common code, creating some new API along the way. It's
> main goal is to prepare for even more factoring that Ben and Peter want
> to do. The series would have a nice negative diff stat, but it also
> picks up a few of Peter's patches for his new dirty log test. So, the
> +/- diff stat is close to equal. It's not as close as an electoral vote
> count, but it's close.
> 
> I've tested on x86 and AArch64 (one config each), but not s390x.

I see no regression when I run 
make TARGETS=kvm kselftest

on an s390 system with these patches applied.

Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>


> Thanks,
> drew
> 
> 
> Andrew Jones (8):
>   KVM: selftests: Add x86_64/tsc_msrs_test to .gitignore
>   KVM: selftests: Drop pointless vm_create wrapper
>   KVM: selftests: Make the per vcpu memory size global
>   KVM: selftests: Make the number of vcpus global
>   KVM: selftests: Factor out guest mode code
>   KVM: selftests: Make vm_create_default common
>   KVM: selftests: Introduce vm_create_[default_]vcpus
>   KVM: selftests: Remove create_vm
> 
> Peter Xu (3):
>   KVM: selftests: Always clear dirty bitmap after iteration
>   KVM: selftests: Use a single binary for dirty/clear log test
>   KVM: selftests: Introduce after_vcpu_run hook for dirty log test
> 
>  tools/testing/selftests/kvm/.gitignore        |   2 +-
>  tools/testing/selftests/kvm/Makefile          |   4 +-
>  .../selftests/kvm/clear_dirty_log_test.c      |   6 -
>  .../selftests/kvm/demand_paging_test.c        | 213 +++-------
>  tools/testing/selftests/kvm/dirty_log_test.c  | 372 ++++++++++--------
>  .../selftests/kvm/include/aarch64/processor.h |   3 +
>  .../selftests/kvm/include/guest_modes.h       |  21 +
>  .../testing/selftests/kvm/include/kvm_util.h  |  20 +-
>  .../selftests/kvm/include/s390x/processor.h   |   4 +
>  .../selftests/kvm/include/x86_64/processor.h  |   4 +
>  .../selftests/kvm/lib/aarch64/processor.c     |  17 -
>  tools/testing/selftests/kvm/lib/guest_modes.c |  70 ++++
>  tools/testing/selftests/kvm/lib/kvm_util.c    |  62 ++-
>  .../selftests/kvm/lib/s390x/processor.c       |  22 --
>  .../selftests/kvm/lib/x86_64/processor.c      |  32 --
>  15 files changed, 445 insertions(+), 407 deletions(-)
>  delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c
>  create mode 100644 tools/testing/selftests/kvm/include/guest_modes.h
>  create mode 100644 tools/testing/selftests/kvm/lib/guest_modes.c
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 09/11] KVM: selftests: Make vm_create_default common
  2020-11-04 21:36   ` Andrew Jones
@ 2020-11-05  7:18     ` Christian Borntraeger
  2020-11-05  9:59       ` Andrew Jones
  0 siblings, 1 reply; 24+ messages in thread
From: Christian Borntraeger @ 2020-11-05  7:18 UTC (permalink / raw)
  To: Andrew Jones, kvm; +Cc: pbonzini, frankja, bgardon, peterx



On 04.11.20 22:36, Andrew Jones wrote:
> On Wed, Nov 04, 2020 at 10:23:55PM +0100, Andrew Jones wrote:
>> The code is almost 100% the same anyway. Just move it to common
>> and add a few arch-specific helpers.
>>
>> Signed-off-by: Andrew Jones <drjones@redhat.com>
>> ---
>>  .../selftests/kvm/include/aarch64/processor.h |  3 ++
>>  .../selftests/kvm/include/s390x/processor.h   |  4 +++
>>  .../selftests/kvm/include/x86_64/processor.h  |  4 +++
>>  .../selftests/kvm/lib/aarch64/processor.c     | 17 ----------
>>  tools/testing/selftests/kvm/lib/kvm_util.c    | 26 +++++++++++++++
>>  .../selftests/kvm/lib/s390x/processor.c       | 22 -------------
>>  .../selftests/kvm/lib/x86_64/processor.c      | 32 -------------------
>>  7 files changed, 37 insertions(+), 71 deletions(-)
>>
>> diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
>> index b7fa0c8551db..5e5849cdd115 100644
>> --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
>> +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
>> @@ -9,6 +9,9 @@
>>  
>>  #include "kvm_util.h"
>>  
>> +#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
>> +#define min_page_size()			(4096)
>> +#define min_page_shift()		(12)
>>  
>>  #define ARM64_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
>>  			   KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
>> diff --git a/tools/testing/selftests/kvm/include/s390x/processor.h b/tools/testing/selftests/kvm/include/s390x/processor.h
>> index e0e96a5f608c..0952f53c538b 100644
>> --- a/tools/testing/selftests/kvm/include/s390x/processor.h
>> +++ b/tools/testing/selftests/kvm/include/s390x/processor.h
>> @@ -5,6 +5,10 @@
>>  #ifndef SELFTEST_KVM_PROCESSOR_H
>>  #define SELFTEST_KVM_PROCESSOR_H
>>  
>> +#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
> 
> Doh. I think this 8 is supposed to be a 16 for s390x, considering it
> was dividing by 256 in its version of vm_create_default. I need
> guidance from s390x gurus as to whether or not I should respin though.
> 
> Thanks,
> drew
> 

This is kind of tricky. The last-level page table is only 2 KB (256 entries = 1 MB range).
Depending on whether the page table allocation is clever or not (you can have 2 page
tables in one page), this means that indeed 16 might be better. But then you actually
want to change the macro name to PTES_PER_PAGE?

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 09/11] KVM: selftests: Make vm_create_default common
  2020-11-05  7:18     ` Christian Borntraeger
@ 2020-11-05  9:59       ` Andrew Jones
  2020-11-05 18:45         ` Peter Xu
  0 siblings, 1 reply; 24+ messages in thread
From: Andrew Jones @ 2020-11-05  9:59 UTC (permalink / raw)
  To: Christian Borntraeger; +Cc: kvm, pbonzini, frankja, bgardon, peterx

On Thu, Nov 05, 2020 at 08:18:37AM +0100, Christian Borntraeger wrote:
> 
> 
> On 04.11.20 22:36, Andrew Jones wrote:
> > On Wed, Nov 04, 2020 at 10:23:55PM +0100, Andrew Jones wrote:
> >> The code is almost 100% the same anyway. Just move it to common
> >> and add a few arch-specific helpers.
> >>
> >> Signed-off-by: Andrew Jones <drjones@redhat.com>
> >> ---
> >>  .../selftests/kvm/include/aarch64/processor.h |  3 ++
> >>  .../selftests/kvm/include/s390x/processor.h   |  4 +++
> >>  .../selftests/kvm/include/x86_64/processor.h  |  4 +++
> >>  .../selftests/kvm/lib/aarch64/processor.c     | 17 ----------
> >>  tools/testing/selftests/kvm/lib/kvm_util.c    | 26 +++++++++++++++
> >>  .../selftests/kvm/lib/s390x/processor.c       | 22 -------------
> >>  .../selftests/kvm/lib/x86_64/processor.c      | 32 -------------------
> >>  7 files changed, 37 insertions(+), 71 deletions(-)
> >>
> >> diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> >> index b7fa0c8551db..5e5849cdd115 100644
> >> --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> >> +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> >> @@ -9,6 +9,9 @@
> >>  
> >>  #include "kvm_util.h"
> >>  
> >> +#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
> >> +#define min_page_size()			(4096)
> >> +#define min_page_shift()		(12)
> >>  
> >>  #define ARM64_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
> >>  			   KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
> >> diff --git a/tools/testing/selftests/kvm/include/s390x/processor.h b/tools/testing/selftests/kvm/include/s390x/processor.h
> >> index e0e96a5f608c..0952f53c538b 100644
> >> --- a/tools/testing/selftests/kvm/include/s390x/processor.h
> >> +++ b/tools/testing/selftests/kvm/include/s390x/processor.h
> >> @@ -5,6 +5,10 @@
> >>  #ifndef SELFTEST_KVM_PROCESSOR_H
> >>  #define SELFTEST_KVM_PROCESSOR_H
> >>  
> >> +#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
> > 
> > Doh. I think this 8 is supposed to be a 16 for s390x, considering it
> > was dividing by 256 in its version of vm_create_default. I need
> > guidance from s390x gurus as to whether or not I should respin though.
> > 
> > Thanks,
> > drew
> > 
> 
> This is kind of tricky. The last-level page table is only 2 KB (256 entries = 1 MB range).
> Depending on whether the page table allocation is clever or not (you can have 2 page
> tables in one page), this means that indeed 16 might be better. But then you actually
> want to change the macro name to PTES_PER_PAGE?

Thanks Christian,

I'll respin with the macro name change and 16 for s390.

drew


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 09/11] KVM: selftests: Make vm_create_default common
  2020-11-05  9:59       ` Andrew Jones
@ 2020-11-05 18:45         ` Peter Xu
  2020-11-05 18:54           ` Christian Borntraeger
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2020-11-05 18:45 UTC (permalink / raw)
  To: Andrew Jones; +Cc: Christian Borntraeger, kvm, pbonzini, frankja, bgardon

On Thu, Nov 05, 2020 at 10:59:30AM +0100, Andrew Jones wrote:
> > >> +#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
> > > 
> > > Doh. I think this 8 is supposed to be a 16 for s390x, considering it
> > > was dividing by 256 in its version of vm_create_default. I need
> > > guidance from s390x gurus as to whether or not I should respin though.
> > > 
> > > Thanks,
> > > drew
> > > 
> > 
> > This is kind of tricky. The last-level page table is only 2 KB (256 entries = 1 MB range).
> > Depending on whether the page table allocation is clever or not (you can have 2 page
> > tables in one page), this means that indeed 16 might be better. But then you actually
> > want to change the macro name to PTES_PER_PAGE?
> 
> Thanks Christian,
> 
> I'll respin with the macro name change and 16 for s390.

Maybe it could also be moved to a common header, with PTR_SIZE instead defined
per arch?  I'm also curious whether PTR_SIZE would equal "sizeof(void *)",
but it seems not for s390x.  Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 09/11] KVM: selftests: Make vm_create_default common
  2020-11-05 18:45         ` Peter Xu
@ 2020-11-05 18:54           ` Christian Borntraeger
  0 siblings, 0 replies; 24+ messages in thread
From: Christian Borntraeger @ 2020-11-05 18:54 UTC (permalink / raw)
  To: Peter Xu, Andrew Jones; +Cc: kvm, pbonzini, frankja, bgardon



On 05.11.20 19:45, Peter Xu wrote:
> On Thu, Nov 05, 2020 at 10:59:30AM +0100, Andrew Jones wrote:
>>>>> +#define PTRS_PER_PAGE(page_size)	((page_size) / 8)
>>>>
>>>> Doh. I think this 8 is supposed to be a 16 for s390x, considering it
>>>> was dividing by 256 in its version of vm_create_default. I need
>>>> guidance from s390x gurus as to whether or not I should respin though.
>>>>
>>>> Thanks,
>>>> drew
>>>>
>>>
>>> This is kind of tricky. The last-level page table is only 2 KB (256 entries = 1 MB range).
>>> Depending on whether the page table allocation is clever or not (you can have 2 page
>>> tables in one page), this means that indeed 16 might be better. But then you actually
>>> want to change the macro name to PTES_PER_PAGE?
>>
>> Thanks Christian,
>>
>> I'll respin with the macro name change and 16 for s390.
> 
> Maybe it could also be moved to a common header, with PTR_SIZE instead defined
> per arch?  I'm also curious whether PTR_SIZE would equal "sizeof(void *)",
> but it seems not for s390x.  Thanks,

That's why I want to change the name. It is not about the pointer size; it is
about the number of page table entries in a page. And as a page table is just
2 KB on s390, there is a mismatch. So instead of

#define PTRS_PER_PAGE(page_size)	((page_size) / 8)

let us just do
#define PTES_PER_PAGETABLE 256
for s390
and
#define PTES_PER_PAGETABLE 512
for the others

^ permalink raw reply	[flat|nested] 24+ messages in thread
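
Folding that in, the respun common helper would presumably compute the
reservation along these lines (a sketch of the direction agreed above, not
the posted v2):

	/* PTES_PER_PAGETABLE: 256 on s390x (2 KB tables), 512 elsewhere. */
	uint64_t extra_pg_pages = (extra_mem_pages / PTES_PER_PAGETABLE) * 2;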

* Re: [PATCH 00/11] KVM: selftests: Cleanups
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (11 preceding siblings ...)
  2020-11-05  7:08 ` [PATCH 00/11] KVM: selftests: Cleanups Christian Borntraeger
@ 2020-11-05 18:55 ` Peter Xu
  2020-11-05 19:41   ` Ben Gardon
  2020-11-06 13:01 ` Paolo Bonzini
  13 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2020-11-05 18:55 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, pbonzini, borntraeger, frankja, bgardon

On Wed, Nov 04, 2020 at 10:23:46PM +0100, Andrew Jones wrote:
> This series attempts to clean up demand_paging_test and dirty_log_test
> by factoring out common code, creating some new API along the way. It's
> main goal is to prepare for even more factoring that Ben and Peter want
> to do. The series would have a nice negative diff stat, but it also
> picks up a few of Peter's patches for his new dirty log test. So, the
> +/- diff stat is close to equal. It's not as close as an electoral vote
> count, but it's close.
> 
> I've tested on x86 and AArch64 (one config each), but not s390x.

The whole series looks good to me (probably except the PTRS_PER_PAGE one; but
that's not hurting much anyways, I think).  Thanks for picking up the other
patches, even if they made the diff stat much less pretty..

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 00/11] KVM: selftests: Cleanups
  2020-11-05 18:55 ` Peter Xu
@ 2020-11-05 19:41   ` Ben Gardon
  2020-11-06  9:45     ` Andrew Jones
  0 siblings, 1 reply; 24+ messages in thread
From: Ben Gardon @ 2020-11-05 19:41 UTC (permalink / raw)
  To: Peter Xu
  Cc: Andrew Jones, kvm, Paolo Bonzini, Christian Borntraeger, Janosch Frank

On Thu, Nov 5, 2020 at 10:56 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Nov 04, 2020 at 10:23:46PM +0100, Andrew Jones wrote:
> > This series attempts to clean up demand_paging_test and dirty_log_test
> > by factoring out common code, creating some new API along the way. It's
> > main goal is to prepare for even more factoring that Ben and Peter want
> > to do. The series would have a nice negative diff stat, but it also
> > picks up a few of Peter's patches for his new dirty log test. So, the
> > +/- diff stat is close to equal. It's not as close as an electoral vote
> > count, but it's close.
> >
> > I've tested on x86 and AArch64 (one config each), but not s390x.
>
> The whole series looks good to me (probably except the PTRS_PER_PAGE one; but
> that's not hurting much anyways, I think).  Thanks for picking up the other
> patches, even if they made the diff stat much less pretty..

This series looks good to me too. Thanks for doing this Drew!

Sorry I'm later than I wanted to be in reviewing this series. I
learned I was exposed to someone with COVID yesterday, so I've been a
bit scattered. The dirty log perf test series v3 might be delayed a
bit as a result, but I'll send it out as soon as I can after this
series is merged.

>
> --
> Peter Xu
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 00/11] KVM: selftests: Cleanups
  2020-11-05 19:41   ` Ben Gardon
@ 2020-11-06  9:45     ` Andrew Jones
  2020-11-06 15:04       ` Peter Xu
  0 siblings, 1 reply; 24+ messages in thread
From: Andrew Jones @ 2020-11-06  9:45 UTC (permalink / raw)
  To: Ben Gardon
  Cc: Peter Xu, kvm, Paolo Bonzini, Christian Borntraeger, Janosch Frank

On Thu, Nov 05, 2020 at 11:41:24AM -0800, Ben Gardon wrote:
> On Thu, Nov 5, 2020 at 10:56 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Wed, Nov 04, 2020 at 10:23:46PM +0100, Andrew Jones wrote:
> > > This series attempts to clean up demand_paging_test and dirty_log_test
> > > by factoring out common code, creating some new API along the way. It's
> > > main goal is to prepare for even more factoring that Ben and Peter want
> > > to do. The series would have a nice negative diff stat, but it also
> > > picks up a few of Peter's patches for his new dirty log test. So, the
> > > +/- diff stat is close to equal. It's not as close as an electoral vote
> > > count, but it's close.
> > >
> > > I've tested on x86 and AArch64 (one config each), but not s390x.
> >
> > The whole series looks good to me (probably except the PTRS_PER_PAGE one; but
> > that's not hurting much anyways, I think).  Thanks for picking up the other
> > patches, even if they made the diff stat much less pretty..
> 
> This series looks good to me too. Thanks for doing this Drew!
> 
> Sorry I'm later than I wanted to be in reviewing this series. I
> learned I was exposed to someone with COVID yesterday, so I've been a
> bit scattered. The dirty log perf test series v3 might be delayed a
> bit as a result, but I'll send it out as soon as I can after this
> series is merged.
>

Yikes! Don't worry about KVM selftests then. Except for one more question:
can I translate your "looks good to me" into an r-b for the series? And,
same question for Peter. I'll be respinning with the PTES_PER_PAGE change
and can add your guys' r-b's if you want to give them.

Thanks,
drew


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 00/11] KVM: selftests: Cleanups
  2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
                   ` (12 preceding siblings ...)
  2020-11-05 18:55 ` Peter Xu
@ 2020-11-06 13:01 ` Paolo Bonzini
  13 siblings, 0 replies; 24+ messages in thread
From: Paolo Bonzini @ 2020-11-06 13:01 UTC (permalink / raw)
  To: Andrew Jones, kvm; +Cc: borntraeger, frankja, bgardon, peterx

On 04/11/20 22:23, Andrew Jones wrote:
> This series attempts to clean up demand_paging_test and dirty_log_test
> by factoring out common code, creating some new API along the way. It's
> main goal is to prepare for even more factoring that Ben and Peter want
> to do. The series would have a nice negative diff stat, but it also
> picks up a few of Peter's patches for his new dirty log test. So, the
> +/- diff stat is close to equal. It's not as close as an electoral vote
> count, but it's close.
> 
> I've tested on x86 and AArch64 (one config each), but not s390x.
> 
> Thanks,
> drew
> 
> 
> Andrew Jones (8):
>    KVM: selftests: Add x86_64/tsc_msrs_test to .gitignore
>    KVM: selftests: Drop pointless vm_create wrapper
>    KVM: selftests: Make the per vcpu memory size global
>    KVM: selftests: Make the number of vcpus global
>    KVM: selftests: Factor out guest mode code
>    KVM: selftests: Make vm_create_default common
>    KVM: selftests: Introduce vm_create_[default_]vcpus
>    KVM: selftests: Remove create_vm
> 
> Peter Xu (3):
>    KVM: selftests: Always clear dirty bitmap after iteration
>    KVM: selftests: Use a single binary for dirty/clear log test
>    KVM: selftests: Introduce after_vcpu_run hook for dirty log test
> 
>   tools/testing/selftests/kvm/.gitignore        |   2 +-
>   tools/testing/selftests/kvm/Makefile          |   4 +-
>   .../selftests/kvm/clear_dirty_log_test.c      |   6 -
>   .../selftests/kvm/demand_paging_test.c        | 213 +++-------
>   tools/testing/selftests/kvm/dirty_log_test.c  | 372 ++++++++++--------
>   .../selftests/kvm/include/aarch64/processor.h |   3 +
>   .../selftests/kvm/include/guest_modes.h       |  21 +
>   .../testing/selftests/kvm/include/kvm_util.h  |  20 +-
>   .../selftests/kvm/include/s390x/processor.h   |   4 +
>   .../selftests/kvm/include/x86_64/processor.h  |   4 +
>   .../selftests/kvm/lib/aarch64/processor.c     |  17 -
>   tools/testing/selftests/kvm/lib/guest_modes.c |  70 ++++
>   tools/testing/selftests/kvm/lib/kvm_util.c    |  62 ++-
>   .../selftests/kvm/lib/s390x/processor.c       |  22 --
>   .../selftests/kvm/lib/x86_64/processor.c      |  32 --
>   15 files changed, 445 insertions(+), 407 deletions(-)
>   delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c
>   create mode 100644 tools/testing/selftests/kvm/include/guest_modes.h
>   create mode 100644 tools/testing/selftests/kvm/lib/guest_modes.c
> 

Queued (or overridden by patches already in queue) patches 1-8, thanks.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 00/11] KVM: selftests: Cleanups
  2020-11-06  9:45     ` Andrew Jones
@ 2020-11-06 15:04       ` Peter Xu
  2020-11-09 17:11         ` Ben Gardon
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Xu @ 2020-11-06 15:04 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Ben Gardon, kvm, Paolo Bonzini, Christian Borntraeger, Janosch Frank

On Fri, Nov 06, 2020 at 10:45:11AM +0100, Andrew Jones wrote:
> Yikes! Don't worry about KVM selftests then. Except for one more question:
> can I translate your "looks good to me" into an r-b for the series? And,
> same question for Peter. I'll be respinning with the PTES_PER_PAGE change
> and can add your guys' r-b's if you want to give them.

Yes, please feel free to add mine (if there's a repost).  Though I see
that Paolo has queued the series already, so maybe it's even easier to work
directly on top.

Thanks!

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 00/11] KVM: selftests: Cleanups
  2020-11-06 15:04       ` Peter Xu
@ 2020-11-09 17:11         ` Ben Gardon
  0 siblings, 0 replies; 24+ messages in thread
From: Ben Gardon @ 2020-11-09 17:11 UTC (permalink / raw)
  To: Peter Xu
  Cc: Andrew Jones, kvm, Paolo Bonzini, Christian Borntraeger, Janosch Frank

On Fri, Nov 6, 2020 at 7:04 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Fri, Nov 06, 2020 at 10:45:11AM +0100, Andrew Jones wrote:
> > Yikes! Don't worry about KVM selftests then. Except for one more question?
> > Can I translate your "looks good to me" into a r-b for the series? And,
> > same question for Peter. I'll be respinning witht eh PTES_PER_PAGE change
> > and can add your guys' r-b's if you want to give them.

Ah yes, please add my reviewed-by as well, sorry that wasn't clear.

>
> Yes, please feel free to add with mine (if there's a repost).  Though I see
> that Paolo has queued the series already, so maybe it's even easier to directly
> work on top.
>
> Thanks!
>
> --
> Peter Xu
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread

Thread overview: 24+ messages
2020-11-04 21:23 [PATCH 00/11] KVM: selftests: Cleanups Andrew Jones
2020-11-04 21:23 ` [PATCH 01/11] KVM: selftests: Add x86_64/tsc_msrs_test to .gitignore Andrew Jones
2020-11-04 21:23 ` [PATCH 02/11] KVM: selftests: Drop pointless vm_create wrapper Andrew Jones
2020-11-04 21:23 ` [PATCH 03/11] KVM: selftests: Always clear dirty bitmap after iteration Andrew Jones
2020-11-04 21:23 ` [PATCH 04/11] KVM: selftests: Use a single binary for dirty/clear log test Andrew Jones
2020-11-04 21:23 ` [PATCH 05/11] KVM: selftests: Introduce after_vcpu_run hook for dirty " Andrew Jones
2020-11-04 21:23 ` [PATCH 06/11] KVM: selftests: Make the per vcpu memory size global Andrew Jones
2020-11-04 21:23 ` [PATCH 07/11] KVM: selftests: Make the number of vcpus global Andrew Jones
2020-11-04 21:23 ` [PATCH 08/11] KVM: selftests: Factor out guest mode code Andrew Jones
2020-11-04 21:23 ` [PATCH 09/11] KVM: selftests: Make vm_create_default common Andrew Jones
2020-11-04 21:36   ` Andrew Jones
2020-11-05  7:18     ` Christian Borntraeger
2020-11-05  9:59       ` Andrew Jones
2020-11-05 18:45         ` Peter Xu
2020-11-05 18:54           ` Christian Borntraeger
2020-11-04 21:23 ` [PATCH 10/11] KVM: selftests: Introduce vm_create_[default_]vcpus Andrew Jones
2020-11-04 21:23 ` [PATCH 11/11] KVM: selftests: Remove create_vm Andrew Jones
2020-11-05  7:08 ` [PATCH 00/11] KVM: selftests: Cleanups Christian Borntraeger
2020-11-05 18:55 ` Peter Xu
2020-11-05 19:41   ` Ben Gardon
2020-11-06  9:45     ` Andrew Jones
2020-11-06 15:04       ` Peter Xu
2020-11-09 17:11         ` Ben Gardon
2020-11-06 13:01 ` Paolo Bonzini
