All of lore.kernel.org
* [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty
@ 2018-09-18 17:54 Andrew Jones
  2018-09-18 17:54 ` [PATCH 01/13] kvm: selftests: vcpu_setup: set cr4.osfxsr Andrew Jones
                   ` (14 more replies)
  0 siblings, 15 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

This series provides KVM selftests that test dirty log tracking on
AArch64 for both 4K and 64K guest page sizes. Additionally, the
framework provides an easy way to test dirty log tracking with the
recently posted dynamic IPA and 52-bit IPA series[1].

The series breaks down into parts as follows:

 01-02: generalize the guest-to-host-userspace exit support by
        introducing "ucalls", i.e. hypercalls to userspace
 03-05: prepare common code for a new architecture
 06-07: add virtual memory setup support for AArch64
    08: add vcpu setup support for AArch64
    09: port the dirty log test to AArch64
 10-11: add 64K guest page size support for the dirty log test
 12-13: prepare the dirty log test to also test > 40-bit guest
        physical address setups by allowing the test memory
        region to be placed at the top of physical memory

[1] https://www.spinics.net/lists/arm-kernel/msg676819.html

Thanks,
drew


Andrew Jones (13):
  kvm: selftests: vcpu_setup: set cr4.osfxsr
  kvm: selftests: introduce ucall
  kvm: selftests: move arch-specific files to arch-specific locations
  kvm: selftests: add cscope make target
  kvm: selftests: tidy up kvm_util
  kvm: selftests: add vm_phy_pages_alloc
  kvm: selftests: add virt mem support for aarch64
  kvm: selftests: add vcpu support for aarch64
  kvm: selftests: port dirty_log_test to aarch64
  kvm: selftests: introduce new VM mode for 64K pages
  kvm: selftests: dirty_log_test: also test 64K pages on aarch64
  kvm: selftests: stop lying to aarch64 tests about PA-bits
  kvm: selftests: support high GPAs in dirty_log_test

 tools/testing/selftests/kvm/.gitignore        |  11 +-
 tools/testing/selftests/kvm/Makefile          |  36 +-
 tools/testing/selftests/kvm/dirty_log_test.c  | 374 +++++++++----
 .../selftests/kvm/include/aarch64/processor.h |  55 ++
 .../testing/selftests/kvm/include/kvm_util.h  | 166 +++---
 .../testing/selftests/kvm/include/sparsebit.h |   6 +-
 .../testing/selftests/kvm/include/test_util.h |   6 +-
 .../kvm/include/{x86.h => x86_64/processor.h} |  24 +-
 .../selftests/kvm/include/{ => x86_64}/vmx.h  |   6 +-
 .../selftests/kvm/lib/aarch64/processor.c     | 311 +++++++++++
 tools/testing/selftests/kvm/lib/assert.c      |   2 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 499 +++++++-----------
 .../selftests/kvm/lib/kvm_util_internal.h     |  33 +-
 tools/testing/selftests/kvm/lib/ucall.c       | 144 +++++
 .../kvm/lib/{x86.c => x86_64/processor.c}     | 197 ++++++-
 .../selftests/kvm/lib/{ => x86_64}/vmx.c      |   4 +-
 .../kvm/{ => x86_64}/cr4_cpuid_sync_test.c    |  14 +-
 .../kvm/{ => x86_64}/set_sregs_test.c         |   2 +-
 .../selftests/kvm/{ => x86_64}/state_test.c   |  25 +-
 .../kvm/{ => x86_64}/sync_regs_test.c         |   2 +-
 .../kvm/{ => x86_64}/vmx_tsc_adjust_test.c    |  23 +-
 21 files changed, 1329 insertions(+), 611 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/aarch64/processor.h
 rename tools/testing/selftests/kvm/include/{x86.h => x86_64/processor.h} (98%)
 rename tools/testing/selftests/kvm/include/{ => x86_64}/vmx.h (99%)
 create mode 100644 tools/testing/selftests/kvm/lib/aarch64/processor.c
 create mode 100644 tools/testing/selftests/kvm/lib/ucall.c
 rename tools/testing/selftests/kvm/lib/{x86.c => x86_64/processor.c} (85%)
 rename tools/testing/selftests/kvm/lib/{ => x86_64}/vmx.c (99%)
 rename tools/testing/selftests/kvm/{ => x86_64}/cr4_cpuid_sync_test.c (91%)
 rename tools/testing/selftests/kvm/{ => x86_64}/set_sregs_test.c (98%)
 rename tools/testing/selftests/kvm/{ => x86_64}/state_test.c (90%)
 rename tools/testing/selftests/kvm/{ => x86_64}/sync_regs_test.c (99%)
 rename tools/testing/selftests/kvm/{ => x86_64}/vmx_tsc_adjust_test.c (91%)

-- 
2.17.1

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 01/13] kvm: selftests: vcpu_setup: set cr4.osfxsr
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 02/13] kvm: selftests: introduce ucall Andrew Jones
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Guest code may want to call functions that have variable arguments.
To do so, we either need to compile with -mno-sse or enable SSE in
the VCPUs. As it should be pretty safe to turn the feature on, and
-mno-sse would make linking test code with standard libraries
difficult, we choose to enable the feature.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/lib/x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86.c b/tools/testing/selftests/kvm/lib/x86.c
index a3122f1949a8..bb6c65ebfa77 100644
--- a/tools/testing/selftests/kvm/lib/x86.c
+++ b/tools/testing/selftests/kvm/lib/x86.c
@@ -626,7 +626,7 @@ void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot, int gdt_memslot)
 	switch (vm->mode) {
 	case VM_MODE_FLAT48PG:
 		sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
-		sregs.cr4 |= X86_CR4_PAE;
+		sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
 		sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
 
 		kvm_seg_set_unusable(&sregs.ldt);
-- 
2.17.1


* [PATCH 02/13] kvm: selftests: introduce ucall
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
  2018-09-18 17:54 ` [PATCH 01/13] kvm: selftests: vcpu_setup: set cr4.osfxsr Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 03/13] kvm: selftests: move arch-specific files to arch-specific locations Andrew Jones
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Rework the guest exit-to-userspace code to generalize the concept
into what it really is, a "hypercall to userspace", and provide two
implementations of it: the port I/O (PIO) version currently used,
which is only usable by x86, and an MMIO version that other
architectures (except s390) can use.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/Makefile          |   2 +-
 .../selftests/kvm/cr4_cpuid_sync_test.c       |  12 +-
 tools/testing/selftests/kvm/dirty_log_test.c  |   5 +-
 .../testing/selftests/kvm/include/kvm_util.h  |  76 ++++-----
 tools/testing/selftests/kvm/lib/kvm_util.c    |  14 --
 tools/testing/selftests/kvm/lib/ucall.c       | 144 ++++++++++++++++++
 tools/testing/selftests/kvm/state_test.c      |  23 +--
 .../selftests/kvm/vmx_tsc_adjust_test.c       |  17 +--
 8 files changed, 214 insertions(+), 79 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/lib/ucall.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 87d1a8488af8..0b254dbf6e86 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -3,7 +3,7 @@ all:
 top_srcdir = ../../../../
 UNAME_M := $(shell uname -m)
 
-LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c
+LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/ucall.c lib/sparsebit.c
 LIBKVM_x86_64 = lib/x86.c lib/vmx.c
 
 TEST_GEN_PROGS_x86_64 = set_sregs_test
diff --git a/tools/testing/selftests/kvm/cr4_cpuid_sync_test.c b/tools/testing/selftests/kvm/cr4_cpuid_sync_test.c
index 11ec358bf969..fd4f419fe9ab 100644
--- a/tools/testing/selftests/kvm/cr4_cpuid_sync_test.c
+++ b/tools/testing/selftests/kvm/cr4_cpuid_sync_test.c
@@ -67,6 +67,7 @@ int main(int argc, char *argv[])
 	struct kvm_vm *vm;
 	struct kvm_sregs sregs;
 	struct kvm_cpuid_entry2 *entry;
+	struct ucall uc;
 	int rc;
 
 	entry = kvm_get_supported_cpuid_entry(1);
@@ -87,21 +88,20 @@ int main(int argc, char *argv[])
 		rc = _vcpu_run(vm, VCPU_ID);
 
 		if (run->exit_reason == KVM_EXIT_IO) {
-			switch (run->io.port) {
-			case GUEST_PORT_SYNC:
+			switch (get_ucall(vm, VCPU_ID, &uc)) {
+			case UCALL_SYNC:
 				/* emulate hypervisor clearing CR4.OSXSAVE */
 				vcpu_sregs_get(vm, VCPU_ID, &sregs);
 				sregs.cr4 &= ~X86_CR4_OSXSAVE;
 				vcpu_sregs_set(vm, VCPU_ID, &sregs);
 				break;
-			case GUEST_PORT_ABORT:
+			case UCALL_ABORT:
 				TEST_ASSERT(false, "Guest CR4 bit (OSXSAVE) unsynchronized with CPUID bit.");
 				break;
-			case GUEST_PORT_DONE:
+			case UCALL_DONE:
 				goto done;
 			default:
-				TEST_ASSERT(false, "Unknown port 0x%x.",
-					    run->io.port);
+				TEST_ASSERT(false, "Unknown ucall 0x%x.", uc.cmd);
 			}
 		}
 	}
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 0c2cdc105f96..7cf3e4ae6046 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -110,7 +110,7 @@ void *vcpu_worker(void *data)
 	uint64_t loops, *guest_array, pages_count = 0;
 	struct kvm_vm *vm = data;
 	struct kvm_run *run;
-	struct guest_args args;
+	struct ucall uc;
 
 	run = vcpu_state(vm, VCPU_ID);
 
@@ -124,9 +124,8 @@ void *vcpu_worker(void *data)
 	while (!READ_ONCE(host_quit)) {
 		/* Let the guest to dirty these random pages */
 		ret = _vcpu_run(vm, VCPU_ID);
-		guest_args_read(vm, VCPU_ID, &args);
 		if (run->exit_reason == KVM_EXIT_IO &&
-		    args.port == GUEST_PORT_SYNC) {
+		    get_ucall(vm, VCPU_ID, &uc) == UCALL_SYNC) {
 			pages_count += TEST_PAGES_PER_LOOP;
 			generate_random_array(guest_array, TEST_PAGES_PER_LOOP);
 		} else {
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index bb5a25fb82c6..138289f7d489 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -148,43 +148,49 @@ allocate_kvm_dirty_log(struct kvm_userspace_memory_region *region);
 
 int vm_create_device(struct kvm_vm *vm, struct kvm_create_device *cd);
 
-#define GUEST_PORT_SYNC         0x1000
-#define GUEST_PORT_ABORT        0x1001
-#define GUEST_PORT_DONE         0x1002
-
-static inline void __exit_to_l0(uint16_t port, uint64_t arg0, uint64_t arg1)
-{
-	__asm__ __volatile__("in %[port], %%al"
-			     :
-			     : [port]"d"(port), "D"(arg0), "S"(arg1)
-			     : "rax");
-}
-
-/*
- * Allows to pass three arguments to the host: port is 16bit wide,
- * arg0 & arg1 are 64bit wide
- */
-#define GUEST_SYNC_ARGS(_port, _arg0, _arg1) \
-	__exit_to_l0(_port, (uint64_t) (_arg0), (uint64_t) (_arg1))
-
-#define GUEST_ASSERT(_condition) do {				\
-		if (!(_condition))				\
-			GUEST_SYNC_ARGS(GUEST_PORT_ABORT,	\
-					"Failed guest assert: "	\
-					#_condition, __LINE__);	\
-	} while (0)
-
-#define GUEST_SYNC(stage)  GUEST_SYNC_ARGS(GUEST_PORT_SYNC, "hello", stage)
+#define sync_global_to_guest(vm, g) ({				\
+	typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g));	\
+	memcpy(_p, &(g), sizeof(g));				\
+})
+
+#define sync_global_from_guest(vm, g) ({			\
+	typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g));	\
+	memcpy(&(g), _p, sizeof(g));				\
+})
+
+/* ucall implementation types */
+typedef enum {
+	UCALL_PIO,
+	UCALL_MMIO,
+} ucall_type_t;
+
+/* Common ucalls */
+enum {
+	UCALL_NONE,
+	UCALL_SYNC,
+	UCALL_ABORT,
+	UCALL_DONE,
+};
 
-#define GUEST_DONE()  GUEST_SYNC_ARGS(GUEST_PORT_DONE, 0, 0)
+#define UCALL_MAX_ARGS 6
 
-struct guest_args {
-	uint64_t arg0;
-	uint64_t arg1;
-	uint16_t port;
-} __attribute__ ((packed));
+struct ucall {
+	uint64_t cmd;
+	uint64_t args[UCALL_MAX_ARGS];
+};
 
-void guest_args_read(struct kvm_vm *vm, uint32_t vcpu_id,
-		     struct guest_args *args);
+void ucall_init(struct kvm_vm *vm, ucall_type_t type, void *arg);
+void ucall_uninit(struct kvm_vm *vm);
+void ucall(uint64_t cmd, int nargs, ...);
+uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc);
+
+#define GUEST_SYNC(stage)	ucall(UCALL_SYNC, 2, "hello", stage)
+#define GUEST_DONE()		ucall(UCALL_DONE, 0)
+#define GUEST_ASSERT(_condition) do {			\
+	if (!(_condition))				\
+		ucall(UCALL_ABORT, 2,			\
+			"Failed guest assert: "		\
+			#_condition, __LINE__);		\
+} while (0)
 
 #endif /* SELFTEST_KVM_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e9ba389c48db..107622ce2b8c 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1580,17 +1580,3 @@ void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva)
 {
 	return addr_gpa2hva(vm, addr_gva2gpa(vm, gva));
 }
-
-void guest_args_read(struct kvm_vm *vm, uint32_t vcpu_id,
-		     struct guest_args *args)
-{
-	struct kvm_run *run = vcpu_state(vm, vcpu_id);
-	struct kvm_regs regs;
-
-	memset(&regs, 0, sizeof(regs));
-	vcpu_regs_get(vm, vcpu_id, &regs);
-
-	args->port = run->io.port;
-	args->arg0 = regs.rdi;
-	args->arg1 = regs.rsi;
-}
diff --git a/tools/testing/selftests/kvm/lib/ucall.c b/tools/testing/selftests/kvm/lib/ucall.c
new file mode 100644
index 000000000000..4777f9bb5194
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/ucall.c
@@ -0,0 +1,144 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ucall support. A ucall is a "hypercall to userspace".
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ */
+#include "kvm_util.h"
+#include "kvm_util_internal.h"
+
+#define UCALL_PIO_PORT ((uint16_t)0x1000)
+
+static ucall_type_t ucall_type;
+static vm_vaddr_t *ucall_exit_mmio_addr;
+
+static bool ucall_mmio_init(struct kvm_vm *vm, vm_paddr_t gpa)
+{
+	if (kvm_userspace_memory_region_find(vm, gpa, gpa + 1))
+		return false;
+
+	virt_pg_map(vm, gpa, gpa, 0);
+
+	ucall_exit_mmio_addr = (vm_vaddr_t *)gpa;
+	sync_global_to_guest(vm, ucall_exit_mmio_addr);
+
+	return true;
+}
+
+void ucall_init(struct kvm_vm *vm, ucall_type_t type, void *arg)
+{
+	ucall_type = type;
+	sync_global_to_guest(vm, ucall_type);
+
+	if (type == UCALL_PIO)
+		return;
+
+	if (type == UCALL_MMIO) {
+		vm_paddr_t gpa, start, end, step;
+		bool ret;
+
+		if (arg) {
+			gpa = (vm_paddr_t)arg;
+			ret = ucall_mmio_init(vm, gpa);
+			TEST_ASSERT(ret, "Can't set ucall mmio address to %lx", gpa);
+			return;
+		}
+
+		/*
+		 * Find an address within the allowed virtual address space,
+		 * that does _not_ have a KVM memory region associated with it.
+		 * Identity mapping an address like this allows the guest to
+		 * access it, but as KVM doesn't know what to do with it, it
+		 * will assume it's something userspace handles and exit with
+		 * KVM_EXIT_MMIO. Well, at least that's how it works for AArch64.
+		 * Here we start with a guess that the addresses around two
+		 * thirds of the VA space are unmapped and then work both down
+		 * and up from there in 1/6 VA space sized steps.
+		 */
+		start = 1ul << (vm->va_bits * 2 / 3);
+		end = 1ul << vm->va_bits;
+		step = 1ul << (vm->va_bits / 6);
+		for (gpa = start; gpa >= step; gpa -= step) {
+			if (ucall_mmio_init(vm, gpa & ~(vm->page_size - 1)))
+				return;
+		}
+		for (gpa = start + step; gpa < end; gpa += step) {
+			if (ucall_mmio_init(vm, gpa & ~(vm->page_size - 1)))
+				return;
+		}
+		TEST_ASSERT(false, "Can't find a ucall mmio address");
+	}
+}
+
+void ucall_uninit(struct kvm_vm *vm)
+{
+	ucall_type = 0;
+	sync_global_to_guest(vm, ucall_type);
+	ucall_exit_mmio_addr = 0;
+	sync_global_to_guest(vm, ucall_exit_mmio_addr);
+}
+
+static void ucall_pio_exit(struct ucall *uc)
+{
+#ifdef __x86_64__
+	asm volatile("in %[port], %%al"
+		: : [port] "d" (UCALL_PIO_PORT), "D" (uc) : "rax");
+#endif
+}
+
+static void ucall_mmio_exit(struct ucall *uc)
+{
+	*ucall_exit_mmio_addr = (vm_vaddr_t)uc;
+}
+
+void ucall(uint64_t cmd, int nargs, ...)
+{
+	struct ucall uc = {
+		.cmd = cmd,
+	};
+	va_list va;
+	int i;
+
+	nargs = nargs <= UCALL_MAX_ARGS ? nargs : UCALL_MAX_ARGS;
+
+	va_start(va, nargs);
+	for (i = 0; i < nargs; ++i)
+		uc.args[i] = va_arg(va, uint64_t);
+	va_end(va);
+
+	switch (ucall_type) {
+	case UCALL_PIO:
+		ucall_pio_exit(&uc);
+		break;
+	case UCALL_MMIO:
+		ucall_mmio_exit(&uc);
+		break;
+	};
+}
+
+uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc)
+{
+	struct kvm_run *run = vcpu_state(vm, vcpu_id);
+
+	memset(uc, 0, sizeof(*uc));
+
+#ifdef __x86_64__
+	if (ucall_type == UCALL_PIO && run->exit_reason == KVM_EXIT_IO &&
+	    run->io.port == UCALL_PIO_PORT) {
+		struct kvm_regs regs;
+		vcpu_regs_get(vm, vcpu_id, &regs);
+		memcpy(uc, addr_gva2hva(vm, (vm_vaddr_t)regs.rdi), sizeof(*uc));
+		return uc->cmd;
+	}
+#endif
+	if (ucall_type == UCALL_MMIO && run->exit_reason == KVM_EXIT_MMIO &&
+	    run->mmio.phys_addr == (uint64_t)ucall_exit_mmio_addr) {
+		vm_vaddr_t gva;
+		TEST_ASSERT(run->mmio.is_write && run->mmio.len == 8,
+			    "Unexpected ucall exit mmio address access");
+		gva = *(vm_vaddr_t *)run->mmio.data;
+		memcpy(uc, addr_gva2hva(vm, gva), sizeof(*uc));
+	}
+
+	return uc->cmd;
+}
diff --git a/tools/testing/selftests/kvm/state_test.c b/tools/testing/selftests/kvm/state_test.c
index 900e3e9dfb9f..cdf82735d6e7 100644
--- a/tools/testing/selftests/kvm/state_test.c
+++ b/tools/testing/selftests/kvm/state_test.c
@@ -127,6 +127,7 @@ int main(int argc, char *argv[])
 	struct kvm_vm *vm;
 	struct kvm_run *run;
 	struct kvm_x86_state *state;
+	struct ucall uc;
 	int stage;
 
 	struct kvm_cpuid_entry2 *entry = kvm_get_supported_cpuid_entry(1);
@@ -155,23 +156,23 @@ int main(int argc, char *argv[])
 
 		memset(&regs1, 0, sizeof(regs1));
 		vcpu_regs_get(vm, VCPU_ID, &regs1);
-		switch (run->io.port) {
-		case GUEST_PORT_ABORT:
-			TEST_ASSERT(false, "%s at %s:%d", (const char *) regs1.rdi,
-				    __FILE__, regs1.rsi);
+		switch (get_ucall(vm, VCPU_ID, &uc)) {
+		case UCALL_ABORT:
+			TEST_ASSERT(false, "%s at %s:%d", (const char *)uc.args[0],
+				    __FILE__, uc.args[1]);
 			/* NOT REACHED */
-		case GUEST_PORT_SYNC:
+		case UCALL_SYNC:
 			break;
-		case GUEST_PORT_DONE:
+		case UCALL_DONE:
 			goto done;
 		default:
-			TEST_ASSERT(false, "Unknown port 0x%x.", run->io.port);
+			TEST_ASSERT(false, "Unknown ucall 0x%x.", uc.cmd);
 		}
 
-		/* PORT_SYNC is handled here.  */
-		TEST_ASSERT(!strcmp((const char *)regs1.rdi, "hello") &&
-			    regs1.rsi == stage, "Unexpected register values vmexit #%lx, got %lx",
-			    stage, (ulong) regs1.rsi);
+		/* UCALL_SYNC is handled here.  */
+		TEST_ASSERT(!strcmp((const char *)uc.args[0], "hello") &&
+			    uc.args[1] == stage, "Unexpected register values vmexit #%lx, got %lx",
+			    stage, (ulong)uc.args[1]);
 
 		state = vcpu_save_state(vm, VCPU_ID);
 		kvm_vm_release(vm);
diff --git a/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c b/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
index 49bcc68b0235..8d487c78796a 100644
--- a/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
+++ b/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
@@ -146,26 +146,25 @@ int main(int argc, char *argv[])
 
 	for (;;) {
 		volatile struct kvm_run *run = vcpu_state(vm, VCPU_ID);
-		struct guest_args args;
+		struct ucall uc;
 
 		vcpu_run(vm, VCPU_ID);
-		guest_args_read(vm, VCPU_ID, &args);
 		TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
 			    "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n",
 			    run->exit_reason,
 			    exit_reason_str(run->exit_reason));
 
-		switch (args.port) {
-		case GUEST_PORT_ABORT:
-			TEST_ASSERT(false, "%s", (const char *) args.arg0);
+		switch (get_ucall(vm, VCPU_ID, &uc)) {
+		case UCALL_ABORT:
+			TEST_ASSERT(false, "%s", (const char *)uc.args[0]);
 			/* NOT REACHED */
-		case GUEST_PORT_SYNC:
-			report(args.arg1);
+		case UCALL_SYNC:
+			report(uc.args[1]);
 			break;
-		case GUEST_PORT_DONE:
+		case UCALL_DONE:
 			goto done;
 		default:
-			TEST_ASSERT(false, "Unknown port 0x%x.", args.port);
+			TEST_ASSERT(false, "Unknown ucall 0x%x.", uc.cmd);
 		}
 	}
 
-- 
2.17.1


* [PATCH 03/13] kvm: selftests: move arch-specific files to arch-specific locations
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
  2018-09-18 17:54 ` [PATCH 01/13] kvm: selftests: vcpu_setup: set cr4.osfxsr Andrew Jones
  2018-09-18 17:54 ` [PATCH 02/13] kvm: selftests: introduce ucall Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 04/13] kvm: selftests: add cscope make target Andrew Jones
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/.gitignore        | 11 +++++-----
 tools/testing/selftests/kvm/Makefile          | 22 +++++++++----------
 .../testing/selftests/kvm/include/kvm_util.h  |  2 +-
 .../testing/selftests/kvm/include/sparsebit.h |  6 ++---
 .../testing/selftests/kvm/include/test_util.h |  6 ++---
 .../kvm/include/{x86.h => x86_64/processor.h} |  8 +++----
 .../selftests/kvm/include/{ => x86_64}/vmx.h  |  6 ++---
 tools/testing/selftests/kvm/lib/assert.c      |  2 +-
 .../selftests/kvm/lib/kvm_util_internal.h     |  8 +++----
 .../kvm/lib/{x86.c => x86_64/processor.c}     |  6 ++---
 .../selftests/kvm/lib/{ => x86_64}/vmx.c      |  4 ++--
 .../kvm/{ => x86_64}/cr4_cpuid_sync_test.c    |  2 +-
 .../kvm/{ => x86_64}/dirty_log_test.c         |  0
 .../kvm/{ => x86_64}/set_sregs_test.c         |  2 +-
 .../selftests/kvm/{ => x86_64}/state_test.c   |  2 +-
 .../kvm/{ => x86_64}/sync_regs_test.c         |  2 +-
 .../kvm/{ => x86_64}/vmx_tsc_adjust_test.c    |  6 ++---
 17 files changed, 48 insertions(+), 47 deletions(-)
 rename tools/testing/selftests/kvm/include/{x86.h => x86_64/processor.h} (99%)
 rename tools/testing/selftests/kvm/include/{ => x86_64}/vmx.h (99%)
 rename tools/testing/selftests/kvm/lib/{x86.c => x86_64/processor.c} (99%)
 rename tools/testing/selftests/kvm/lib/{ => x86_64}/vmx.c (99%)
 rename tools/testing/selftests/kvm/{ => x86_64}/cr4_cpuid_sync_test.c (98%)
 rename tools/testing/selftests/kvm/{ => x86_64}/dirty_log_test.c (100%)
 rename tools/testing/selftests/kvm/{ => x86_64}/set_sregs_test.c (98%)
 rename tools/testing/selftests/kvm/{ => x86_64}/state_test.c (99%)
 rename tools/testing/selftests/kvm/{ => x86_64}/sync_regs_test.c (99%)
 rename tools/testing/selftests/kvm/{ => x86_64}/vmx_tsc_adjust_test.c (98%)

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 4202139d81d9..9b3933fb5043 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -1,5 +1,6 @@
-cr4_cpuid_sync_test
-set_sregs_test
-sync_regs_test
-vmx_tsc_adjust_test
-state_test
+/x86_64/cr4_cpuid_sync_test
+/x86_64/set_sregs_test
+/x86_64/sync_regs_test
+/x86_64/vmx_tsc_adjust_test
+/x86_64/state_test
+/x86_64/dirty_log_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 0b254dbf6e86..82d77e10b554 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -1,25 +1,25 @@
 all:
 
-top_srcdir = ../../../../
+top_srcdir = ../../../..
 UNAME_M := $(shell uname -m)
 
 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/ucall.c lib/sparsebit.c
-LIBKVM_x86_64 = lib/x86.c lib/vmx.c
+LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c
 
-TEST_GEN_PROGS_x86_64 = set_sregs_test
-TEST_GEN_PROGS_x86_64 += sync_regs_test
-TEST_GEN_PROGS_x86_64 += vmx_tsc_adjust_test
-TEST_GEN_PROGS_x86_64 += cr4_cpuid_sync_test
-TEST_GEN_PROGS_x86_64 += state_test
-TEST_GEN_PROGS_x86_64 += dirty_log_test
+TEST_GEN_PROGS_x86_64 = x86_64/set_sregs_test
+TEST_GEN_PROGS_x86_64 += x86_64/sync_regs_test
+TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test
+TEST_GEN_PROGS_x86_64 += x86_64/cr4_cpuid_sync_test
+TEST_GEN_PROGS_x86_64 += x86_64/state_test
+TEST_GEN_PROGS_x86_64 += x86_64/dirty_log_test
 
 TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
 LIBKVM += $(LIBKVM_$(UNAME_M))
 
 INSTALL_HDR_PATH = $(top_srcdir)/usr
-LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
-LINUX_TOOL_INCLUDE = $(top_srcdir)tools/include
-CFLAGS += -O2 -g -std=gnu99 -I$(LINUX_TOOL_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude -I$(<D) -I..
+LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include
+LINUX_TOOL_INCLUDE = $(top_srcdir)/tools/include
+CFLAGS += -O2 -g -std=gnu99 -I$(LINUX_TOOL_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude -Iinclude/$(UNAME_M) -I..
 LDFLAGS += -lpthread
 
 # After inclusion, $(OUTPUT) is defined and
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 138289f7d489..607f68ba4ceb 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -7,7 +7,7 @@
  *
  */
 #ifndef SELFTEST_KVM_UTIL_H
-#define SELFTEST_KVM_UTIL_H 1
+#define SELFTEST_KVM_UTIL_H
 
 #include "test_util.h"
 
diff --git a/tools/testing/selftests/kvm/include/sparsebit.h b/tools/testing/selftests/kvm/include/sparsebit.h
index 54cfeb6568d3..31e030915c1f 100644
--- a/tools/testing/selftests/kvm/include/sparsebit.h
+++ b/tools/testing/selftests/kvm/include/sparsebit.h
@@ -15,8 +15,8 @@
  * even in the case where most bits are set.
  */
 
-#ifndef _TEST_SPARSEBIT_H_
-#define _TEST_SPARSEBIT_H_
+#ifndef SELFTEST_KVM_SPARSEBIT_H
+#define SELFTEST_KVM_SPARSEBIT_H
 
 #include <stdbool.h>
 #include <stdint.h>
@@ -72,4 +72,4 @@ void sparsebit_validate_internal(struct sparsebit *sbit);
 }
 #endif
 
-#endif /* _TEST_SPARSEBIT_H_ */
+#endif /* SELFTEST_KVM_SPARSEBIT_H */
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 73c3933436ec..c7dafe8bd02c 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -7,8 +7,8 @@
  *
  */
 
-#ifndef TEST_UTIL_H
-#define TEST_UTIL_H 1
+#ifndef SELFTEST_KVM_TEST_UTIL_H
+#define SELFTEST_KVM_TEST_UTIL_H
 
 #include <stdlib.h>
 #include <stdarg.h>
@@ -41,4 +41,4 @@ void test_assert(bool exp, const char *exp_str,
 		    #a, #b, #a, (unsigned long) __a, #b, (unsigned long) __b); \
 } while (0)
 
-#endif /* TEST_UTIL_H */
+#endif /* SELFTEST_KVM_TEST_UTIL_H */
diff --git a/tools/testing/selftests/kvm/include/x86.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
similarity index 99%
rename from tools/testing/selftests/kvm/include/x86.h
rename to tools/testing/selftests/kvm/include/x86_64/processor.h
index 42c3596815b8..bb76e8db9b2b 100644
--- a/tools/testing/selftests/kvm/include/x86.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -1,5 +1,5 @@
 /*
- * tools/testing/selftests/kvm/include/x86.h
+ * tools/testing/selftests/kvm/include/x86_64/processor.h
  *
  * Copyright (C) 2018, Google LLC.
  *
@@ -7,8 +7,8 @@
  *
  */
 
-#ifndef SELFTEST_KVM_X86_H
-#define SELFTEST_KVM_X86_H
+#ifndef SELFTEST_KVM_PROCESSOR_H
+#define SELFTEST_KVM_PROCESSOR_H
 
 #include <assert.h>
 #include <stdint.h>
@@ -1044,4 +1044,4 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
 #define MSR_VM_IGNNE                    0xc0010115
 #define MSR_VM_HSAVE_PA                 0xc0010117
 
-#endif /* !SELFTEST_KVM_X86_H */
+#endif /* SELFTEST_KVM_PROCESSOR_H */
diff --git a/tools/testing/selftests/kvm/include/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h
similarity index 99%
rename from tools/testing/selftests/kvm/include/vmx.h
rename to tools/testing/selftests/kvm/include/x86_64/vmx.h
index b9ffe1024d3a..12ebd836f7ef 100644
--- a/tools/testing/selftests/kvm/include/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h
@@ -1,5 +1,5 @@
 /*
- * tools/testing/selftests/kvm/include/vmx.h
+ * tools/testing/selftests/kvm/include/x86_64/vmx.h
  *
  * Copyright (C) 2018, Google LLC.
  *
@@ -11,7 +11,7 @@
 #define SELFTEST_KVM_VMX_H
 
 #include <stdint.h>
-#include "x86.h"
+#include "processor.h"
 
 #define CPUID_VMX_BIT				5
 
@@ -549,4 +549,4 @@ struct vmx_pages *vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva);
 bool prepare_for_vmx_operation(struct vmx_pages *vmx);
 void prepare_vmcs(struct vmx_pages *vmx, void *guest_rip, void *guest_rsp);
 
-#endif /* !SELFTEST_KVM_VMX_H */
+#endif /* SELFTEST_KVM_VMX_H */
diff --git a/tools/testing/selftests/kvm/lib/assert.c b/tools/testing/selftests/kvm/lib/assert.c
index cd01144d27c8..6398efe67885 100644
--- a/tools/testing/selftests/kvm/lib/assert.c
+++ b/tools/testing/selftests/kvm/lib/assert.c
@@ -13,7 +13,7 @@
 #include <execinfo.h>
 #include <sys/syscall.h>
 
-#include "../../kselftest.h"
+#include "kselftest.h"
 
 /* Dumps the current stack trace to stderr. */
 static void __attribute__((noinline)) test_dump_stack(void);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
index 542ed606b338..0278315cd930 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h
+++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
@@ -1,13 +1,13 @@
 /*
- * tools/testing/selftests/kvm/lib/kvm_util.c
+ * tools/testing/selftests/kvm/lib/kvm_util_internal.h
  *
  * Copyright (C) 2018, Google LLC.
  *
  * This work is licensed under the terms of the GNU GPL, version 2.
  */
 
-#ifndef KVM_UTIL_INTERNAL_H
-#define KVM_UTIL_INTERNAL_H 1
+#ifndef SELFTEST_KVM_UTIL_INTERNAL_H
+#define SELFTEST_KVM_UTIL_INTERNAL_H
 
 #include "sparsebit.h"
 
@@ -69,4 +69,4 @@ void regs_dump(FILE *stream, struct kvm_regs *regs,
 void sregs_dump(FILE *stream, struct kvm_sregs *sregs,
 	uint8_t indent);
 
-#endif
+#endif /* SELFTEST_KVM_UTIL_INTERNAL_H */
diff --git a/tools/testing/selftests/kvm/lib/x86.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
similarity index 99%
rename from tools/testing/selftests/kvm/lib/x86.c
rename to tools/testing/selftests/kvm/lib/x86_64/processor.c
index bb6c65ebfa77..79c2c5c203c0 100644
--- a/tools/testing/selftests/kvm/lib/x86.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1,5 +1,5 @@
 /*
- * tools/testing/selftests/kvm/lib/x86.c
+ * tools/testing/selftests/kvm/lib/x86_64/processor.c
  *
  * Copyright (C) 2018, Google LLC.
  *
@@ -10,8 +10,8 @@
 
 #include "test_util.h"
 #include "kvm_util.h"
-#include "kvm_util_internal.h"
-#include "x86.h"
+#include "../kvm_util_internal.h"
+#include "processor.h"
 
 /* Minimum physical address used for virtual translation tables. */
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
diff --git a/tools/testing/selftests/kvm/lib/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
similarity index 99%
rename from tools/testing/selftests/kvm/lib/vmx.c
rename to tools/testing/selftests/kvm/lib/x86_64/vmx.c
index b987c3c970eb..d7c401472247 100644
--- a/tools/testing/selftests/kvm/lib/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
@@ -1,5 +1,5 @@
 /*
- * tools/testing/selftests/kvm/lib/x86.c
+ * tools/testing/selftests/kvm/lib/x86_64/vmx.c
  *
  * Copyright (C) 2018, Google LLC.
  *
@@ -10,7 +10,7 @@
 
 #include "test_util.h"
 #include "kvm_util.h"
-#include "x86.h"
+#include "processor.h"
 #include "vmx.h"
 
 /* Allocate memory regions for nested VMX tests.
diff --git a/tools/testing/selftests/kvm/cr4_cpuid_sync_test.c b/tools/testing/selftests/kvm/x86_64/cr4_cpuid_sync_test.c
similarity index 98%
rename from tools/testing/selftests/kvm/cr4_cpuid_sync_test.c
rename to tools/testing/selftests/kvm/x86_64/cr4_cpuid_sync_test.c
index fd4f419fe9ab..d503a51fad30 100644
--- a/tools/testing/selftests/kvm/cr4_cpuid_sync_test.c
+++ b/tools/testing/selftests/kvm/x86_64/cr4_cpuid_sync_test.c
@@ -17,7 +17,7 @@
 #include "test_util.h"
 
 #include "kvm_util.h"
-#include "x86.h"
+#include "processor.h"
 
 #define X86_FEATURE_XSAVE	(1<<26)
 #define X86_FEATURE_OSXSAVE	(1<<27)
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/x86_64/dirty_log_test.c
similarity index 100%
rename from tools/testing/selftests/kvm/dirty_log_test.c
rename to tools/testing/selftests/kvm/x86_64/dirty_log_test.c
diff --git a/tools/testing/selftests/kvm/set_sregs_test.c b/tools/testing/selftests/kvm/x86_64/set_sregs_test.c
similarity index 98%
rename from tools/testing/selftests/kvm/set_sregs_test.c
rename to tools/testing/selftests/kvm/x86_64/set_sregs_test.c
index 881419d5746e..35640e8e95bc 100644
--- a/tools/testing/selftests/kvm/set_sregs_test.c
+++ b/tools/testing/selftests/kvm/x86_64/set_sregs_test.c
@@ -22,7 +22,7 @@
 #include "test_util.h"
 
 #include "kvm_util.h"
-#include "x86.h"
+#include "processor.h"
 
 #define VCPU_ID                  5
 
diff --git a/tools/testing/selftests/kvm/state_test.c b/tools/testing/selftests/kvm/x86_64/state_test.c
similarity index 99%
rename from tools/testing/selftests/kvm/state_test.c
rename to tools/testing/selftests/kvm/x86_64/state_test.c
index cdf82735d6e7..43df194a7c1e 100644
--- a/tools/testing/selftests/kvm/state_test.c
+++ b/tools/testing/selftests/kvm/x86_64/state_test.c
@@ -17,7 +17,7 @@
 #include "test_util.h"
 
 #include "kvm_util.h"
-#include "x86.h"
+#include "processor.h"
 #include "vmx.h"
 
 #define VCPU_ID		5
diff --git a/tools/testing/selftests/kvm/sync_regs_test.c b/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
similarity index 99%
rename from tools/testing/selftests/kvm/sync_regs_test.c
rename to tools/testing/selftests/kvm/x86_64/sync_regs_test.c
index 213343e5dff9..c8478ce9ea77 100644
--- a/tools/testing/selftests/kvm/sync_regs_test.c
+++ b/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
@@ -19,7 +19,7 @@
 
 #include "test_util.h"
 #include "kvm_util.h"
-#include "x86.h"
+#include "processor.h"
 
 #define VCPU_ID 5
 
diff --git a/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c b/tools/testing/selftests/kvm/x86_64/vmx_tsc_adjust_test.c
similarity index 98%
rename from tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
rename to tools/testing/selftests/kvm/x86_64/vmx_tsc_adjust_test.c
index 8d487c78796a..38a91a5f04ac 100644
--- a/tools/testing/selftests/kvm/vmx_tsc_adjust_test.c
+++ b/tools/testing/selftests/kvm/x86_64/vmx_tsc_adjust_test.c
@@ -1,5 +1,5 @@
 /*
- * gtests/tests/vmx_tsc_adjust_test.c
+ * vmx_tsc_adjust_test
  *
  * Copyright (C) 2018, Google LLC.
  *
@@ -22,13 +22,13 @@
 
 #include "test_util.h"
 #include "kvm_util.h"
-#include "x86.h"
+#include "processor.h"
 #include "vmx.h"
 
 #include <string.h>
 #include <sys/ioctl.h>
 
-#include "../kselftest.h"
+#include "kselftest.h"
 
 #ifndef MSR_IA32_TSC_ADJUST
 #define MSR_IA32_TSC_ADJUST 0x3b
-- 
2.17.1

* [PATCH 04/13] kvm: selftests: add cscope make target
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (2 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 03/13] kvm: selftests: move arch-specific files to arch-specific locations Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 05/13] kvm: selftests: tidy up kvm_util Andrew Jones
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/Makefile | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 82d77e10b554..d603d7865cd1 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -28,7 +28,7 @@ include ../lib.mk
 
 STATIC_LIBS := $(OUTPUT)/libkvm.a
 LIBKVM_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM))
-EXTRA_CLEAN += $(LIBKVM_OBJ) $(STATIC_LIBS)
+EXTRA_CLEAN += $(LIBKVM_OBJ) $(STATIC_LIBS) cscope.*
 
 x := $(shell mkdir -p $(sort $(dir $(LIBKVM_OBJ))))
 $(LIBKVM_OBJ): $(OUTPUT)/%.o: %.c
@@ -40,3 +40,12 @@ $(OUTPUT)/libkvm.a: $(LIBKVM_OBJ)
 all: $(STATIC_LIBS)
 $(TEST_GEN_PROGS): $(STATIC_LIBS)
 $(STATIC_LIBS):| khdr
+
+cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib ..
+cscope:
+	$(RM) cscope.*
+	(find $(include_paths) -name '*.h' \
+		-exec realpath --relative-base=$(PWD) {} \;; \
+	find . -name '*.c' \
+		-exec realpath --relative-base=$(PWD) {} \;) | sort -u > cscope.files
+	cscope -b
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread
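The cscope target added above boils down to collecting every header and source path into cscope.files, de-duplicated, and then indexing it. A hand-run sketch of the same idea, using a throwaway directory and illustrative file names (the real target also resolves paths with realpath --relative-base against the include paths and finishes with cscope -b):

```shell
# Set up a tiny stand-in for the selftests tree (paths are illustrative).
mkdir -p /tmp/cscope-demo/include /tmp/cscope-demo/lib
touch /tmp/cscope-demo/include/kvm_util.h /tmp/cscope-demo/lib/kvm_util.c
cd /tmp/cscope-demo

# Collect headers from the include paths and sources from the tree,
# sorted and de-duplicated, into cscope.files -- the list cscope indexes.
(find include -name '*.h'; find . -name '*.c') | sort -u > cscope.files
cat cscope.files
```

Running cscope -b on the result then builds the cross-reference database that the EXTRA_CLEAN addition (cscope.*) knows how to remove again.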

* [PATCH 05/13] kvm: selftests: tidy up kvm_util
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (3 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 04/13] kvm: selftests: add cscope make target Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 06/13] kvm: selftests: add vm_phy_pages_alloc Andrew Jones
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Tidy up the kvm_util code: fix code/comment formatting, remove unused
code, and move x86-specific code out. We also move vcpu_dump() out of
the common code, because not all arches (e.g. AArch64) have
KVM_GET_REGS.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  77 ++--
 .../selftests/kvm/include/x86_64/processor.h  |  16 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 372 +++++-------------
 .../selftests/kvm/lib/kvm_util_internal.h     |  22 +-
 .../selftests/kvm/lib/x86_64/processor.c      | 179 +++++++++
 .../selftests/kvm/x86_64/dirty_log_test.c     |   1 +
 6 files changed, 337 insertions(+), 330 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 607f68ba4ceb..fcde5cf48d06 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -17,12 +17,6 @@
 
 #include "sparsebit.h"
 
-/*
- * Memslots can't cover the gfn starting at this gpa otherwise vCPUs can't be
- * created. Only applies to VMs using EPT.
- */
-#define KVM_DEFAULT_IDENTITY_MAP_ADDRESS 0xfffbc000ul
-
 
 /* Callers of kvm_util only have an incomplete/opaque description of the
  * structure kvm_util is using to maintain the state of a VM.
@@ -33,11 +27,11 @@ typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
 typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
 
 /* Minimum allocated guest virtual and physical addresses */
-#define KVM_UTIL_MIN_VADDR 0x2000
+#define KVM_UTIL_MIN_VADDR		0x2000
 
 #define DEFAULT_GUEST_PHY_PAGES		512
 #define DEFAULT_GUEST_STACK_VADDR_MIN	0xab6000
-#define DEFAULT_STACK_PGS               5
+#define DEFAULT_STACK_PGS		5
 
 enum vm_guest_mode {
 	VM_MODE_FLAT48PG,
@@ -57,15 +51,15 @@ void kvm_vm_restart(struct kvm_vm *vmp, int perm);
 void kvm_vm_release(struct kvm_vm *vmp);
 void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log);
 
-int kvm_memcmp_hva_gva(void *hva,
-	struct kvm_vm *vm, const vm_vaddr_t gva, size_t len);
+int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva,
+		       size_t len);
 
 void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename,
-	uint32_t data_memslot, uint32_t pgd_memslot);
+		     uint32_t data_memslot, uint32_t pgd_memslot);
 
 void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
-void vcpu_dump(FILE *stream, struct kvm_vm *vm,
-	uint32_t vcpuid, uint8_t indent);
+void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid,
+	       uint8_t indent);
 
 void vm_create_irqchip(struct kvm_vm *vm);
 
@@ -74,13 +68,14 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	uint64_t guest_paddr, uint32_t slot, uint64_t npages,
 	uint32_t flags);
 
-void vcpu_ioctl(struct kvm_vm *vm,
-	uint32_t vcpuid, unsigned long ioctl, void *arg);
+void vcpu_ioctl(struct kvm_vm *vm, uint32_t vcpuid, unsigned long ioctl,
+		void *arg);
 void vm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg);
 void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
-void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid, int pgd_memslot, int gdt_memslot);
+void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid, int pgd_memslot,
+		 int gdt_memslot);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
-	uint32_t data_memslot, uint32_t pgd_memslot);
+			  uint32_t data_memslot, uint32_t pgd_memslot);
 void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 	      size_t size, uint32_t pgd_memslot);
 void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
@@ -92,53 +87,33 @@ struct kvm_run *vcpu_state(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_run(struct kvm_vm *vm, uint32_t vcpuid);
 int _vcpu_run(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_set_mp_state(struct kvm_vm *vm, uint32_t vcpuid,
-	struct kvm_mp_state *mp_state);
-void vcpu_regs_get(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_regs *regs);
-void vcpu_regs_set(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_regs *regs);
+		       struct kvm_mp_state *mp_state);
+void vcpu_regs_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs);
+void vcpu_regs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs);
 void vcpu_args_set(struct kvm_vm *vm, uint32_t vcpuid, unsigned int num, ...);
-void vcpu_sregs_get(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_sregs *sregs);
-void vcpu_sregs_set(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_sregs *sregs);
-int _vcpu_sregs_set(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_sregs *sregs);
+void vcpu_sregs_get(struct kvm_vm *vm, uint32_t vcpuid,
+		    struct kvm_sregs *sregs);
+void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
+		    struct kvm_sregs *sregs);
+int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid,
+		    struct kvm_sregs *sregs);
 void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
-			  struct kvm_vcpu_events *events);
+		     struct kvm_vcpu_events *events);
 void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
-			  struct kvm_vcpu_events *events);
+		     struct kvm_vcpu_events *events);
 
 const char *exit_reason_str(unsigned int exit_reason);
 
 void virt_pgd_alloc(struct kvm_vm *vm, uint32_t pgd_memslot);
 void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
-	uint32_t pgd_memslot);
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm,
-	vm_paddr_t paddr_min, uint32_t memslot);
-
-struct kvm_cpuid2 *kvm_get_supported_cpuid(void);
-void vcpu_set_cpuid(
-	struct kvm_vm *vm, uint32_t vcpuid, struct kvm_cpuid2 *cpuid);
-
-struct kvm_cpuid_entry2 *
-kvm_get_supported_cpuid_index(uint32_t function, uint32_t index);
-
-static inline struct kvm_cpuid_entry2 *
-kvm_get_supported_cpuid_entry(uint32_t function)
-{
-	return kvm_get_supported_cpuid_index(function, 0);
-}
+		 uint32_t pgd_memslot);
+vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
+			     uint32_t memslot);
 
 struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_size,
 				 void *guest_code);
 void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code);
 
-typedef void (*vmx_guest_code_t)(vm_vaddr_t vmxon_vaddr,
-				 vm_paddr_t vmxon_paddr,
-				 vm_vaddr_t vmcs_vaddr,
-				 vm_paddr_t vmcs_paddr);
-
 struct kvm_userspace_memory_region *
 kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start,
 				 uint64_t end);
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index bb76e8db9b2b..edec2aad2440 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -305,7 +305,21 @@ static inline unsigned long get_xmm(int n)
 
 struct kvm_x86_state;
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
-void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *state);
+void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
+		     struct kvm_x86_state *state);
+
+struct kvm_cpuid2 *kvm_get_supported_cpuid(void);
+void vcpu_set_cpuid(struct kvm_vm *vm, uint32_t vcpuid,
+		    struct kvm_cpuid2 *cpuid);
+
+struct kvm_cpuid_entry2 *
+kvm_get_supported_cpuid_index(uint32_t function, uint32_t index);
+
+static inline struct kvm_cpuid_entry2 *
+kvm_get_supported_cpuid_entry(uint32_t function)
+{
+	return kvm_get_supported_cpuid_index(function, 0);
+}
 
 /*
  * Basic CPU control in CR0
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 107622ce2b8c..19649ad6e015 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -16,8 +16,6 @@
 #include <sys/stat.h>
 #include <linux/kernel.h>
 
-#define KVM_DEV_PATH "/dev/kvm"
-
 #define KVM_UTIL_PGS_PER_HUGEPG 512
 #define KVM_UTIL_MIN_PADDR      0x2000
 
@@ -30,7 +28,8 @@ static void *align(void *x, size_t size)
 	return (void *) (((size_t) x + mask) & ~mask);
 }
 
-/* Capability
+/*
+ * Capability
  *
  * Input Args:
  *   cap - Capability
@@ -69,13 +68,13 @@ static void vm_open(struct kvm_vm *vm, int perm)
 	if (vm->kvm_fd < 0)
 		exit(KSFT_SKIP);
 
-	/* Create VM. */
 	vm->fd = ioctl(vm->kvm_fd, KVM_CREATE_VM, NULL);
 	TEST_ASSERT(vm->fd >= 0, "KVM_CREATE_VM ioctl failed, "
 		"rc: %i errno: %i", vm->fd, errno);
 }
 
-/* VM Create
+/*
+ * VM Create
  *
  * Input Args:
  *   mode - VM Mode (e.g. VM_MODE_FLAT48PG)
@@ -98,7 +97,6 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	struct kvm_vm *vm;
 	int kvm_fd;
 
-	/* Allocate memory. */
 	vm = calloc(1, sizeof(*vm));
 	TEST_ASSERT(vm != NULL, "Insufficent Memory");
 
@@ -136,7 +134,8 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	return vm;
 }
 
-/* VM Restart
+/*
+ * VM Restart
  *
  * Input Args:
  *   vm - VM that has been released before
@@ -163,7 +162,8 @@ void kvm_vm_restart(struct kvm_vm *vmp, int perm)
 			    "  rc: %i errno: %i\n"
 			    "  slot: %u flags: 0x%x\n"
 			    "  guest_phys_addr: 0x%lx size: 0x%lx",
-			    ret, errno, region->region.slot, region->region.flags,
+			    ret, errno, region->region.slot,
+			    region->region.flags,
 			    region->region.guest_phys_addr,
 			    region->region.memory_size);
 	}
@@ -179,7 +179,8 @@ void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
 		    strerror(-ret));
 }
 
-/* Userspace Memory Region Find
+/*
+ * Userspace Memory Region Find
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -197,8 +198,8 @@ void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
  * of the regions is returned.  Null is returned only when no overlapping
  * region exists.
  */
-static struct userspace_mem_region *userspace_mem_region_find(
-	struct kvm_vm *vm, uint64_t start, uint64_t end)
+static struct userspace_mem_region *
+userspace_mem_region_find(struct kvm_vm *vm, uint64_t start, uint64_t end)
 {
 	struct userspace_mem_region *region;
 
@@ -214,7 +215,8 @@ static struct userspace_mem_region *userspace_mem_region_find(
 	return NULL;
 }
 
-/* KVM Userspace Memory Region Find
+/*
+ * KVM Userspace Memory Region Find
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -242,7 +244,8 @@ kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start,
 	return &region->region;
 }
 
-/* VCPU Find
+/*
+ * VCPU Find
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -257,8 +260,7 @@ kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start,
  * returns a pointer to it.  Returns NULL if the VM doesn't contain a VCPU
  * for the specified vcpuid.
  */
-struct vcpu *vcpu_find(struct kvm_vm *vm,
-	uint32_t vcpuid)
+struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
 {
 	struct vcpu *vcpup;
 
@@ -270,7 +272,8 @@ struct vcpu *vcpu_find(struct kvm_vm *vm,
 	return NULL;
 }
 
-/* VM VCPU Remove
+/*
+ * VM VCPU Remove
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -307,11 +310,9 @@ void kvm_vm_release(struct kvm_vm *vmp)
 {
 	int ret;
 
-	/* Free VCPUs. */
 	while (vmp->vcpu_head)
 		vm_vcpu_rm(vmp, vmp->vcpu_head->id);
 
-	/* Close file descriptor for the VM. */
 	ret = close(vmp->fd);
 	TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"
 		"  vmp->fd: %i rc: %i errno: %i", vmp->fd, ret, errno);
@@ -321,7 +322,8 @@ void kvm_vm_release(struct kvm_vm *vmp)
 		"  vmp->kvm_fd: %i rc: %i errno: %i", vmp->kvm_fd, ret, errno);
 }
 
-/* Destroys and frees the VM pointed to by vmp.
+/*
+ * Destroys and frees the VM pointed to by vmp.
  */
 void kvm_vm_free(struct kvm_vm *vmp)
 {
@@ -360,7 +362,8 @@ void kvm_vm_free(struct kvm_vm *vmp)
 	free(vmp);
 }
 
-/* Memory Compare, host virtual to guest virtual
+/*
+ * Memory Compare, host virtual to guest virtual
  *
  * Input Args:
  *   hva - Starting host virtual address
@@ -382,23 +385,25 @@ void kvm_vm_free(struct kvm_vm *vmp)
  * a length of len, to the guest bytes starting at the guest virtual
  * address given by gva.
  */
-int kvm_memcmp_hva_gva(void *hva,
-	struct kvm_vm *vm, vm_vaddr_t gva, size_t len)
+int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, vm_vaddr_t gva, size_t len)
 {
 	size_t amt;
 
-	/* Compare a batch of bytes until either a match is found
+	/*
+	 * Compare a batch of bytes until either a match is found
 	 * or all the bytes have been compared.
 	 */
 	for (uintptr_t offset = 0; offset < len; offset += amt) {
 		uintptr_t ptr1 = (uintptr_t)hva + offset;
 
-		/* Determine host address for guest virtual address
+		/*
+		 * Determine host address for guest virtual address
 		 * at offset.
 		 */
 		uintptr_t ptr2 = (uintptr_t)addr_gva2hva(vm, gva + offset);
 
-		/* Determine amount to compare on this pass.
+		/*
+		 * Determine amount to compare on this pass.
 		 * Don't allow the comparsion to cross a page boundary.
 		 */
 		amt = len - offset;
@@ -410,7 +415,8 @@ int kvm_memcmp_hva_gva(void *hva,
 		assert((ptr1 >> vm->page_shift) == ((ptr1 + amt - 1) >> vm->page_shift));
 		assert((ptr2 >> vm->page_shift) == ((ptr2 + amt - 1) >> vm->page_shift));
 
-		/* Perform the comparison.  If there is a difference
+		/*
+		 * Perform the comparison.  If there is a difference
 		 * return that result to the caller, otherwise need
 		 * to continue on looking for a mismatch.
 		 */
@@ -419,109 +425,15 @@ int kvm_memcmp_hva_gva(void *hva,
 			return ret;
 	}
 
-	/* No mismatch found.  Let the caller know the two memory
+	/*
+	 * No mismatch found.  Let the caller know the two memory
 	 * areas are equal.
 	 */
 	return 0;
 }
 
-/* Allocate an instance of struct kvm_cpuid2
- *
- * Input Args: None
- *
- * Output Args: None
- *
- * Return: A pointer to the allocated struct. The caller is responsible
- * for freeing this struct.
- *
- * Since kvm_cpuid2 uses a 0-length array to allow a the size of the
- * array to be decided at allocation time, allocation is slightly
- * complicated. This function uses a reasonable default length for
- * the array and performs the appropriate allocation.
- */
-static struct kvm_cpuid2 *allocate_kvm_cpuid2(void)
-{
-	struct kvm_cpuid2 *cpuid;
-	int nent = 100;
-	size_t size;
-
-	size = sizeof(*cpuid);
-	size += nent * sizeof(struct kvm_cpuid_entry2);
-	cpuid = malloc(size);
-	if (!cpuid) {
-		perror("malloc");
-		abort();
-	}
-
-	cpuid->nent = nent;
-
-	return cpuid;
-}
-
-/* KVM Supported CPUID Get
- *
- * Input Args: None
- *
- * Output Args:
- *
- * Return: The supported KVM CPUID
- *
- * Get the guest CPUID supported by KVM.
- */
-struct kvm_cpuid2 *kvm_get_supported_cpuid(void)
-{
-	static struct kvm_cpuid2 *cpuid;
-	int ret;
-	int kvm_fd;
-
-	if (cpuid)
-		return cpuid;
-
-	cpuid = allocate_kvm_cpuid2();
-	kvm_fd = open(KVM_DEV_PATH, O_RDONLY);
-	if (kvm_fd < 0)
-		exit(KSFT_SKIP);
-
-	ret = ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid);
-	TEST_ASSERT(ret == 0, "KVM_GET_SUPPORTED_CPUID failed %d %d\n",
-		    ret, errno);
-
-	close(kvm_fd);
-	return cpuid;
-}
-
-/* Locate a cpuid entry.
- *
- * Input Args:
- *   cpuid: The cpuid.
- *   function: The function of the cpuid entry to find.
- *
- * Output Args: None
- *
- * Return: A pointer to the cpuid entry. Never returns NULL.
- */
-struct kvm_cpuid_entry2 *
-kvm_get_supported_cpuid_index(uint32_t function, uint32_t index)
-{
-	struct kvm_cpuid2 *cpuid;
-	struct kvm_cpuid_entry2 *entry = NULL;
-	int i;
-
-	cpuid = kvm_get_supported_cpuid();
-	for (i = 0; i < cpuid->nent; i++) {
-		if (cpuid->entries[i].function == function &&
-		    cpuid->entries[i].index == index) {
-			entry = &cpuid->entries[i];
-			break;
-		}
-	}
-
-	TEST_ASSERT(entry, "Guest CPUID entry not found: (EAX=%x, ECX=%x).",
-		    function, index);
-	return entry;
-}
-
-/* VM Userspace Memory Region Add
+/*
+ * VM Userspace Memory Region Add
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -563,7 +475,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 		"  vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		guest_paddr, npages, vm->max_gfn, vm->page_size);
 
-	/* Confirm a mem region with an overlapping address doesn't
+	/*
+	 * Confirm a mem region with an overlapping address doesn't
 	 * already exist.
 	 */
 	region = (struct userspace_mem_region *) userspace_mem_region_find(
@@ -654,7 +567,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	vm->userspace_mem_region_head = region;
 }
 
-/* Memslot to region
+/*
+ * Memslot to region
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -668,8 +582,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
  *   on error (e.g. currently no memory region using memslot as a KVM
  *   memory slot ID).
  */
-static struct userspace_mem_region *memslot2region(struct kvm_vm *vm,
-	uint32_t memslot)
+static struct userspace_mem_region *
+memslot2region(struct kvm_vm *vm, uint32_t memslot)
 {
 	struct userspace_mem_region *region;
 
@@ -689,7 +603,8 @@ static struct userspace_mem_region *memslot2region(struct kvm_vm *vm,
 	return region;
 }
 
-/* VM Memory Region Flags Set
+/*
+ * VM Memory Region Flags Set
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -707,7 +622,6 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
 	int ret;
 	struct userspace_mem_region *region;
 
-	/* Locate memory region. */
 	region = memslot2region(vm, slot);
 
 	region->region.flags = flags;
@@ -719,7 +633,8 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
 		ret, errno, slot, flags);
 }
 
-/* VCPU mmap Size
+/*
+ * VCPU mmap Size
  *
  * Input Args: None
  *
@@ -749,7 +664,8 @@ static int vcpu_mmap_sz(void)
 	return ret;
 }
 
-/* VM VCPU Add
+/*
+ * VM VCPU Add
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -762,7 +678,8 @@ static int vcpu_mmap_sz(void)
  * Creates and adds to the VM specified by vm and virtual CPU with
  * the ID given by vcpuid.
  */
-void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid, int pgd_memslot, int gdt_memslot)
+void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid, int pgd_memslot,
+		 int gdt_memslot)
 {
 	struct vcpu *vcpu;
 
@@ -800,7 +717,8 @@ void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid, int pgd_memslot, int gdt_me
 	vcpu_setup(vm, vcpuid, pgd_memslot, gdt_memslot);
 }
 
-/* VM Virtual Address Unused Gap
+/*
+ * VM Virtual Address Unused Gap
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -820,14 +738,14 @@ void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid, int pgd_memslot, int gdt_me
  * sz unallocated bytes >= vaddr_min is available.
  */
 static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
-	vm_vaddr_t vaddr_min)
+				      vm_vaddr_t vaddr_min)
 {
 	uint64_t pages = (sz + vm->page_size - 1) >> vm->page_shift;
 
 	/* Determine lowest permitted virtual page index. */
 	uint64_t pgidx_start = (vaddr_min + vm->page_size - 1) >> vm->page_shift;
 	if ((pgidx_start * vm->page_size) < vaddr_min)
-			goto no_va_found;
+		goto no_va_found;
 
 	/* Loop over section with enough valid virtual page indexes. */
 	if (!sparsebit_is_set_num(vm->vpages_valid,
@@ -886,7 +804,8 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 	return pgidx_start * vm->page_size;
 }
 
-/* VM Virtual Address Allocate
+/*
+ * VM Virtual Address Allocate
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -907,13 +826,14 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
  * a page.
  */
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
-	uint32_t data_memslot, uint32_t pgd_memslot)
+			  uint32_t data_memslot, uint32_t pgd_memslot)
 {
 	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
 
 	virt_pgd_alloc(vm, pgd_memslot);
 
-	/* Find an unused range of virtual page addresses of at least
+	/*
+	 * Find an unused range of virtual page addresses of at least
 	 * pages in length.
 	 */
 	vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm, sz, vaddr_min);
@@ -967,7 +887,8 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 	}
 }
 
-/* Address VM Physical to Host Virtual
+/*
+ * Address VM Physical to Host Virtual
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -999,7 +920,8 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
 	return NULL;
 }
 
-/* Address Host Virtual to VM Physical
+/*
+ * Address Host Virtual to VM Physical
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1033,7 +955,8 @@ vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
 	return -1;
 }
 
-/* VM Create IRQ Chip
+/*
+ * VM Create IRQ Chip
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1055,7 +978,8 @@ void vm_create_irqchip(struct kvm_vm *vm)
 	vm->has_irqchip = true;
 }
 
-/* VM VCPU State
+/*
+ * VM VCPU State
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1077,7 +1001,8 @@ struct kvm_run *vcpu_state(struct kvm_vm *vm, uint32_t vcpuid)
 	return vcpu->state;
 }
 
-/* VM VCPU Run
+/*
+ * VM VCPU Run
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1103,13 +1028,14 @@ int _vcpu_run(struct kvm_vm *vm, uint32_t vcpuid)
 	int rc;
 
 	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
-        do {
+	do {
 		rc = ioctl(vcpu->fd, KVM_RUN, NULL);
 	} while (rc == -1 && errno == EINTR);
 	return rc;
 }
 
-/* VM VCPU Set MP State
+/*
+ * VM VCPU Set MP State
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1124,7 +1050,7 @@ int _vcpu_run(struct kvm_vm *vm, uint32_t vcpuid)
  * by mp_state.
  */
 void vcpu_set_mp_state(struct kvm_vm *vm, uint32_t vcpuid,
-	struct kvm_mp_state *mp_state)
+		       struct kvm_mp_state *mp_state)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int ret;
@@ -1136,7 +1062,8 @@ void vcpu_set_mp_state(struct kvm_vm *vm, uint32_t vcpuid,
 		"rc: %i errno: %i", ret, errno);
 }
 
-/* VM VCPU Regs Get
+/*
+ * VM VCPU Regs Get
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1150,21 +1077,20 @@ void vcpu_set_mp_state(struct kvm_vm *vm, uint32_t vcpuid,
  * Obtains the current register state for the VCPU specified by vcpuid
  * and stores it at the location given by regs.
  */
-void vcpu_regs_get(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_regs *regs)
+void vcpu_regs_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int ret;
 
 	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
 
-	/* Get the regs. */
 	ret = ioctl(vcpu->fd, KVM_GET_REGS, regs);
 	TEST_ASSERT(ret == 0, "KVM_GET_REGS failed, rc: %i errno: %i",
 		ret, errno);
 }
 
-/* VM VCPU Regs Set
+/*
+ * VM VCPU Regs Set
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1178,99 +1104,46 @@ void vcpu_regs_get(struct kvm_vm *vm,
  * Sets the regs of the VCPU specified by vcpuid to the values
  * given by regs.
  */
-void vcpu_regs_set(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_regs *regs)
+void vcpu_regs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int ret;
 
 	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
 
-	/* Set the regs. */
 	ret = ioctl(vcpu->fd, KVM_SET_REGS, regs);
 	TEST_ASSERT(ret == 0, "KVM_SET_REGS failed, rc: %i errno: %i",
 		ret, errno);
 }
 
 void vcpu_events_get(struct kvm_vm *vm, uint32_t vcpuid,
-			  struct kvm_vcpu_events *events)
+		     struct kvm_vcpu_events *events)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int ret;
 
 	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
 
-	/* Get the regs. */
 	ret = ioctl(vcpu->fd, KVM_GET_VCPU_EVENTS, events);
 	TEST_ASSERT(ret == 0, "KVM_GET_VCPU_EVENTS, failed, rc: %i errno: %i",
 		ret, errno);
 }
 
 void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
-			  struct kvm_vcpu_events *events)
+		     struct kvm_vcpu_events *events)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int ret;
 
 	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
 
-	/* Set the regs. */
 	ret = ioctl(vcpu->fd, KVM_SET_VCPU_EVENTS, events);
 	TEST_ASSERT(ret == 0, "KVM_SET_VCPU_EVENTS, failed, rc: %i errno: %i",
 		ret, errno);
 }
 
-/* VM VCPU Args Set
- *
- * Input Args:
- *   vm - Virtual Machine
- *   vcpuid - VCPU ID
- *   num - number of arguments
- *   ... - arguments, each of type uint64_t
- *
- * Output Args: None
- *
- * Return: None
- *
- * Sets the first num function input arguments to the values
- * given as variable args.  Each of the variable args is expected to
- * be of type uint64_t.
- */
-void vcpu_args_set(struct kvm_vm *vm, uint32_t vcpuid, unsigned int num, ...)
-{
-	va_list ap;
-	struct kvm_regs regs;
-
-	TEST_ASSERT(num >= 1 && num <= 6, "Unsupported number of args,\n"
-		    "  num: %u\n",
-		    num);
-
-	va_start(ap, num);
-	vcpu_regs_get(vm, vcpuid, &regs);
-
-	if (num >= 1)
-		regs.rdi = va_arg(ap, uint64_t);
-
-	if (num >= 2)
-		regs.rsi = va_arg(ap, uint64_t);
-
-	if (num >= 3)
-		regs.rdx = va_arg(ap, uint64_t);
-
-	if (num >= 4)
-		regs.rcx = va_arg(ap, uint64_t);
-
-	if (num >= 5)
-		regs.r8 = va_arg(ap, uint64_t);
-
-	if (num >= 6)
-		regs.r9 = va_arg(ap, uint64_t);
-
-	vcpu_regs_set(vm, vcpuid, &regs);
-	va_end(ap);
-}
-
-/* VM VCPU System Regs Get
+/*
+ * VM VCPU System Regs Get
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1284,22 +1157,20 @@ void vcpu_args_set(struct kvm_vm *vm, uint32_t vcpuid, unsigned int num, ...)
  * Obtains the current system register state for the VCPU specified by
  * vcpuid and stores it at the location given by sregs.
  */
-void vcpu_sregs_get(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_sregs *sregs)
+void vcpu_sregs_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int ret;
 
 	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
 
-	/* Get the regs. */
-	/* Get the regs. */
 	ret = ioctl(vcpu->fd, KVM_GET_SREGS, sregs);
 	TEST_ASSERT(ret == 0, "KVM_GET_SREGS failed, rc: %i errno: %i",
 		ret, errno);
 }
 
-/* VM VCPU System Regs Set
+/*
+ * VM VCPU System Regs Set
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1313,27 +1184,25 @@ void vcpu_sregs_get(struct kvm_vm *vm,
  * Sets the system regs of the VCPU specified by vcpuid to the values
  * given by sregs.
  */
-void vcpu_sregs_set(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_sregs *sregs)
+void vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
 {
 	int ret = _vcpu_sregs_set(vm, vcpuid, sregs);
 	TEST_ASSERT(ret == 0, "KVM_RUN IOCTL failed, "
 		"rc: %i errno: %i", ret, errno);
 }
 
-int _vcpu_sregs_set(struct kvm_vm *vm,
-	uint32_t vcpuid, struct kvm_sregs *sregs)
+int _vcpu_sregs_set(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_sregs *sregs)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int ret;
 
 	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
 
-	/* Get the regs. */
 	return ioctl(vcpu->fd, KVM_SET_SREGS, sregs);
 }
 
-/* VCPU Ioctl
+/*
+ * VCPU Ioctl
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1345,8 +1214,8 @@ int _vcpu_sregs_set(struct kvm_vm *vm,
  *
  * Issues an arbitrary ioctl on a VCPU fd.
  */
-void vcpu_ioctl(struct kvm_vm *vm,
-	uint32_t vcpuid, unsigned long cmd, void *arg)
+void vcpu_ioctl(struct kvm_vm *vm, uint32_t vcpuid,
+		unsigned long cmd, void *arg)
 {
 	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int ret;
@@ -1358,7 +1227,8 @@ void vcpu_ioctl(struct kvm_vm *vm,
 		cmd, ret, errno, strerror(errno));
 }
 
-/* VM Ioctl
+/*
+ * VM Ioctl
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1378,7 +1248,8 @@ void vm_ioctl(struct kvm_vm *vm, unsigned long cmd, void *arg)
 		cmd, ret, errno, strerror(errno));
 }
 
-/* VM Dump
+/*
+ * VM Dump
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1425,38 +1296,6 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 		vcpu_dump(stream, vm, vcpu->id, indent + 2);
 }
 
-/* VM VCPU Dump
- *
- * Input Args:
- *   vm - Virtual Machine
- *   vcpuid - VCPU ID
- *   indent - Left margin indent amount
- *
- * Output Args:
- *   stream - Output FILE stream
- *
- * Return: None
- *
- * Dumps the current state of the VCPU specified by vcpuid, within the VM
- * given by vm, to the FILE stream given by stream.
- */
-void vcpu_dump(FILE *stream, struct kvm_vm *vm,
-	uint32_t vcpuid, uint8_t indent)
-{
-		struct kvm_regs regs;
-		struct kvm_sregs sregs;
-
-		fprintf(stream, "%*scpuid: %u\n", indent, "", vcpuid);
-
-		fprintf(stream, "%*sregs:\n", indent + 2, "");
-		vcpu_regs_get(vm, vcpuid, &regs);
-		regs_dump(stream, &regs, indent + 4);
-
-		fprintf(stream, "%*ssregs:\n", indent + 2, "");
-		vcpu_sregs_get(vm, vcpuid, &sregs);
-		sregs_dump(stream, &sregs, indent + 4);
-}
-
 /* Known KVM exit reasons */
 static struct exit_reason {
 	unsigned int reason;
@@ -1487,7 +1326,8 @@ static struct exit_reason {
 #endif
 };
 
-/* Exit Reason String
+/*
+ * Exit Reason String
  *
  * Input Args:
  *   exit_reason - Exit reason
@@ -1513,7 +1353,8 @@ const char *exit_reason_str(unsigned int exit_reason)
 	return "Unknown";
 }
 
-/* Physical Page Allocate
+/*
+ * Physical Page Allocate
  *
  * Input Args:
  *   vm - Virtual Machine
@@ -1530,8 +1371,8 @@ const char *exit_reason_str(unsigned int exit_reason)
  * and its address is returned.  A TEST_ASSERT failure occurs if no
  * page is available at or above paddr_min.
  */
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm,
-	vm_paddr_t paddr_min, uint32_t memslot)
+vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
+			     uint32_t memslot)
 {
 	struct userspace_mem_region *region;
 	sparsebit_idx_t pg;
@@ -1541,17 +1382,15 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm,
 		"  paddr_min: 0x%lx page_size: 0x%x",
 		paddr_min, vm->page_size);
 
-	/* Locate memory region. */
 	region = memslot2region(vm, memslot);
-
-	/* Locate next available physical page at or above paddr_min. */
 	pg = paddr_min >> vm->page_shift;
 
+	/* Locate next available physical page at or above paddr_min. */
 	if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
 		pg = sparsebit_next_set(region->unused_phy_pages, pg);
 		if (pg == 0) {
 			fprintf(stderr, "No guest physical page available, "
-				"paddr_min: 0x%lx page_size: 0x%x memslot: %u",
+				"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
 				paddr_min, vm->page_size, memslot);
 			fputs("---- vm dump ----\n", stderr);
 			vm_dump(stderr, vm, 2);
@@ -1565,7 +1404,8 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm,
 	return pg * vm->page_size;
 }
 
-/* Address Guest Virtual to Host Virtual
+/*
+ * Address Guest Virtual to Host Virtual
  *
  * Input Args:
  *   vm - Virtual Machine
diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
index 0278315cd930..62b9dc1926dc 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h
+++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
@@ -11,18 +11,19 @@
 
 #include "sparsebit.h"
 
+#define KVM_DEV_PATH		"/dev/kvm"
+
 #ifndef BITS_PER_BYTE
-#define BITS_PER_BYTE           8
+#define BITS_PER_BYTE		8
 #endif
 
 #ifndef BITS_PER_LONG
-#define BITS_PER_LONG (BITS_PER_BYTE * sizeof(long))
+#define BITS_PER_LONG		(BITS_PER_BYTE * sizeof(long))
 #endif
 
 #define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
-#define BITS_TO_LONGS(nr)       DIV_ROUND_UP(nr, BITS_PER_LONG)
+#define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_LONG)
 
-/* Concrete definition of struct kvm_vm. */
 struct userspace_mem_region {
 	struct userspace_mem_region *next, *prev;
 	struct kvm_userspace_memory_region region;
@@ -52,7 +53,6 @@ struct kvm_vm {
 	struct userspace_mem_region *userspace_mem_region_head;
 	struct sparsebit *vpages_valid;
 	struct sparsebit *vpages_mapped;
-
 	bool has_irqchip;
 	bool pgd_created;
 	vm_paddr_t pgd;
@@ -60,13 +60,11 @@ struct kvm_vm {
 	vm_vaddr_t tss;
 };
 
-struct vcpu *vcpu_find(struct kvm_vm *vm,
-	uint32_t vcpuid);
-void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot, int gdt_memslot);
+struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid);
+void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot,
+		int gdt_memslot);
 void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
-void regs_dump(FILE *stream, struct kvm_regs *regs,
-	uint8_t indent);
-void sregs_dump(FILE *stream, struct kvm_sregs *sregs,
-	uint8_t indent);
+void regs_dump(FILE *stream, struct kvm_regs *regs, uint8_t indent);
+void sregs_dump(FILE *stream, struct kvm_sregs *sregs, uint8_t indent);
 
 #endif /* SELFTEST_KVM_UTIL_INTERNAL_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 79c2c5c203c0..d96b5f9cc344 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -672,6 +672,102 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
 	vcpu_set_mp_state(vm, vcpuid, &mp_state);
 }
 
+/* Allocate an instance of struct kvm_cpuid2
+ *
+ * Input Args: None
+ *
+ * Output Args: None
+ *
+ * Return: A pointer to the allocated struct. The caller is responsible
+ * for freeing this struct.
+ *
+ * Since kvm_cpuid2 uses a 0-length array to allow the size of the
+ * array to be decided at allocation time, allocation is slightly
+ * complicated. This function uses a reasonable default length for
+ * the array and performs the appropriate allocation.
+ */
+static struct kvm_cpuid2 *allocate_kvm_cpuid2(void)
+{
+	struct kvm_cpuid2 *cpuid;
+	int nent = 100;
+	size_t size;
+
+	size = sizeof(*cpuid);
+	size += nent * sizeof(struct kvm_cpuid_entry2);
+	cpuid = malloc(size);
+	if (!cpuid) {
+		perror("malloc");
+		abort();
+	}
+
+	cpuid->nent = nent;
+
+	return cpuid;
+}
+
+/* KVM Supported CPUID Get
+ *
+ * Input Args: None
+ *
+ * Output Args: None
+ *
+ * Return: The supported KVM CPUID
+ *
+ * Get the guest CPUID supported by KVM.
+ */
+struct kvm_cpuid2 *kvm_get_supported_cpuid(void)
+{
+	static struct kvm_cpuid2 *cpuid;
+	int ret;
+	int kvm_fd;
+
+	if (cpuid)
+		return cpuid;
+
+	cpuid = allocate_kvm_cpuid2();
+	kvm_fd = open(KVM_DEV_PATH, O_RDONLY);
+	if (kvm_fd < 0)
+		exit(KSFT_SKIP);
+
+	ret = ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid);
+	TEST_ASSERT(ret == 0, "KVM_GET_SUPPORTED_CPUID failed %d %d\n",
+		    ret, errno);
+
+	close(kvm_fd);
+	return cpuid;
+}
+
+/* Locate a cpuid entry.
+ *
+ * Input Args:
+ *   function: The function of the cpuid entry to find.
+ *   index: The index of the cpuid entry to find.
+ *
+ * Output Args: None
+ *
+ * Return: A pointer to the cpuid entry. Never returns NULL.
+ */
+struct kvm_cpuid_entry2 *
+kvm_get_supported_cpuid_index(uint32_t function, uint32_t index)
+{
+	struct kvm_cpuid2 *cpuid;
+	struct kvm_cpuid_entry2 *entry = NULL;
+	int i;
+
+	cpuid = kvm_get_supported_cpuid();
+	for (i = 0; i < cpuid->nent; i++) {
+		if (cpuid->entries[i].function == function &&
+		    cpuid->entries[i].index == index) {
+			entry = &cpuid->entries[i];
+			break;
+		}
+	}
+
+	TEST_ASSERT(entry, "Guest CPUID entry not found: (EAX=%x, ECX=%x).",
+		    function, index);
+	return entry;
+}
+
 /* VM VCPU CPUID Set
  *
  * Input Args:
@@ -698,6 +794,57 @@ void vcpu_set_cpuid(struct kvm_vm *vm,
 		    rc, errno);
 
 }
+
+/* VM VCPU Args Set
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   vcpuid - VCPU ID
+ *   num - number of arguments
+ *   ... - arguments, each of type uint64_t
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Sets the first num function input arguments to the values
+ * given as variable args.  Each of the variable args is expected to
+ * be of type uint64_t.
+ */
+void vcpu_args_set(struct kvm_vm *vm, uint32_t vcpuid, unsigned int num, ...)
+{
+	va_list ap;
+	struct kvm_regs regs;
+
+	TEST_ASSERT(num >= 1 && num <= 6, "Unsupported number of args,\n"
+		    "  num: %u\n",
+		    num);
+
+	va_start(ap, num);
+	vcpu_regs_get(vm, vcpuid, &regs);
+
+	if (num >= 1)
+		regs.rdi = va_arg(ap, uint64_t);
+
+	if (num >= 2)
+		regs.rsi = va_arg(ap, uint64_t);
+
+	if (num >= 3)
+		regs.rdx = va_arg(ap, uint64_t);
+
+	if (num >= 4)
+		regs.rcx = va_arg(ap, uint64_t);
+
+	if (num >= 5)
+		regs.r8 = va_arg(ap, uint64_t);
+
+	if (num >= 6)
+		regs.r9 = va_arg(ap, uint64_t);
+
+	vcpu_regs_set(vm, vcpuid, &regs);
+	va_end(ap);
+}
+
 /* Create a VM with reasonable defaults
  *
  * Input Args:
@@ -742,6 +889,38 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
 	return vm;
 }
 
+/*
+ * VM VCPU Dump
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   vcpuid - VCPU ID
+ *   indent - Left margin indent amount
+ *
+ * Output Args:
+ *   stream - Output FILE stream
+ *
+ * Return: None
+ *
+ * Dumps the current state of the VCPU specified by vcpuid, within the VM
+ * given by vm, to the FILE stream given by stream.
+ */
+void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid, uint8_t indent)
+{
+	struct kvm_regs regs;
+	struct kvm_sregs sregs;
+
+	fprintf(stream, "%*scpuid: %u\n", indent, "", vcpuid);
+
+	fprintf(stream, "%*sregs:\n", indent + 2, "");
+	vcpu_regs_get(vm, vcpuid, &regs);
+	regs_dump(stream, &regs, indent + 4);
+
+	fprintf(stream, "%*ssregs:\n", indent + 2, "");
+	vcpu_sregs_get(vm, vcpuid, &sregs);
+	sregs_dump(stream, &sregs, indent + 4);
+}
+
 struct kvm_x86_state {
 	struct kvm_vcpu_events events;
 	struct kvm_mp_state mp_state;
diff --git a/tools/testing/selftests/kvm/x86_64/dirty_log_test.c b/tools/testing/selftests/kvm/x86_64/dirty_log_test.c
index 7cf3e4ae6046..395a6e07d37c 100644
--- a/tools/testing/selftests/kvm/x86_64/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86_64/dirty_log_test.c
@@ -15,6 +15,7 @@
 
 #include "test_util.h"
 #include "kvm_util.h"
+#include "processor.h"
 
 #define  DEBUG                 printf
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 06/13] kvm: selftests: add vm_phy_pages_alloc
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (4 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 05/13] kvm: selftests: tidy up kvm_util Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 07/13] kvm: selftests: add virt mem support for aarch64 Andrew Jones
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  2 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 60 ++++++++++++-------
 2 files changed, 39 insertions(+), 23 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index fcde5cf48d06..752715a78a9b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -109,6 +109,8 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		 uint32_t pgd_memslot);
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot);
+vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			      vm_paddr_t paddr_min, uint32_t memslot);
 
 struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_size,
 				 void *guest_code);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 19649ad6e015..ea5e5ea316de 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1354,10 +1354,11 @@ const char *exit_reason_str(unsigned int exit_reason)
 }
 
 /*
- * Physical Page Allocate
+ * Physically Contiguous Page Allocator
  *
  * Input Args:
  *   vm - Virtual Machine
+ *   num - number of pages
  *   paddr_min - Physical address minimum
  *   memslot - Memory region to allocate page from
  *
@@ -1366,16 +1367,18 @@ const char *exit_reason_str(unsigned int exit_reason)
  * Return:
  *   Starting physical address
  *
- * Within the VM specified by vm, locates an available physical page
- * at or above paddr_min.  If found, the page is marked as in use
- * and its address is returned.  A TEST_ASSERT failure occurs if no
- * page is available at or above paddr_min.
+ * Within the VM specified by vm, locates a range of available physical
+ * pages at or above paddr_min. If found, the pages are marked as in use
+ * and their base address is returned. A TEST_ASSERT failure occurs if
+ * not enough pages are available at or above paddr_min.
  */
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
-			     uint32_t memslot)
+vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			      vm_paddr_t paddr_min, uint32_t memslot)
 {
 	struct userspace_mem_region *region;
-	sparsebit_idx_t pg;
+	sparsebit_idx_t pg, base;
+
+	TEST_ASSERT(num > 0, "Must allocate at least one page");
 
 	TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
 		"not divisible by page size.\n"
@@ -1383,25 +1386,36 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 		paddr_min, vm->page_size);
 
 	region = memslot2region(vm, memslot);
-	pg = paddr_min >> vm->page_shift;
-
-	/* Locate next available physical page at or above paddr_min. */
-	if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
-		pg = sparsebit_next_set(region->unused_phy_pages, pg);
-		if (pg == 0) {
-			fprintf(stderr, "No guest physical page available, "
-				"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
-				paddr_min, vm->page_size, memslot);
-			fputs("---- vm dump ----\n", stderr);
-			vm_dump(stderr, vm, 2);
-			abort();
+	base = pg = paddr_min >> vm->page_shift;
+
+	do {
+		for (; pg < base + num; ++pg) {
+			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
+				base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
+				break;
+			}
 		}
+	} while (pg && pg != base + num);
+
+	if (pg == 0) {
+		fprintf(stderr, "No guest physical page available, "
+			"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
+			paddr_min, vm->page_size, memslot);
+		fputs("---- vm dump ----\n", stderr);
+		vm_dump(stderr, vm, 2);
+		abort();
 	}
 
-	/* Specify page as in use and return its address. */
-	sparsebit_clear(region->unused_phy_pages, pg);
+	for (pg = base; pg < base + num; ++pg)
+		sparsebit_clear(region->unused_phy_pages, pg);
+
+	return base * vm->page_size;
+}
 
-	return pg * vm->page_size;
+vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
+			     uint32_t memslot)
+{
+	return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
 }
 
 /*
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 07/13] kvm: selftests: add virt mem support for aarch64
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (5 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 06/13] kvm: selftests: add vm_phy_pages_alloc Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 08/13] kvm: selftests: add vcpu " Andrew Jones
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../selftests/kvm/lib/aarch64/processor.c     | 216 ++++++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    |   2 +
 .../selftests/kvm/lib/kvm_util_internal.h     |   2 +
 3 files changed, 220 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/lib/aarch64/processor.c

diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
new file mode 100644
index 000000000000..464d4d074249
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -0,0 +1,216 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * AArch64 code
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ */
+
+#include "kvm_util.h"
+#include "../kvm_util_internal.h"
+
+#define KVM_GUEST_PAGE_TABLE_MIN_PADDR		0x180000
+
+static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
+{
+	return (v + vm->page_size - 1) & ~(vm->page_size - 1);
+}
+
+static uint64_t pgd_index(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
+	uint64_t mask = (1UL << (vm->va_bits - shift)) - 1;
+
+	return (gva >> shift) & mask;
+}
+
+static uint64_t pud_index(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	unsigned int shift = 2 * (vm->page_shift - 3) + vm->page_shift;
+	uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
+
+	TEST_ASSERT(vm->pgtable_levels == 4,
+		"Mode %d does not have 4 page table levels", vm->mode);
+
+	return (gva >> shift) & mask;
+}
+
+static uint64_t pmd_index(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	unsigned int shift = (vm->page_shift - 3) + vm->page_shift;
+	uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
+
+	TEST_ASSERT(vm->pgtable_levels >= 3,
+		"Mode %d does not have >= 3 page table levels", vm->mode);
+
+	return (gva >> shift) & mask;
+}
+
+static uint64_t pte_index(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
+	return (gva >> vm->page_shift) & mask;
+}
+
+static uint64_t pte_addr(struct kvm_vm *vm, uint64_t entry)
+{
+	uint64_t mask = ((1UL << (vm->va_bits - vm->page_shift)) - 1) << vm->page_shift;
+	return entry & mask;
+}
+
+static uint64_t ptrs_per_pgd(struct kvm_vm *vm)
+{
+	unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
+	return 1 << (vm->va_bits - shift);
+}
+
+static uint64_t ptrs_per_pte(struct kvm_vm *vm)
+{
+	return 1 << (vm->page_shift - 3);
+}
+
+void virt_pgd_alloc(struct kvm_vm *vm, uint32_t pgd_memslot)
+{
+	int rc;
+
+	if (!vm->pgd_created) {
+		vm_paddr_t paddr = vm_phy_pages_alloc(vm,
+			page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size,
+			KVM_GUEST_PAGE_TABLE_MIN_PADDR, pgd_memslot);
+		vm->pgd = paddr;
+		vm->pgd_created = true;
+	}
+}
+
+void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		  uint32_t pgd_memslot, uint64_t flags)
+{
+	uint8_t attr_idx = flags & 7;
+	uint64_t *ptep;
+
+	TEST_ASSERT((vaddr % vm->page_size) == 0,
+		"Virtual address not on page boundary,\n"
+		"  vaddr: 0x%lx vm->page_size: 0x%x", vaddr, vm->page_size);
+	TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
+		(vaddr >> vm->page_shift)),
+		"Invalid virtual address, vaddr: 0x%lx", vaddr);
+	TEST_ASSERT((paddr % vm->page_size) == 0,
+		"Physical address not on page boundary,\n"
+		"  paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
+	TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
+		"Physical address beyond maximum supported,\n"
+		"  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+		paddr, vm->max_gfn, vm->page_size);
+
+	ptep = addr_gpa2hva(vm, vm->pgd) + pgd_index(vm, vaddr) * 8;
+	if (!*ptep) {
+		*ptep = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, pgd_memslot);
+		*ptep |= 3;
+	}
+
+	switch (vm->pgtable_levels) {
+	case 4:
+		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, vaddr) * 8;
+		if (!*ptep) {
+			*ptep = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, pgd_memslot);
+			*ptep |= 3;
+		}
+		/* fall through */
+	case 3:
+		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pmd_index(vm, vaddr) * 8;
+		if (!*ptep) {
+			*ptep = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, pgd_memslot);
+			*ptep |= 3;
+		}
+		/* fall through */
+	case 2:
+		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pte_index(vm, vaddr) * 8;
+		break;
+	default:
+		TEST_ASSERT(false, "Page table levels must be 2, 3, or 4");
+	}
+
+	*ptep = paddr | 3;
+	*ptep |= (attr_idx << 2) | (1 << 10) /* Access Flag */;
+}
+
+void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		 uint32_t pgd_memslot)
+{
+	uint64_t attr_idx = 4; /* NORMAL (See DEFAULT_MAIR_EL1) */
+
+	_virt_pg_map(vm, vaddr, paddr, pgd_memslot, attr_idx);
+}
+
+vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+	uint64_t *ptep;
+
+	if (!vm->pgd_created)
+		goto unmapped_gva;
+
+	ptep = addr_gpa2hva(vm, vm->pgd) + pgd_index(vm, gva) * 8;
+	if (!ptep)
+		goto unmapped_gva;
+
+	switch (vm->pgtable_levels) {
+	case 4:
+		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, gva) * 8;
+		if (!ptep)
+			goto unmapped_gva;
+		/* fall through */
+	case 3:
+		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pmd_index(vm, gva) * 8;
+		if (!ptep)
+			goto unmapped_gva;
+		/* fall through */
+	case 2:
+		ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pte_index(vm, gva) * 8;
+		if (!ptep)
+			goto unmapped_gva;
+		break;
+	default:
+		TEST_ASSERT(false, "Page table levels must be 2, 3, or 4");
+	}
+
+	return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
+
+unmapped_gva:
+	TEST_ASSERT(false, "No mapping for vm virtual address, "
+		    "gva: 0x%lx", gva);
+}
+
+static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t page, int level)
+{
+#ifdef DEBUG_VM
+	static const char * const type[] = { "", "pud", "pmd", "pte" };
+	uint64_t pte, *ptep;
+
+	if (level == 4)
+		return;
+
+	for (pte = page; pte < page + ptrs_per_pte(vm) * 8; pte += 8) {
+		ptep = addr_gpa2hva(vm, pte);
+		if (!*ptep)
+			continue;
+		printf("%*s%s: %lx: %lx at %p\n", indent, "", type[level], pte, *ptep, ptep);
+		pte_dump(stream, vm, indent + 1, pte_addr(vm, *ptep), level + 1);
+	}
+#endif
+}
+
+void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+{
+	int level = 4 - (vm->pgtable_levels - 1);
+	uint64_t pgd, *ptep;
+
+	if (!vm->pgd_created)
+		return;
+
+	for (pgd = vm->pgd; pgd < vm->pgd + ptrs_per_pgd(vm) * 8; pgd += 8) {
+		ptep = addr_gpa2hva(vm, pgd);
+		if (!*ptep)
+			continue;
+		printf("%*spgd: %lx: %lx at %p\n", indent, "", pgd, *ptep, ptep);
+		pte_dump(stream, vm, indent + 1, pte_addr(vm, *ptep), level);
+	}
+}
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index ea5e5ea316de..b5e9eb7360a2 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -106,6 +106,8 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	/* Setup mode specific traits. */
 	switch (vm->mode) {
 	case VM_MODE_FLAT48PG:
+		vm->pgtable_levels = 4;
+		vm->va_bits = 48;
 		vm->page_size = 0x1000;
 		vm->page_shift = 12;
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
index 62b9dc1926dc..813145f3e4b9 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h
+++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
@@ -46,6 +46,8 @@ struct kvm_vm {
 	int mode;
 	int kvm_fd;
 	int fd;
+	unsigned int pgtable_levels;
+	unsigned int va_bits;
 	unsigned int page_size;
 	unsigned int page_shift;
 	uint64_t max_gfn;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 08/13] kvm: selftests: add vcpu support for aarch64
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (6 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 07/13] kvm: selftests: add virt mem support for aarch64 Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 09/13] kvm: selftests: port dirty_log_test to aarch64 Andrew Jones
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

This code adds VM and VCPU setup code for the VM_MODE_FLAT48PG mode.
The VM_MODE_FLAT48PG isn't yet fully supportable, as it defines the
guest physical address limit as 52-bits, and KVM currently only
supports guests with up to 40-bit physical addresses (see
KVM_PHYS_SHIFT). VM_MODE_FLAT48PG will work fine, though, as long as
no >= 40-bit physical addresses are used.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../selftests/kvm/include/aarch64/processor.h | 55 ++++++++++++
 .../selftests/kvm/lib/aarch64/processor.c     | 83 +++++++++++++++++++
 2 files changed, 138 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/aarch64/processor.h

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
new file mode 100644
index 000000000000..9ef2ab1a0c08
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * AArch64 processor specific defines
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ */
+#ifndef SELFTEST_KVM_PROCESSOR_H
+#define SELFTEST_KVM_PROCESSOR_H
+
+#include "kvm_util.h"
+
+
+#define ARM64_CORE_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
+			   KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
+
+#define CPACR_EL1	3, 0,  1, 0, 2
+#define TCR_EL1		3, 0,  2, 0, 2
+#define MAIR_EL1	3, 0, 10, 2, 0
+#define TTBR0_EL1	3, 0,  2, 0, 0
+#define SCTLR_EL1	3, 0,  1, 0, 0
+
+/*
+ * Default MAIR
+ *                  index   attribute
+ * DEVICE_nGnRnE      0     0000:0000
+ * DEVICE_nGnRE       1     0000:0100
+ * DEVICE_GRE         2     0000:1100
+ * NORMAL_NC          3     0100:0100
+ * NORMAL             4     1111:1111
+ * NORMAL_WT          5     1011:1011
+ */
+#define DEFAULT_MAIR_EL1 ((0x00ul << (0 * 8)) | \
+			  (0x04ul << (1 * 8)) | \
+			  (0x0cul << (2 * 8)) | \
+			  (0x44ul << (3 * 8)) | \
+			  (0xfful << (4 * 8)) | \
+			  (0xbbul << (5 * 8)))
+
+static inline void get_reg(struct kvm_vm *vm, uint32_t vcpuid, uint64_t id, uint64_t *addr)
+{
+	struct kvm_one_reg reg;
+	reg.id = id;
+	reg.addr = (uint64_t)addr;
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &reg);
+}
+
+static inline void set_reg(struct kvm_vm *vm, uint32_t vcpuid, uint64_t id, uint64_t val)
+{
+	struct kvm_one_reg reg;
+	reg.id = id;
+	reg.addr = (uint64_t)&val;
+	vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &reg);
+}
+
+#endif /* SELFTEST_KVM_PROCESSOR_H */
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 464d4d074249..871fe2173679 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -5,10 +5,14 @@
  * Copyright (C) 2018, Red Hat, Inc.
  */
 
+#define _GNU_SOURCE /* for program_invocation_name */
+
 #include "kvm_util.h"
 #include "../kvm_util_internal.h"
+#include "processor.h"
 
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR		0x180000
+#define DEFAULT_ARM64_GUEST_STACK_VADDR_MIN	0xac0000
 
 static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
 {
@@ -214,3 +218,82 @@ void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 		pte_dump(stream, vm, indent + 1, pte_addr(vm, *ptep), level);
 	}
 }
+
+struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
+				 void *guest_code)
+{
+	uint64_t ptrs_per_4k_pte = 512;
+	uint64_t extra_pg_pages = (extra_mem_pages / ptrs_per_4k_pte) * 2;
+	struct kvm_vm *vm;
+
+	vm = vm_create(VM_MODE_FLAT48PG, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+
+	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
+	vm_vcpu_add_default(vm, vcpuid, guest_code);
+
+	return vm;
+}
+
+void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
+{
+	size_t stack_size = vm->page_size == 4096 ?
+					DEFAULT_STACK_PGS * vm->page_size :
+					vm->page_size;
+	uint64_t stack_vaddr = vm_vaddr_alloc(vm, stack_size,
+					DEFAULT_ARM64_GUEST_STACK_VADDR_MIN, 0, 0);
+
+	vm_vcpu_add(vm, vcpuid, 0, 0);
+
+	set_reg(vm, vcpuid, ARM64_CORE_REG(sp_el1), stack_vaddr + stack_size);
+	set_reg(vm, vcpuid, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+}
+
+void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot, int gdt_memslot)
+{
+	struct kvm_vcpu_init init;
+	uint64_t sctlr_el1, tcr_el1;
+
+	memset(&init, 0, sizeof(init));
+	init.target = KVM_ARM_TARGET_GENERIC_V8;
+	vcpu_ioctl(vm, vcpuid, KVM_ARM_VCPU_INIT, &init);
+
+	/*
+	 * Enable FP/ASIMD to avoid trapping when accessing Q0-Q15
+	 * registers, which the variable argument list macros do.
+	 */
+	set_reg(vm, vcpuid, ARM64_SYS_REG(CPACR_EL1), 3 << 20);
+
+	get_reg(vm, vcpuid, ARM64_SYS_REG(SCTLR_EL1), &sctlr_el1);
+	get_reg(vm, vcpuid, ARM64_SYS_REG(TCR_EL1), &tcr_el1);
+
+	switch (vm->mode) {
+	case VM_MODE_FLAT48PG:
+		tcr_el1 |= 0ul << 14; /* TG0 = 4KB */
+		tcr_el1 |= 6ul << 32; /* IPS = 52 bits */
+		break;
+	default:
+		TEST_ASSERT(false, "Unknown guest mode, mode: 0x%x", vm->mode);
+	}
+
+	sctlr_el1 |= (1 << 0) | (1 << 2) | (1 << 12) /* M | C | I */;
+	/* TCR_EL1 |= IRGN0:WBWA | ORGN0:WBWA | SH0:Inner-Shareable */;
+	tcr_el1 |= (1 << 8) | (1 << 10) | (3 << 12);
+	tcr_el1 |= (64 - vm->va_bits) /* T0SZ */;
+
+	set_reg(vm, vcpuid, ARM64_SYS_REG(SCTLR_EL1), sctlr_el1);
+	set_reg(vm, vcpuid, ARM64_SYS_REG(TCR_EL1), tcr_el1);
+	set_reg(vm, vcpuid, ARM64_SYS_REG(MAIR_EL1), DEFAULT_MAIR_EL1);
+	set_reg(vm, vcpuid, ARM64_SYS_REG(TTBR0_EL1), vm->pgd);
+}
+
+void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid, uint8_t indent)
+{
+	uint64_t pstate, pc;
+
+	get_reg(vm, vcpuid, ARM64_CORE_REG(regs.pstate), &pstate);
+	get_reg(vm, vcpuid, ARM64_CORE_REG(regs.pc), &pc);
+
+	fprintf(stream, "%*spstate: 0x%.16llx pc: 0x%.16llx\n",
+		indent, "", pstate, pc);
+
+}
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 09/13] kvm: selftests: port dirty_log_test to aarch64
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (7 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 08/13] kvm: selftests: add vcpu " Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 10/13] kvm: selftests: introduce new VM mode for 64K pages Andrew Jones
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

While we're messing with the code for the port and to support guest
page sizes that are less than the host page size, we also make some
code formatting cleanups and apply sync_global_to_guest().

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/.gitignore        |   2 +-
 tools/testing/selftests/kvm/Makefile          |   5 +-
 .../kvm/{x86_64 => }/dirty_log_test.c         | 163 +++++++++---------
 3 files changed, 90 insertions(+), 80 deletions(-)
 rename tools/testing/selftests/kvm/{x86_64 => }/dirty_log_test.c (65%)

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 9b3933fb5043..c228400cba07 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -3,4 +3,4 @@
 /x86_64/sync_regs_test
 /x86_64/vmx_tsc_adjust_test
 /x86_64/state_test
-/x86_64/dirty_log_test
+/dirty_log_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index d603d7865cd1..3875639025fe 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -5,13 +5,16 @@ UNAME_M := $(shell uname -m)
 
 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/ucall.c lib/sparsebit.c
 LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c
+LIBKVM_aarch64 = lib/aarch64/processor.c
 
 TEST_GEN_PROGS_x86_64 = x86_64/set_sregs_test
 TEST_GEN_PROGS_x86_64 += x86_64/sync_regs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test
 TEST_GEN_PROGS_x86_64 += x86_64/cr4_cpuid_sync_test
 TEST_GEN_PROGS_x86_64 += x86_64/state_test
-TEST_GEN_PROGS_x86_64 += x86_64/dirty_log_test
+TEST_GEN_PROGS_x86_64 += dirty_log_test
+
+TEST_GEN_PROGS_aarch64 += dirty_log_test
 
 TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
 LIBKVM += $(LIBKVM_$(UNAME_M))
diff --git a/tools/testing/selftests/kvm/x86_64/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
similarity index 65%
rename from tools/testing/selftests/kvm/x86_64/dirty_log_test.c
rename to tools/testing/selftests/kvm/dirty_log_test.c
index 395a6e07d37c..d90f1091f687 100644
--- a/tools/testing/selftests/kvm/x86_64/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -17,75 +17,75 @@
 #include "kvm_util.h"
 #include "processor.h"
 
-#define  DEBUG                 printf
+#define DEBUG printf
+
+#define VCPU_ID				1
 
-#define  VCPU_ID                        1
 /* The memory slot index to track dirty pages */
-#define  TEST_MEM_SLOT_INDEX            1
+#define TEST_MEM_SLOT_INDEX		1
+
 /*
  * GPA offset of the testing memory slot. Must be bigger than the
  * default vm mem slot, which is DEFAULT_GUEST_PHY_PAGES.
  */
-#define  TEST_MEM_OFFSET                (1ULL << 30) /* 1G */
+#define TEST_MEM_OFFSET			(1ul << 30) /* 1G */
+
 /* Size of the testing memory slot */
-#define  TEST_MEM_PAGES                 (1ULL << 18) /* 1G for 4K pages */
+#define TEST_MEM_PAGES			(1ul << 18) /* 1G for 4K pages */
+
 /* How many pages to dirty for each guest loop */
-#define  TEST_PAGES_PER_LOOP            1024
+#define TEST_PAGES_PER_LOOP		1024
+
 /* How many host loops to run (one KVM_GET_DIRTY_LOG for each loop) */
-#define  TEST_HOST_LOOP_N               32
+#define TEST_HOST_LOOP_N		32
+
 /* Interval for each host loop (ms) */
-#define  TEST_HOST_LOOP_INTERVAL        10
+#define TEST_HOST_LOOP_INTERVAL		10
 
 /*
- * Guest variables.  We use these variables to share data between host
- * and guest.  There are two copies of the variables, one in host memory
- * (which is unused) and one in guest memory.  When the host wants to
- * access these variables, it needs to call addr_gva2hva() to access the
- * guest copy.
+ * Guest/Host shared variables. Ensure addr_gva2hva() and/or
+ * sync_global_to/from_guest() are used when accessing from
+ * the host. READ/WRITE_ONCE() should also be used with anything
+ * that may change.
  */
-uint64_t guest_random_array[TEST_PAGES_PER_LOOP];
-uint64_t guest_iteration;
-uint64_t guest_page_size;
+static uint64_t host_page_size;
+static uint64_t guest_page_size;
+static uint64_t random_array[TEST_PAGES_PER_LOOP];
+static uint64_t iteration;
 
 /*
- * Writes to the first byte of a random page within the testing memory
- * region continuously.
+ * Continuously write to the first 8 bytes of a random page within
+ * the testing memory region.
  */
-void guest_code(void)
+static void guest_code(void)
 {
-	int i = 0;
-	uint64_t volatile *array = guest_random_array;
-	uint64_t volatile *guest_addr;
+	int i;
 
 	while (true) {
 		for (i = 0; i < TEST_PAGES_PER_LOOP; i++) {
-			/*
-			 * Write to the first 8 bytes of a random page
-			 * on the testing memory region.
-			 */
-			guest_addr = (uint64_t *)
-			    (TEST_MEM_OFFSET +
-			     (array[i] % TEST_MEM_PAGES) * guest_page_size);
-			*guest_addr = guest_iteration;
+			uint64_t addr = TEST_MEM_OFFSET;
+			addr += (READ_ONCE(random_array[i]) % TEST_MEM_PAGES)
+				* guest_page_size;
+			addr &= ~(host_page_size - 1);
+			*(uint64_t *)addr = READ_ONCE(iteration);
 		}
+
 		/* Tell the host that we need more random numbers */
 		GUEST_SYNC(1);
 	}
 }
 
-/*
- * Host variables.  These variables should only be used by the host
- * rather than the guest.
- */
-bool host_quit;
+/* Host variables */
+static bool host_quit;
 
 /* Points to the test VM memory region on which we track dirty logs */
-void *host_test_mem;
+static void *host_test_mem;
+static uint64_t host_num_pages;
 
 /* For statistics only */
-uint64_t host_dirty_count;
-uint64_t host_clear_count;
-uint64_t host_track_next_count;
+static uint64_t host_dirty_count;
+static uint64_t host_clear_count;
+static uint64_t host_track_next_count;
 
 /*
  * We use this bitmap to track some pages that should have its dirty
@@ -94,39 +94,34 @@ uint64_t host_track_next_count;
  * page bit is cleared in the latest bitmap, then the system must
  * report that write in the next get dirty log call.
  */
-unsigned long *host_bmap_track;
+static unsigned long *host_bmap_track;
 
-void generate_random_array(uint64_t *guest_array, uint64_t size)
+static void generate_random_array(uint64_t *guest_array, uint64_t size)
 {
 	uint64_t i;
 
-	for (i = 0; i < size; i++) {
+	for (i = 0; i < size; i++)
 		guest_array[i] = random();
-	}
 }
 
-void *vcpu_worker(void *data)
+static void *vcpu_worker(void *data)
 {
 	int ret;
-	uint64_t loops, *guest_array, pages_count = 0;
 	struct kvm_vm *vm = data;
+	uint64_t *guest_array;
+	uint64_t pages_count = 0;
 	struct kvm_run *run;
 	struct ucall uc;
 
 	run = vcpu_state(vm, VCPU_ID);
 
-	/* Retrieve the guest random array pointer and cache it */
-	guest_array = addr_gva2hva(vm, (vm_vaddr_t)guest_random_array);
-
-	DEBUG("VCPU starts\n");
-
+	guest_array = addr_gva2hva(vm, (vm_vaddr_t)random_array);
 	generate_random_array(guest_array, TEST_PAGES_PER_LOOP);
 
 	while (!READ_ONCE(host_quit)) {
-		/* Let the guest to dirty these random pages */
+		/* Let the guest dirty the random pages */
 		ret = _vcpu_run(vm, VCPU_ID);
-		if (run->exit_reason == KVM_EXIT_IO &&
-		    get_ucall(vm, VCPU_ID, &uc) == UCALL_SYNC) {
+		if (get_ucall(vm, VCPU_ID, &uc) == UCALL_SYNC) {
 			pages_count += TEST_PAGES_PER_LOOP;
 			generate_random_array(guest_array, TEST_PAGES_PER_LOOP);
 		} else {
@@ -137,18 +132,18 @@ void *vcpu_worker(void *data)
 		}
 	}
 
-	DEBUG("VCPU exits, dirtied %"PRIu64" pages\n", pages_count);
+	DEBUG("Dirtied %"PRIu64" pages\n", pages_count);
 
 	return NULL;
 }
 
-void vm_dirty_log_verify(unsigned long *bmap, uint64_t iteration)
+static void vm_dirty_log_verify(unsigned long *bmap)
 {
 	uint64_t page;
-	uint64_t volatile *value_ptr;
+	uint64_t *value_ptr;
 
-	for (page = 0; page < TEST_MEM_PAGES; page++) {
-		value_ptr = host_test_mem + page * getpagesize();
+	for (page = 0; page < host_num_pages; page++) {
+		value_ptr = host_test_mem + page * host_page_size;
 
 		/* If this is a special page that we were tracking... */
 		if (test_and_clear_bit(page, host_bmap_track)) {
@@ -208,7 +203,7 @@ void vm_dirty_log_verify(unsigned long *bmap, uint64_t iteration)
 	}
 }
 
-void help(char *name)
+static void help(char *name)
 {
 	puts("");
 	printf("usage: %s [-i iterations] [-I interval] [-h]\n", name);
@@ -225,9 +220,9 @@ int main(int argc, char *argv[])
 {
 	pthread_t vcpu_thread;
 	struct kvm_vm *vm;
-	uint64_t volatile *psize, *iteration;
-	unsigned long *bmap, iterations = TEST_HOST_LOOP_N,
-	    interval = TEST_HOST_LOOP_INTERVAL;
+	unsigned long iterations = TEST_HOST_LOOP_N;
+	unsigned long interval = TEST_HOST_LOOP_INTERVAL;
+	unsigned long *bmap;
 	int opt;
 
 	while ((opt = getopt(argc, argv, "hi:I:")) != -1) {
@@ -245,16 +240,21 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	TEST_ASSERT(iterations > 2, "Iteration must be bigger than zero\n");
-	TEST_ASSERT(interval > 0, "Interval must be bigger than zero");
+	TEST_ASSERT(iterations > 2, "Iterations must be greater than two");
+	TEST_ASSERT(interval > 0, "Interval must be greater than zero");
 
 	DEBUG("Test iterations: %"PRIu64", interval: %"PRIu64" (ms)\n",
 	      iterations, interval);
 
 	srandom(time(0));
 
-	bmap = bitmap_alloc(TEST_MEM_PAGES);
-	host_bmap_track = bitmap_alloc(TEST_MEM_PAGES);
+	guest_page_size = 4096;
+	host_page_size = getpagesize();
+	host_num_pages = (TEST_MEM_PAGES * guest_page_size) / host_page_size +
+			 !!((TEST_MEM_PAGES * guest_page_size) % host_page_size);
+
+	bmap = bitmap_alloc(host_num_pages);
+	host_bmap_track = bitmap_alloc(host_num_pages);
 
 	vm = vm_create_default(VCPU_ID, TEST_MEM_PAGES, guest_code);
 
@@ -264,32 +264,38 @@ int main(int argc, char *argv[])
 				    TEST_MEM_SLOT_INDEX,
 				    TEST_MEM_PAGES,
 				    KVM_MEM_LOG_DIRTY_PAGES);
-	/* Cache the HVA pointer of the region */
-	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)TEST_MEM_OFFSET);
 
 	/* Do 1:1 mapping for the dirty track memory slot */
 	virt_map(vm, TEST_MEM_OFFSET, TEST_MEM_OFFSET,
-		 TEST_MEM_PAGES * getpagesize(), 0);
+		 TEST_MEM_PAGES * guest_page_size, 0);
+
+	/* Cache the HVA pointer of the region */
+	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)TEST_MEM_OFFSET);
 
+#ifdef __x86_64__
 	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
+#endif
+#ifdef __aarch64__
+	ucall_init(vm, UCALL_MMIO, NULL);
+#endif
 
-	/* Tell the guest about the page size on the system */
-	psize = addr_gva2hva(vm, (vm_vaddr_t)&guest_page_size);
-	*psize = getpagesize();
+	/* Tell the guest about the page sizes */
+	sync_global_to_guest(vm, host_page_size);
+	sync_global_to_guest(vm, guest_page_size);
 
 	/* Start the iterations */
-	iteration = addr_gva2hva(vm, (vm_vaddr_t)&guest_iteration);
-	*iteration = 1;
+	iteration = 1;
+	sync_global_to_guest(vm, iteration);
 
-	/* Start dirtying pages */
 	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
 
-	while (*iteration < iterations) {
+	while (iteration < iterations) {
 		/* Give the vcpu thread some time to dirty some pages */
 		usleep(interval * 1000);
 		kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap);
-		vm_dirty_log_verify(bmap, *iteration);
-		(*iteration)++;
+		vm_dirty_log_verify(bmap);
+		iteration++;
+		sync_global_to_guest(vm, iteration);
 	}
 
 	/* Tell the vcpu thread to quit */
@@ -302,6 +308,7 @@ int main(int argc, char *argv[])
 
 	free(bmap);
 	free(host_bmap_track);
+	ucall_uninit(vm);
 	kvm_vm_free(vm);
 
 	return 0;
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 10/13] kvm: selftests: introduce new VM mode for 64K pages
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (8 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 09/13] kvm: selftests: port dirty_log_test to aarch64 Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 11/13] kvm: selftests: dirty_log_test: also test 64K pages on aarch64 Andrew Jones
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Rename VM_MODE_FLAT48PG to be more descriptive of its config and add a
new config that has the same parameters, except with 64K pages.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  7 ++-
 .../selftests/kvm/lib/aarch64/processor.c     |  8 +++-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 47 ++++++++++++-------
 .../selftests/kvm/lib/kvm_util_internal.h     |  1 +
 .../selftests/kvm/lib/x86_64/processor.c      | 10 ++--
 5 files changed, 48 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 752715a78a9b..6e3dff9d9d94 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -34,9 +34,14 @@ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
 #define DEFAULT_STACK_PGS		5
 
 enum vm_guest_mode {
-	VM_MODE_FLAT48PG,
+	VM_MODE_P52V48_4K,
+	VM_MODE_P52V48_64K,
+	NUM_VM_MODES,
 };
 
+#define vm_guest_mode_string(m) vm_guest_mode_string[m]
+extern const char * const vm_guest_mode_string[];
+
 enum vm_mem_backing_src_type {
 	VM_MEM_SRC_ANONYMOUS,
 	VM_MEM_SRC_ANONYMOUS_THP,
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 871fe2173679..b1dfc0d4b68e 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -226,7 +226,7 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
 	uint64_t extra_pg_pages = (extra_mem_pages / ptrs_per_4k_pte) * 2;
 	struct kvm_vm *vm;
 
-	vm = vm_create(VM_MODE_FLAT48PG, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	vm = vm_create(VM_MODE_P52V48_4K, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
 
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 	vm_vcpu_add_default(vm, vcpuid, guest_code);
@@ -267,10 +267,14 @@ void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot, int gdt_memslot)
 	get_reg(vm, vcpuid, ARM64_SYS_REG(TCR_EL1), &tcr_el1);
 
 	switch (vm->mode) {
-	case VM_MODE_FLAT48PG:
+	case VM_MODE_P52V48_4K:
 		tcr_el1 |= 0ul << 14; /* TG0 = 4KB */
 		tcr_el1 |= 6ul << 32; /* IPS = 52 bits */
 		break;
+	case VM_MODE_P52V48_64K:
+		tcr_el1 |= 1ul << 14; /* TG0 = 64KB */
+		tcr_el1 |= 6ul << 32; /* IPS = 52 bits */
+		break;
 	default:
 		TEST_ASSERT(false, "Unknown guest mode, mode: 0x%x", vm->mode);
 	}
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index b5e9eb7360a2..4d3bb515de17 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -17,7 +17,7 @@
 #include <linux/kernel.h>
 
 #define KVM_UTIL_PGS_PER_HUGEPG 512
-#define KVM_UTIL_MIN_PADDR      0x2000
+#define KVM_UTIL_MIN_PFN	2
 
 /* Aligns x up to the next multiple of size. Size must be a power of 2. */
 static void *align(void *x, size_t size)
@@ -73,11 +73,16 @@ static void vm_open(struct kvm_vm *vm, int perm)
 		"rc: %i errno: %i", vm->fd, errno);
 }
 
+const char * const vm_guest_mode_string[] = {
+	"PA-bits:52, VA-bits:48, 4K pages",
+	"PA-bits:52, VA-bits:48, 64K pages",
+};
+
 /*
  * VM Create
  *
  * Input Args:
- *   mode - VM Mode (e.g. VM_MODE_FLAT48PG)
+ *   mode - VM Mode (e.g. VM_MODE_P52V48_4K)
  *   phy_pages - Physical memory pages
  *   perm - permission
  *
@@ -86,7 +91,7 @@ static void vm_open(struct kvm_vm *vm, int perm)
  * Return:
  *   Pointer to opaque structure that describes the created VM.
  *
- * Creates a VM with the mode specified by mode (e.g. VM_MODE_FLAT48PG).
+ * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K).
  * When phy_pages is non-zero, a memory region of phy_pages physical pages
  * is created and mapped starting at guest physical address 0.  The file
  * descriptor to control the created VM is created with the permissions
@@ -105,28 +110,35 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 
 	/* Setup mode specific traits. */
 	switch (vm->mode) {
-	case VM_MODE_FLAT48PG:
+	case VM_MODE_P52V48_4K:
 		vm->pgtable_levels = 4;
+		vm->pa_bits = 52;
 		vm->va_bits = 48;
 		vm->page_size = 0x1000;
 		vm->page_shift = 12;
-
-		/* Limit to 48-bit canonical virtual addresses. */
-		vm->vpages_valid = sparsebit_alloc();
-		sparsebit_set_num(vm->vpages_valid,
-			0, (1ULL << (48 - 1)) >> vm->page_shift);
-		sparsebit_set_num(vm->vpages_valid,
-			(~((1ULL << (48 - 1)) - 1)) >> vm->page_shift,
-			(1ULL << (48 - 1)) >> vm->page_shift);
-
-		/* Limit physical addresses to 52-bits. */
-		vm->max_gfn = ((1ULL << 52) >> vm->page_shift) - 1;
 		break;
-
+	case VM_MODE_P52V48_64K:
+		vm->pgtable_levels = 3;
+		vm->pa_bits = 52;
+		vm->va_bits = 48;
+		vm->page_size = 0x10000;
+		vm->page_shift = 16;
+		break;
 	default:
 		TEST_ASSERT(false, "Unknown guest mode, mode: 0x%x", mode);
 	}
 
+	/* Limit to VA-bit canonical virtual addresses. */
+	vm->vpages_valid = sparsebit_alloc();
+	sparsebit_set_num(vm->vpages_valid,
+		0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+	sparsebit_set_num(vm->vpages_valid,
+		(~((1ULL << (vm->va_bits - 1)) - 1)) >> vm->page_shift,
+		(1ULL << (vm->va_bits - 1)) >> vm->page_shift);
+
+	/* Limit physical addresses to PA-bits. */
+	vm->max_gfn = ((1ULL << vm->pa_bits) >> vm->page_shift) - 1;
+
 	/* Allocate and setup memory for guest. */
 	vm->vpages_mapped = sparsebit_alloc();
 	if (phy_pages != 0)
@@ -845,7 +857,8 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
 		pages--, vaddr += vm->page_size) {
 		vm_paddr_t paddr;
 
-		paddr = vm_phy_page_alloc(vm, KVM_UTIL_MIN_PADDR, data_memslot);
+		paddr = vm_phy_page_alloc(vm,
+				KVM_UTIL_MIN_PFN * vm->page_size, data_memslot);
 
 		virt_pg_map(vm, vaddr, paddr, pgd_memslot);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
index 813145f3e4b9..5e05fb98dc62 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h
+++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
@@ -47,6 +47,7 @@ struct kvm_vm {
 	int kvm_fd;
 	int fd;
 	unsigned int pgtable_levels;
+	unsigned int pa_bits;
 	unsigned int va_bits;
 	unsigned int page_size;
 	unsigned int page_shift;
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index d96b5f9cc344..7092d1389dda 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -231,7 +231,7 @@ void virt_pgd_alloc(struct kvm_vm *vm, uint32_t pgd_memslot)
 {
 	int rc;
 
-	TEST_ASSERT(vm->mode == VM_MODE_FLAT48PG, "Attempt to use "
+	TEST_ASSERT(vm->mode == VM_MODE_P52V48_4K, "Attempt to use "
 		"unknown or unsupported guest mode, mode: 0x%x", vm->mode);
 
 	/* If needed, create page map l4 table. */
@@ -264,7 +264,7 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 	uint16_t index[4];
 	struct pageMapL4Entry *pml4e;
 
-	TEST_ASSERT(vm->mode == VM_MODE_FLAT48PG, "Attempt to use "
+	TEST_ASSERT(vm->mode == VM_MODE_P52V48_4K, "Attempt to use "
 		"unknown or unsupported guest mode, mode: 0x%x", vm->mode);
 
 	TEST_ASSERT((vaddr % vm->page_size) == 0,
@@ -551,7 +551,7 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	struct pageTableEntry *pte;
 	void *hva;
 
-	TEST_ASSERT(vm->mode == VM_MODE_FLAT48PG, "Attempt to use "
+	TEST_ASSERT(vm->mode == VM_MODE_P52V48_4K, "Attempt to use "
 		"unknown or unsupported guest mode, mode: 0x%x", vm->mode);
 
 	index[0] = (gva >> 12) & 0x1ffu;
@@ -624,7 +624,7 @@ void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot, int gdt_memslot)
 	kvm_setup_gdt(vm, &sregs.gdt, gdt_memslot, pgd_memslot);
 
 	switch (vm->mode) {
-	case VM_MODE_FLAT48PG:
+	case VM_MODE_P52V48_4K:
 		sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
 		sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
 		sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
@@ -873,7 +873,7 @@ struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
 	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
 
 	/* Create VM */
-	vm = vm_create(VM_MODE_FLAT48PG,
+	vm = vm_create(VM_MODE_P52V48_4K,
 		       DEFAULT_GUEST_PHY_PAGES + extra_pg_pages,
 		       O_RDWR);
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 11/13] kvm: selftests: dirty_log_test: also test 64K pages on aarch64
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (9 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 10/13] kvm: selftests: introduce new VM mode for 64K pages Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 12/13] kvm: selftests: stop lying to aarch64 tests about PA-bits Andrew Jones
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 183 ++++++++++++++-----
 1 file changed, 137 insertions(+), 46 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index d90f1091f687..3afc7d607a2e 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -5,6 +5,8 @@
  * Copyright (C) 2018, Red Hat, Inc.
  */
 
+#define _GNU_SOURCE /* for program_invocation_name */
+
 #include <stdio.h>
 #include <stdlib.h>
 #include <unistd.h>
@@ -30,9 +32,6 @@
  */
 #define TEST_MEM_OFFSET			(1ul << 30) /* 1G */
 
-/* Size of the testing memory slot */
-#define TEST_MEM_PAGES			(1ul << 18) /* 1G for 4K pages */
-
 /* How many pages to dirty for each guest loop */
 #define TEST_PAGES_PER_LOOP		1024
 
@@ -50,6 +49,7 @@
  */
 static uint64_t host_page_size;
 static uint64_t guest_page_size;
+static uint64_t guest_num_pages;
 static uint64_t random_array[TEST_PAGES_PER_LOOP];
 static uint64_t iteration;
 
@@ -64,7 +64,7 @@ static void guest_code(void)
 	while (true) {
 		for (i = 0; i < TEST_PAGES_PER_LOOP; i++) {
 			uint64_t addr = TEST_MEM_OFFSET;
-			addr += (READ_ONCE(random_array[i]) % TEST_MEM_PAGES)
+			addr += (READ_ONCE(random_array[i]) % guest_num_pages)
 				* guest_page_size;
 			addr &= ~(host_page_size - 1);
 			*(uint64_t *)addr = READ_ONCE(iteration);
@@ -141,8 +141,10 @@ static void vm_dirty_log_verify(unsigned long *bmap)
 {
 	uint64_t page;
 	uint64_t *value_ptr;
+	uint64_t step = host_page_size >= guest_page_size ? 1 :
+				guest_page_size / host_page_size;
 
-	for (page = 0; page < host_num_pages; page++) {
+	for (page = 0; page < host_num_pages; page += step) {
 		value_ptr = host_test_mem + page * host_page_size;
 
 		/* If this is a special page that we were tracking... */
@@ -203,71 +205,64 @@ static void vm_dirty_log_verify(unsigned long *bmap)
 	}
 }
 
-static void help(char *name)
+static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
+				uint64_t extra_mem_pages, void *guest_code)
 {
-	puts("");
-	printf("usage: %s [-i iterations] [-I interval] [-h]\n", name);
-	puts("");
-	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
-	       TEST_HOST_LOOP_N);
-	printf(" -I: specify interval in ms (default: %"PRIu64" ms)\n",
-	       TEST_HOST_LOOP_INTERVAL);
-	puts("");
-	exit(0);
+	struct kvm_vm *vm;
+	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
+
+	vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
+#ifdef __x86_64__
+	vm_create_irqchip(vm);
+#endif
+	vm_vcpu_add_default(vm, vcpuid, guest_code);
+	return vm;
 }
 
-int main(int argc, char *argv[])
+static void run_test(enum vm_guest_mode mode, unsigned long iterations,
+		     unsigned long interval)
 {
+	unsigned int guest_page_shift;
 	pthread_t vcpu_thread;
 	struct kvm_vm *vm;
-	unsigned long iterations = TEST_HOST_LOOP_N;
-	unsigned long interval = TEST_HOST_LOOP_INTERVAL;
 	unsigned long *bmap;
-	int opt;
 
-	while ((opt = getopt(argc, argv, "hi:I:")) != -1) {
-		switch (opt) {
-		case 'i':
-			iterations = strtol(optarg, NULL, 10);
-			break;
-		case 'I':
-			interval = strtol(optarg, NULL, 10);
-			break;
-		case 'h':
-		default:
-			help(argv[0]);
-			break;
-		}
+	switch (mode) {
+	case VM_MODE_P52V48_4K:
+		guest_page_shift = 12;
+		break;
+	case VM_MODE_P52V48_64K:
+		guest_page_shift = 16;
+		break;
+	default:
+		TEST_ASSERT(false, "Unknown guest mode, mode: 0x%x", mode);
 	}
 
-	TEST_ASSERT(iterations > 2, "Iterations must be greater than two");
-	TEST_ASSERT(interval > 0, "Interval must be greater than zero");
-
-	DEBUG("Test iterations: %"PRIu64", interval: %"PRIu64" (ms)\n",
-	      iterations, interval);
-
-	srandom(time(0));
+	DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
-	guest_page_size = 4096;
+	guest_page_size = (1ul << guest_page_shift);
+	/* 1G worth of guest-page-sized pages */
+	guest_num_pages = (1ul << (30 - guest_page_shift));
 	host_page_size = getpagesize();
-	host_num_pages = (TEST_MEM_PAGES * guest_page_size) / host_page_size +
-			 !!((TEST_MEM_PAGES * guest_page_size) % host_page_size);
+	host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
+			 !!((guest_num_pages * guest_page_size) % host_page_size);
 
 	bmap = bitmap_alloc(host_num_pages);
 	host_bmap_track = bitmap_alloc(host_num_pages);
 
-	vm = vm_create_default(VCPU_ID, TEST_MEM_PAGES, guest_code);
+	vm = create_vm(mode, VCPU_ID, guest_num_pages, guest_code);
 
 	/* Add an extra memory slot for testing dirty logging */
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
 				    TEST_MEM_OFFSET,
 				    TEST_MEM_SLOT_INDEX,
-				    TEST_MEM_PAGES,
+				    guest_num_pages,
 				    KVM_MEM_LOG_DIRTY_PAGES);
 
 	/* Do 1:1 mapping for the dirty track memory slot */
 	virt_map(vm, TEST_MEM_OFFSET, TEST_MEM_OFFSET,
-		 TEST_MEM_PAGES * guest_page_size, 0);
+		 guest_num_pages * guest_page_size, 0);
 
 	/* Cache the HVA pointer of the region */
 	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)TEST_MEM_OFFSET);
@@ -279,13 +274,18 @@ int main(int argc, char *argv[])
 	ucall_init(vm, UCALL_MMIO, NULL);
 #endif
 
-	/* Tell the guest about the page sizes */
+	/* Export the shared variables to the guest */
 	sync_global_to_guest(vm, host_page_size);
 	sync_global_to_guest(vm, guest_page_size);
+	sync_global_to_guest(vm, guest_num_pages);
 
 	/* Start the iterations */
 	iteration = 1;
 	sync_global_to_guest(vm, iteration);
+	host_quit = false;
+	host_dirty_count = 0;
+	host_clear_count = 0;
+	host_track_next_count = 0;
 
 	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
 
@@ -310,6 +310,97 @@ int main(int argc, char *argv[])
 	free(host_bmap_track);
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
+}
+
+static struct vm_guest_modes {
+	enum vm_guest_mode mode;
+	bool supported;
+	bool enabled;
+} vm_guest_modes[NUM_VM_MODES] = {
+	{ VM_MODE_P52V48_4K,	1, 1, },
+#ifdef __aarch64__
+	{ VM_MODE_P52V48_64K,	1, 1, },
+#else
+	{ VM_MODE_P52V48_64K,	0, 0, },
+#endif
+};
+
+static void help(char *name)
+{
+	int i;
+
+	puts("");
+	printf("usage: %s [-h] [-i iterations] [-I interval] [-m mode]\n", name);
+	puts("");
+	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
+	       TEST_HOST_LOOP_N);
+	printf(" -I: specify interval in ms (default: %"PRIu64" ms)\n",
+	       TEST_HOST_LOOP_INTERVAL);
+	printf(" -m: specify the guest mode ID to test "
+	       "(default: test all supported modes)\n"
+	       "     This option may be used multiple times.\n"
+	       "     Guest mode IDs:\n");
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		printf("         %d:    %s%s\n",
+		       vm_guest_modes[i].mode,
+		       vm_guest_mode_string(vm_guest_modes[i].mode),
+		       vm_guest_modes[i].supported ? " (supported)" : "");
+	}
+	puts("");
+	exit(0);
+}
+
+int main(int argc, char *argv[])
+{
+	unsigned long iterations = TEST_HOST_LOOP_N;
+	unsigned long interval = TEST_HOST_LOOP_INTERVAL;
+	bool mode_selected = false;
+	unsigned int mode;
+	int opt, i;
+
+	while ((opt = getopt(argc, argv, "hi:I:m:")) != -1) {
+		switch (opt) {
+		case 'i':
+			iterations = strtol(optarg, NULL, 10);
+			break;
+		case 'I':
+			interval = strtol(optarg, NULL, 10);
+			break;
+		case 'm':
+			if (!mode_selected) {
+				for (i = 0; i < NUM_VM_MODES; ++i)
+					vm_guest_modes[i].enabled = 0;
+				mode_selected = true;
+			}
+			mode = strtoul(optarg, NULL, 10);
+			TEST_ASSERT(mode < NUM_VM_MODES,
+				    "Guest mode ID %d too big", mode);
+			vm_guest_modes[mode].enabled = 1;
+			break;
+		case 'h':
+		default:
+			help(argv[0]);
+			break;
+		}
+	}
+
+	TEST_ASSERT(iterations > 2, "Iterations must be greater than two");
+	TEST_ASSERT(interval > 0, "Interval must be greater than zero");
+
+	DEBUG("Test iterations: %"PRIu64", interval: %"PRIu64" (ms)\n",
+	      iterations, interval);
+
+	srandom(time(0));
+
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		if (!vm_guest_modes[i].enabled)
+			continue;
+		TEST_ASSERT(vm_guest_modes[i].supported,
+			    "Guest mode ID %d (%s) not supported.",
+			    vm_guest_modes[i].mode,
+			    vm_guest_mode_string(vm_guest_modes[i].mode));
+		run_test(vm_guest_modes[i].mode, iterations, interval);
+	}
 
 	return 0;
 }
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 12/13] kvm: selftests: stop lying to aarch64 tests about PA-bits
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (10 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 11/13] kvm: selftests: dirty_log_test: also test 64K pages on aarch64 Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-18 17:54 ` [PATCH 13/13] kvm: selftests: support high GPAs in dirty_log_test Andrew Jones
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Let's add the 40 PA-bit versions of the VM modes that AArch64 should
have been using all along, so we can extend the dirty log test without
breaking things.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c     | 13 ++++++++++---
 tools/testing/selftests/kvm/include/kvm_util.h   |  2 ++
 .../selftests/kvm/lib/aarch64/processor.c        |  8 ++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c       | 16 ++++++++++++++++
 4 files changed, 36 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 3afc7d607a2e..61396882ad4e 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -230,9 +230,11 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 
 	switch (mode) {
 	case VM_MODE_P52V48_4K:
+	case VM_MODE_P40V48_4K:
 		guest_page_shift = 12;
 		break;
 	case VM_MODE_P52V48_64K:
+	case VM_MODE_P40V48_64K:
 		guest_page_shift = 16;
 		break;
 	default:
@@ -317,11 +319,16 @@ static struct vm_guest_modes {
 	bool supported;
 	bool enabled;
 } vm_guest_modes[NUM_VM_MODES] = {
+#if defined(__x86_64__)
 	{ VM_MODE_P52V48_4K,	1, 1, },
-#ifdef __aarch64__
-	{ VM_MODE_P52V48_64K,	1, 1, },
-#else
 	{ VM_MODE_P52V48_64K,	0, 0, },
+	{ VM_MODE_P40V48_4K,	0, 0, },
+	{ VM_MODE_P40V48_64K,	0, 0, },
+#elif defined(__aarch64__)
+	{ VM_MODE_P52V48_4K,	0, 0, },
+	{ VM_MODE_P52V48_64K,	0, 0, },
+	{ VM_MODE_P40V48_4K,	1, 1, },
+	{ VM_MODE_P40V48_64K,	1, 1, },
 #endif
 };
 
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 6e3dff9d9d94..d76431322a30 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -36,6 +36,8 @@ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
 enum vm_guest_mode {
 	VM_MODE_P52V48_4K,
 	VM_MODE_P52V48_64K,
+	VM_MODE_P40V48_4K,
+	VM_MODE_P40V48_64K,
 	NUM_VM_MODES,
 };
 
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index b1dfc0d4b68e..b6022e2f116e 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -275,6 +275,14 @@ void vcpu_setup(struct kvm_vm *vm, int vcpuid, int pgd_memslot, int gdt_memslot)
 		tcr_el1 |= 1ul << 14; /* TG0 = 64KB */
 		tcr_el1 |= 6ul << 32; /* IPS = 52 bits */
 		break;
+	case VM_MODE_P40V48_4K:
+		tcr_el1 |= 0ul << 14; /* TG0 = 4KB */
+		tcr_el1 |= 2ul << 32; /* IPS = 40 bits */
+		break;
+	case VM_MODE_P40V48_64K:
+		tcr_el1 |= 1ul << 14; /* TG0 = 64KB */
+		tcr_el1 |= 2ul << 32; /* IPS = 40 bits */
+		break;
 	default:
 		TEST_ASSERT(false, "Unknown guest mode, mode: 0x%x", vm->mode);
 	}
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 4d3bb515de17..dd8c8ed087b6 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -76,6 +76,8 @@ static void vm_open(struct kvm_vm *vm, int perm)
 const char * const vm_guest_mode_string[] = {
 	"PA-bits:52, VA-bits:48, 4K pages",
 	"PA-bits:52, VA-bits:48, 64K pages",
+	"PA-bits:40, VA-bits:48, 4K pages",
+	"PA-bits:40, VA-bits:48, 64K pages",
 };
 
 /*
@@ -124,6 +126,20 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 		vm->page_size = 0x10000;
 		vm->page_shift = 16;
 		break;
+	case VM_MODE_P40V48_4K:
+		vm->pgtable_levels = 4;
+		vm->pa_bits = 40;
+		vm->va_bits = 48;
+		vm->page_size = 0x1000;
+		vm->page_shift = 12;
+		break;
+	case VM_MODE_P40V48_64K:
+		vm->pgtable_levels = 3;
+		vm->pa_bits = 40;
+		vm->va_bits = 48;
+		vm->page_size = 0x10000;
+		vm->page_shift = 16;
+		break;
 	default:
 		TEST_ASSERT(false, "Unknown guest mode, mode: 0x%x", mode);
 	}
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 13/13] kvm: selftests: support high GPAs in dirty_log_test
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (11 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 12/13] kvm: selftests: stop lying to aarch64 tests about PA-bits Andrew Jones
@ 2018-09-18 17:54 ` Andrew Jones
  2018-09-19 12:37 ` [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
  2018-10-29 17:40 ` Christoffer Dall
  14 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-09-18 17:54 UTC (permalink / raw)
  To: kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c | 65 +++++++++++++++-----
 1 file changed, 50 insertions(+), 15 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 61396882ad4e..d59820cc2d39 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -26,11 +26,8 @@
 /* The memory slot index to track dirty pages */
 #define TEST_MEM_SLOT_INDEX		1
 
-/*
- * GPA offset of the testing memory slot. Must be bigger than the
- * default vm mem slot, which is DEFAULT_GUEST_PHY_PAGES.
- */
-#define TEST_MEM_OFFSET			(1ul << 30) /* 1G */
+/* Default guest test memory offset, 1G */
+#define DEFAULT_GUEST_TEST_MEM		0x40000000
 
 /* How many pages to dirty for each guest loop */
 #define TEST_PAGES_PER_LOOP		1024
@@ -53,6 +50,12 @@ static uint64_t guest_num_pages;
 static uint64_t random_array[TEST_PAGES_PER_LOOP];
 static uint64_t iteration;
 
+/*
+ * GPA offset of the testing memory slot. Must be bigger than
+ * DEFAULT_GUEST_PHY_PAGES.
+ */
+static uint64_t guest_test_mem = DEFAULT_GUEST_TEST_MEM;
+
 /*
  * Continuously write to the first 8 bytes of a random pages within
  * the testing memory region.
@@ -63,7 +66,7 @@ static void guest_code(void)
 
 	while (true) {
 		for (i = 0; i < TEST_PAGES_PER_LOOP; i++) {
-			uint64_t addr = TEST_MEM_OFFSET;
+			uint64_t addr = guest_test_mem;
 			addr += (READ_ONCE(random_array[i]) % guest_num_pages)
 				* guest_page_size;
 			addr &= ~(host_page_size - 1);
@@ -221,20 +224,29 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 }
 
 static void run_test(enum vm_guest_mode mode, unsigned long iterations,
-		     unsigned long interval)
+		     unsigned long interval, bool top_offset)
 {
-	unsigned int guest_page_shift;
+	unsigned int guest_pa_bits, guest_page_shift;
 	pthread_t vcpu_thread;
 	struct kvm_vm *vm;
+	uint64_t max_gfn;
 	unsigned long *bmap;
 
 	switch (mode) {
 	case VM_MODE_P52V48_4K:
-	case VM_MODE_P40V48_4K:
+		guest_pa_bits = 52;
 		guest_page_shift = 12;
 		break;
 	case VM_MODE_P52V48_64K:
+		guest_pa_bits = 52;
+		guest_page_shift = 16;
+		break;
+	case VM_MODE_P40V48_4K:
+		guest_pa_bits = 40;
+		guest_page_shift = 12;
+		break;
 	case VM_MODE_P40V48_64K:
+		guest_pa_bits = 40;
 		guest_page_shift = 16;
 		break;
 	default:
@@ -243,6 +255,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 
 	DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
+	max_gfn = (1ul << (guest_pa_bits - guest_page_shift)) - 1;
 	guest_page_size = (1ul << guest_page_shift);
 	/* 1G of guest page sized pages */
 	guest_num_pages = (1ul << (30 - guest_page_shift));
@@ -250,6 +263,13 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
 			 !!((guest_num_pages * guest_page_size) % host_page_size);
 
+	if (top_offset) {
+		guest_test_mem = (max_gfn - guest_num_pages) * guest_page_size;
+		guest_test_mem &= ~(host_page_size - 1);
+	}
+
+	DEBUG("guest test mem offset: 0x%lx\n", guest_test_mem);
+
 	bmap = bitmap_alloc(host_num_pages);
 	host_bmap_track = bitmap_alloc(host_num_pages);
 
@@ -257,17 +277,17 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 
 	/* Add an extra memory slot for testing dirty logging */
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
-				    TEST_MEM_OFFSET,
+				    guest_test_mem,
 				    TEST_MEM_SLOT_INDEX,
 				    guest_num_pages,
 				    KVM_MEM_LOG_DIRTY_PAGES);
 
 	/* Do 1:1 mapping for the dirty track memory slot */
-	virt_map(vm, TEST_MEM_OFFSET, TEST_MEM_OFFSET,
+	virt_map(vm, guest_test_mem, guest_test_mem,
 		 guest_num_pages * guest_page_size, 0);
 
 	/* Cache the HVA pointer of the region */
-	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)TEST_MEM_OFFSET);
+	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_mem);
 
 #ifdef __x86_64__
 	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
@@ -279,6 +299,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	/* Export the shared variables to the guest */
 	sync_global_to_guest(vm, host_page_size);
 	sync_global_to_guest(vm, guest_page_size);
+	sync_global_to_guest(vm, guest_test_mem);
 	sync_global_to_guest(vm, guest_num_pages);
 
 	/* Start the iterations */
@@ -337,12 +358,17 @@ static void help(char *name)
 	int i;
 
 	puts("");
-	printf("usage: %s [-h] [-i iterations] [-I interval] [-m mode]\n", name);
+	printf("usage: %s [-h] [-i iterations] [-I interval] "
+	       "[-o offset] [-t] [-m mode]\n", name);
 	puts("");
 	printf(" -i: specify iteration counts (default: %"PRIu64")\n",
 	       TEST_HOST_LOOP_N);
 	printf(" -I: specify interval in ms (default: %"PRIu64" ms)\n",
 	       TEST_HOST_LOOP_INTERVAL);
+	printf(" -o: guest test memory offset (default: 0x%lx)\n",
+	       DEFAULT_GUEST_TEST_MEM);
+	printf(" -t: map guest test memory at the top of the allowed "
+	       "physical address range\n");
 	printf(" -m: specify the guest mode ID to test "
 	       "(default: test all supported modes)\n"
 	       "     This option may be used multiple times.\n"
@@ -362,10 +388,11 @@ int main(int argc, char *argv[])
 	unsigned long iterations = TEST_HOST_LOOP_N;
 	unsigned long interval = TEST_HOST_LOOP_INTERVAL;
 	bool mode_selected = false;
+	bool top_offset = false;
 	unsigned int mode;
 	int opt, i;
 
-	while ((opt = getopt(argc, argv, "hi:I:m:")) != -1) {
+	while ((opt = getopt(argc, argv, "hi:I:o:tm:")) != -1) {
 		switch (opt) {
 		case 'i':
 			iterations = strtol(optarg, NULL, 10);
@@ -373,6 +400,12 @@ int main(int argc, char *argv[])
 		case 'I':
 			interval = strtol(optarg, NULL, 10);
 			break;
+		case 'o':
+			guest_test_mem = strtoull(optarg, NULL, 0);
+			break;
+		case 't':
+			top_offset = true;
+			break;
 		case 'm':
 			if (!mode_selected) {
 				for (i = 0; i < NUM_VM_MODES; ++i)
@@ -393,6 +426,8 @@ int main(int argc, char *argv[])
 
 	TEST_ASSERT(iterations > 2, "Iterations must be greater than two");
 	TEST_ASSERT(interval > 0, "Interval must be greater than zero");
+	TEST_ASSERT(!top_offset || guest_test_mem == DEFAULT_GUEST_TEST_MEM,
+		    "Cannot use both -o [offset] and -t at the same time");
 
 	DEBUG("Test iterations: %"PRIu64", interval: %"PRIu64" (ms)\n",
 	      iterations, interval);
@@ -406,7 +441,7 @@ int main(int argc, char *argv[])
 			    "Guest mode ID %d (%s) not supported.",
 			    vm_guest_modes[i].mode,
 			    vm_guest_mode_string(vm_guest_modes[i].mode));
-		run_test(vm_guest_modes[i].mode, iterations, interval);
+		run_test(vm_guest_modes[i].mode, iterations, interval, top_offset);
 	}
 
 	return 0;
-- 
2.17.1


* Re: [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (12 preceding siblings ...)
  2018-09-18 17:54 ` [PATCH 13/13] kvm: selftests: support high GPAs in dirty_log_test Andrew Jones
@ 2018-09-19 12:37 ` Andrew Jones
  2018-09-19 13:13   ` Suzuki K Poulose
  2018-10-29 17:40 ` Christoffer Dall
  14 siblings, 1 reply; 21+ messages in thread
From: Andrew Jones @ 2018-09-19 12:37 UTC (permalink / raw)
  To: kvm, kvmarm, suzuki.poulose; +Cc: marc.zyngier, pbonzini

On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> This series provides KVM selftests that test dirty log tracking on
> AArch64 for both 4K and 64K guest page sizes. Additionally the
> framework provides an easy way to test dirty log tracking with the
> recently posted dynamic IPA and 52bit IPA series[1].
> 
> The series breaks down into parts as follows:
> 
>  01-02: generalize guest code to host userspace exit support by
>         introducing "ucalls" - hypercalls to userspace
>  03-05: prepare common code for a new architecture
>  06-07: add virtual memory setup support for AArch64
>     08: add vcpu setup support for AArch64
>     09: port the dirty log test to AArch64
>  10-11: add 64K guest page size support for the dirty log test
>  12-13: prepare the dirty log test to also test > 40-bit guest
>         physical address setups by allowing the test memory
>         region to be placed at the top of physical memory
> 
> [1] https://www.spinics.net/lists/arm-kernel/msg676819.html
> 

Hi Suzuki,

Here's an [untested] add-on patch that should provide the means to
test dirty logging with a 52-bit guest physical address space.
Hopefully it'll be as easy as compiling and then running with

 $ ./dirty_log_test -t

Thanks,
drew

From 09ea1d724551a95ef9962d357049ef21ea8c77e8 Mon Sep 17 00:00:00 2001
From: Andrew Jones <drjones@redhat.com>
Date: Wed, 19 Sep 2018 14:23:30 +0200
Subject: [PATCH] kvm: selftests: aarch64: dirty_log_test: test with 52 PA-bits

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 tools/testing/selftests/kvm/dirty_log_test.c  | 24 ++++++++++++++++---
 .../testing/selftests/kvm/include/kvm_util.h  |  2 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 17 +++++++++----
 .../selftests/kvm/lib/kvm_util_internal.h     |  1 +
 4 files changed, 36 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index d59820cc2d39..c11c76e09766 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -82,6 +82,7 @@ static void guest_code(void)
 static bool host_quit;
 
 /* Points to the test VM memory region on which we track dirty logs */
+static uint8_t host_ipa_limit;
 static void *host_test_mem;
 static uint64_t host_num_pages;
 
@@ -209,12 +210,14 @@ static void vm_dirty_log_verify(unsigned long *bmap)
 }
 
 static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
-				uint64_t extra_mem_pages, void *guest_code)
+				uint64_t extra_mem_pages, void *guest_code,
+				unsigned long type)
 {
 	struct kvm_vm *vm;
 	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
 
-	vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages,
+			O_RDWR, type);
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 #ifdef __x86_64__
 	vm_create_irqchip(vm);
@@ -231,15 +234,22 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	struct kvm_vm *vm;
 	uint64_t max_gfn;
 	unsigned long *bmap;
+	unsigned long type = 0;
 
 	switch (mode) {
 	case VM_MODE_P52V48_4K:
 		guest_pa_bits = 52;
 		guest_page_shift = 12;
+#ifdef __aarch64__
+		type = KVM_VM_TYPE_ARM_IPA_SIZE(52);
+#endif
 		break;
 	case VM_MODE_P52V48_64K:
 		guest_pa_bits = 52;
 		guest_page_shift = 16;
+#ifdef __aarch64__
+		type = KVM_VM_TYPE_ARM_IPA_SIZE(52);
+#endif
 		break;
 	case VM_MODE_P40V48_4K:
 		guest_pa_bits = 40;
@@ -273,7 +283,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
 	bmap = bitmap_alloc(host_num_pages);
 	host_bmap_track = bitmap_alloc(host_num_pages);
 
-	vm = create_vm(mode, VCPU_ID, guest_num_pages, guest_code);
+	vm = create_vm(mode, VCPU_ID, guest_num_pages, guest_code, type);
 
 	/* Add an extra memory slot for testing dirty logging */
 	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
@@ -392,6 +402,14 @@ int main(int argc, char *argv[])
 	unsigned int mode;
 	int opt, i;
 
+#ifdef __aarch64__
+	host_ipa_limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
+	if (host_ipa_limit == 52) {
+		vm_guest_modes[VM_MODE_P52V48_4K].supported = 1;
+		vm_guest_modes[VM_MODE_P52V48_64K].supported = 1;
+	}
+#endif
+
 	while ((opt = getopt(argc, argv, "hi:I:o:tm:")) != -1) {
 		switch (opt) {
 		case 'i':
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index d76431322a30..5202fce337e3 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -53,6 +53,8 @@ enum vm_mem_backing_src_type {
 int kvm_check_cap(long cap);
 
 struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm);
+struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
+			  int perm, unsigned long type);
 void kvm_vm_free(struct kvm_vm *vmp);
 void kvm_vm_restart(struct kvm_vm *vmp, int perm);
 void kvm_vm_release(struct kvm_vm *vmp);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 542336db7b4f..e1805c9e0f39 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -62,13 +62,13 @@ int kvm_check_cap(long cap)
 	return ret;
 }
 
-static void vm_open(struct kvm_vm *vm, int perm)
+static void vm_open(struct kvm_vm *vm, int perm, unsigned long type)
 {
 	vm->kvm_fd = open(KVM_DEV_PATH, perm);
 	if (vm->kvm_fd < 0)
 		exit(KSFT_SKIP);
 
-	vm->fd = ioctl(vm->kvm_fd, KVM_CREATE_VM, NULL);
+	vm->fd = ioctl(vm->kvm_fd, KVM_CREATE_VM, type);
 	TEST_ASSERT(vm->fd >= 0, "KVM_CREATE_VM ioctl failed, "
 		"rc: %i errno: %i", vm->fd, errno);
 }
@@ -101,7 +101,8 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
  * descriptor to control the created VM is created with the permissions
  * given by perm (e.g. O_RDWR).
  */
-struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
+struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
+			  int perm, unsigned long type)
 {
 	struct kvm_vm *vm;
 	int kvm_fd;
@@ -110,7 +111,8 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	TEST_ASSERT(vm != NULL, "Insufficent Memory");
 
 	vm->mode = mode;
-	vm_open(vm, perm);
+	vm->type = type;
+	vm_open(vm, perm, type);
 
 	/* Setup mode specific traits. */
 	switch (vm->mode) {
@@ -166,6 +168,11 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	return vm;
 }
 
+struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
+{
+	return _vm_create(mode, phy_pages, perm, 0);
+}
+
 /*
  * VM Restart
  *
@@ -183,7 +190,7 @@ void kvm_vm_restart(struct kvm_vm *vmp, int perm)
 {
 	struct userspace_mem_region *region;
 
-	vm_open(vmp, perm);
+	vm_open(vmp, perm, vmp->type);
 	if (vmp->has_irqchip)
 		vm_create_irqchip(vmp);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
index 5e05fb98dc62..51a56102a5c9 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h
+++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
@@ -44,6 +44,7 @@ struct vcpu {
 
 struct kvm_vm {
 	int mode;
+	unsigned long type;
 	int kvm_fd;
 	int fd;
 	unsigned int pgtable_levels;
-- 
2.17.1


* Re: [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty
  2018-09-19 12:37 ` [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
@ 2018-09-19 13:13   ` Suzuki K Poulose
  0 siblings, 0 replies; 21+ messages in thread
From: Suzuki K Poulose @ 2018-09-19 13:13 UTC (permalink / raw)
  To: Andrew Jones, kvm, kvmarm; +Cc: marc.zyngier, pbonzini

Hi Drew,

On 09/19/2018 01:37 PM, Andrew Jones wrote:
> On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
>> This series provides KVM selftests that test dirty log tracking on
>> AArch64 for both 4K and 64K guest page sizes. Additionally the
>> framework provides an easy way to test dirty log tracking with the
>> recently posted dynamic IPA and 52bit IPA series[1].
>>
>> The series breaks down into parts as follows:
>>
>>   01-02: generalize guest code to host userspace exit support by
>>          introducing "ucalls" - hypercalls to userspace
>>   03-05: prepare common code for a new architecture
>>   06-07: add virtual memory setup support for AArch64
>>      08: add vcpu setup support for AArch64
>>      09: port the dirty log test to AArch64
>>   10-11: add 64K guest page size support for the dirty log test
>>   12-13: prepare the dirty log test to also test > 40-bit guest
>>          physical address setups by allowing the test memory
>>          region to be placed at the top of physical memory
>>
>> [1] https://www.spinics.net/lists/arm-kernel/msg676819.html
>>
> 
> Hi Suzuki,
> 
> Here's an [untested] add-on patch that should provide the means to
> test dirty logging with a 52-bit guest physical address space.
> Hopefully it'll be as easy as compiling and then running with
> 
>   $ ./dirty_log_test -t

I will give this series a spin with 52 IPA and let you know.

Thanks
Suzuki



* Re: [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty
  2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
                   ` (13 preceding siblings ...)
  2018-09-19 12:37 ` [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
@ 2018-10-29 17:40 ` Christoffer Dall
  2018-10-30 17:38   ` Andrew Jones
  14 siblings, 1 reply; 21+ messages in thread
From: Christoffer Dall @ 2018-10-29 17:40 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, marc.zyngier, pbonzini, kvmarm

Hi Drew,

On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> This series provides KVM selftests that test dirty log tracking on
> AArch64 for both 4K and 64K guest page sizes. Additionally the
> framework provides an easy way to test dirty log tracking with the
> recently posted dynamic IPA and 52bit IPA series[1].

I was trying to parse the commit text of patch 2, and I realized that I
don't understand the 'hypercall to userspace' thing at all, which
probably means I have no idea how the selftests work overall.

I then spent a while reading various bits of documentation in the kernel
tree, LWN, etc., only to realize that I don't understand how this test
framework actually works.

Are the selftests modules, userspace programs, or code that is compiled
with the kernel and (somehow?) run from userspace?  I thought the
latter, partially based on your explanation at ELC, but then I don't
understand how the "compile and run" make target works.

Can you help me paint the overall picture, or point me to the piece of
documentation/presentation that explains the high-level picture, which I
must have obviously missed somehow?


Thanks!

    Christoffer

> 
> The series breaks down into parts as follows:
> 
>  01-02: generalize guest code to host userspace exit support by
>         introducing "ucalls" - hypercalls to userspace
>  03-05: prepare common code for a new architecture
>  06-07: add virtual memory setup support for AArch64
>     08: add vcpu setup support for AArch64
>     09: port the dirty log test to AArch64
>  10-11: add 64K guest page size support for the dirty log test
>  12-13: prepare the dirty log test to also test > 40-bit guest
>         physical address setups by allowing the test memory
>         region to be placed at the top of physical memory
> 
> [1] https://www.spinics.net/lists/arm-kernel/msg676819.html
> 
> Thanks,
> drew
> 
> 
> Andrew Jones (13):
>   kvm: selftests: vcpu_setup: set cr4.osfxsr
>   kvm: selftests: introduce ucall
>   kvm: selftests: move arch-specific files to arch-specific locations
>   kvm: selftests: add cscope make target
>   kvm: selftests: tidy up kvm_util
>   kvm: selftests: add vm_phy_pages_alloc
>   kvm: selftests: add virt mem support for aarch64
>   kvm: selftests: add vcpu support for aarch64
>   kvm: selftests: port dirty_log_test to aarch64
>   kvm: selftests: introduce new VM mode for 64K pages
>   kvm: selftests: dirty_log_test: also test 64K pages on aarch64
>   kvm: selftests: stop lying to aarch64 tests about PA-bits
>   kvm: selftests: support high GPAs in dirty_log_test
> 
>  tools/testing/selftests/kvm/.gitignore        |  11 +-
>  tools/testing/selftests/kvm/Makefile          |  36 +-
>  tools/testing/selftests/kvm/dirty_log_test.c  | 374 +++++++++----
>  .../selftests/kvm/include/aarch64/processor.h |  55 ++
>  .../testing/selftests/kvm/include/kvm_util.h  | 166 +++---
>  .../testing/selftests/kvm/include/sparsebit.h |   6 +-
>  .../testing/selftests/kvm/include/test_util.h |   6 +-
>  .../kvm/include/{x86.h => x86_64/processor.h} |  24 +-
>  .../selftests/kvm/include/{ => x86_64}/vmx.h  |   6 +-
>  .../selftests/kvm/lib/aarch64/processor.c     | 311 +++++++++++
>  tools/testing/selftests/kvm/lib/assert.c      |   2 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 499 +++++++-----------
>  .../selftests/kvm/lib/kvm_util_internal.h     |  33 +-
>  tools/testing/selftests/kvm/lib/ucall.c       | 144 +++++
>  .../kvm/lib/{x86.c => x86_64/processor.c}     | 197 ++++++-
>  .../selftests/kvm/lib/{ => x86_64}/vmx.c      |   4 +-
>  .../kvm/{ => x86_64}/cr4_cpuid_sync_test.c    |  14 +-
>  .../kvm/{ => x86_64}/set_sregs_test.c         |   2 +-
>  .../selftests/kvm/{ => x86_64}/state_test.c   |  25 +-
>  .../kvm/{ => x86_64}/sync_regs_test.c         |   2 +-
>  .../kvm/{ => x86_64}/vmx_tsc_adjust_test.c    |  23 +-
>  21 files changed, 1329 insertions(+), 611 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/include/aarch64/processor.h
>  rename tools/testing/selftests/kvm/include/{x86.h => x86_64/processor.h} (98%)
>  rename tools/testing/selftests/kvm/include/{ => x86_64}/vmx.h (99%)
>  create mode 100644 tools/testing/selftests/kvm/lib/aarch64/processor.c
>  create mode 100644 tools/testing/selftests/kvm/lib/ucall.c
>  rename tools/testing/selftests/kvm/lib/{x86.c => x86_64/processor.c} (85%)
>  rename tools/testing/selftests/kvm/lib/{ => x86_64}/vmx.c (99%)
>  rename tools/testing/selftests/kvm/{ => x86_64}/cr4_cpuid_sync_test.c (91%)
>  rename tools/testing/selftests/kvm/{ => x86_64}/set_sregs_test.c (98%)
>  rename tools/testing/selftests/kvm/{ => x86_64}/state_test.c (90%)
>  rename tools/testing/selftests/kvm/{ => x86_64}/sync_regs_test.c (99%)
>  rename tools/testing/selftests/kvm/{ => x86_64}/vmx_tsc_adjust_test.c (91%)
> 
> -- 
> 2.17.1
> 


* Re: [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty
  2018-10-29 17:40 ` Christoffer Dall
@ 2018-10-30 17:38   ` Andrew Jones
  2018-10-30 17:50     ` Paolo Bonzini
  2018-11-01  9:08     ` Christoffer Dall
  0 siblings, 2 replies; 21+ messages in thread
From: Andrew Jones @ 2018-10-30 17:38 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: kvm, marc.zyngier, pbonzini, kvmarm


Hi Christoffer,

Thanks for your interest in these tests. There isn't any documentation
that I know of, but it's a good idea to have some. I'll write something
up soon. I'll also try to answer your questions now.

On Mon, Oct 29, 2018 at 06:40:02PM +0100, Christoffer Dall wrote:
> Hi Drew,
> 
> On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> > This series provides KVM selftests that test dirty log tracking on
> > AArch64 for both 4K and 64K guest page sizes. Additionally the
> > framework provides an easy way to test dirty log tracking with the
> > recently posted dynamic IPA and 52bit IPA series[1].
> 
> I was trying to parse the commit text of patch 2, and I realized that I
> don't understand the 'hypercall to userspace' thing at all, which
> probably means I have no idea how the selftests work overall.

There are three parts to a kvm selftest: 1) the test code which runs in
host userspace and _is_ the kvm userspace used with kvm, 2) the vcpu
thread code which executes KVM_RUN for the guest code and possibly also
some host userspace test code, and 3) the guest code, which is naturally
run in the vcpu thread, but in guest mode.

The need for a "ucall" arises from 2's "possibly also some host userspace
test code". In that case the guest code needs to invoke an exit from guest
mode, not just to kvm, but all the way to kvm userspace. For AArch64, as
you know, this can be done with an MMIO access. Patch 2 generalizes the
concept because on x86 the same exit can be, and is, done with a PIO
access.

> 
> I then spent a while reading various bits of documentation in the kernel
> tree, LWN, etc., only to realize that I don't understand how this test
> framework actually works.
> 
> Are the selftests modules, userspace programs, or code that is compiled
> with the kernel, and (somehow?) run from userspace.  I thought the
> latter, partially based on your explanation at ELC, but then I don't
> understand how the "compile and run" make target works.

The tests are standalone userspace programs which are compiled separately,
but have dependencies on kernel headers. As stated above, for kvm, each
selftest is a kvm userspace (including its vcpu thread code) and guest
code combined. While there's a lot of complexity in the framework,
particularly for memory management, and a bit for vcpu setup, most of that
can be shared among tests using the kvm_util.h and test_util.h APIs,
allowing a given test to only have a relatively simple main(), vcpu thread
"vcpu_worker()" function, and "guest_code()" function. Guest mode code can
easily share code with the kvm userspace test code (assuming the guest
page tables are set up in the default way) and even data can be shared as
long as the accesses are done with the appropriate mappings (gva vs. hva).
There's a small API to help with that as well.

> 
> Can you help me paint the overall picture, or point me to the piece of
> documentation/presentation that explains the high-level picture, which I
> must have obviously missed somehow?

We definitely need the documentation and, in hindsight, it looks like it
would have been a good BoF topic last week too.

I think this framework has a lot of potential for KVM API testing and
even for quick & dirty guest code instruction sequence tests (although
instruction sequences would also fit kvm-unit-tests). I hope I can help
get you and anyone else interested started.

Thanks,
drew

> 
> 
> Thanks!
> 
>     Christoffer
> 
> > 
> > The series breaks down into parts as follows:
> > 
> >  01-02: generalize guest code to host userspace exit support by
> >         introducing "ucalls" - hypercalls to userspace
> >  03-05: prepare common code for a new architecture
> >  06-07: add virtual memory setup support for AArch64
> >     08: add vcpu setup support for AArch64
> >     09: port the dirty log test to AArch64
> >  10-11: add 64K guest page size support for the dirty log test
> >  12-13: prepare the dirty log test to also test > 40-bit guest
> >         physical address setups by allowing the test memory
> >         region to be placed at the top of physical memory
> > 
> > [1] https://www.spinics.net/lists/arm-kernel/msg676819.html
> > 
> > Thanks,
> > drew
> > 
> > 
> > Andrew Jones (13):
> >   kvm: selftests: vcpu_setup: set cr4.osfxsr
> >   kvm: selftests: introduce ucall
> >   kvm: selftests: move arch-specific files to arch-specific locations
> >   kvm: selftests: add cscope make target
> >   kvm: selftests: tidy up kvm_util
> >   kvm: selftests: add vm_phy_pages_alloc
> >   kvm: selftests: add virt mem support for aarch64
> >   kvm: selftests: add vcpu support for aarch64
> >   kvm: selftests: port dirty_log_test to aarch64
> >   kvm: selftests: introduce new VM mode for 64K pages
> >   kvm: selftests: dirty_log_test: also test 64K pages on aarch64
> >   kvm: selftests: stop lying to aarch64 tests about PA-bits
> >   kvm: selftests: support high GPAs in dirty_log_test
> > 
> >  tools/testing/selftests/kvm/.gitignore        |  11 +-
> >  tools/testing/selftests/kvm/Makefile          |  36 +-
> >  tools/testing/selftests/kvm/dirty_log_test.c  | 374 +++++++++----
> >  .../selftests/kvm/include/aarch64/processor.h |  55 ++
> >  .../testing/selftests/kvm/include/kvm_util.h  | 166 +++---
> >  .../testing/selftests/kvm/include/sparsebit.h |   6 +-
> >  .../testing/selftests/kvm/include/test_util.h |   6 +-
> >  .../kvm/include/{x86.h => x86_64/processor.h} |  24 +-
> >  .../selftests/kvm/include/{ => x86_64}/vmx.h  |   6 +-
> >  .../selftests/kvm/lib/aarch64/processor.c     | 311 +++++++++++
> >  tools/testing/selftests/kvm/lib/assert.c      |   2 +-
> >  tools/testing/selftests/kvm/lib/kvm_util.c    | 499 +++++++-----------
> >  .../selftests/kvm/lib/kvm_util_internal.h     |  33 +-
> >  tools/testing/selftests/kvm/lib/ucall.c       | 144 +++++
> >  .../kvm/lib/{x86.c => x86_64/processor.c}     | 197 ++++++-
> >  .../selftests/kvm/lib/{ => x86_64}/vmx.c      |   4 +-
> >  .../kvm/{ => x86_64}/cr4_cpuid_sync_test.c    |  14 +-
> >  .../kvm/{ => x86_64}/set_sregs_test.c         |   2 +-
> >  .../selftests/kvm/{ => x86_64}/state_test.c   |  25 +-
> >  .../kvm/{ => x86_64}/sync_regs_test.c         |   2 +-
> >  .../kvm/{ => x86_64}/vmx_tsc_adjust_test.c    |  23 +-
> >  21 files changed, 1329 insertions(+), 611 deletions(-)
> >  create mode 100644 tools/testing/selftests/kvm/include/aarch64/processor.h
> >  rename tools/testing/selftests/kvm/include/{x86.h => x86_64/processor.h} (98%)
> >  rename tools/testing/selftests/kvm/include/{ => x86_64}/vmx.h (99%)
> >  create mode 100644 tools/testing/selftests/kvm/lib/aarch64/processor.c
> >  create mode 100644 tools/testing/selftests/kvm/lib/ucall.c
> >  rename tools/testing/selftests/kvm/lib/{x86.c => x86_64/processor.c} (85%)
> >  rename tools/testing/selftests/kvm/lib/{ => x86_64}/vmx.c (99%)
> >  rename tools/testing/selftests/kvm/{ => x86_64}/cr4_cpuid_sync_test.c (91%)
> >  rename tools/testing/selftests/kvm/{ => x86_64}/set_sregs_test.c (98%)
> >  rename tools/testing/selftests/kvm/{ => x86_64}/state_test.c (90%)
> >  rename tools/testing/selftests/kvm/{ => x86_64}/sync_regs_test.c (99%)
> >  rename tools/testing/selftests/kvm/{ => x86_64}/vmx_tsc_adjust_test.c (91%)
> > 
> > -- 
> > 2.17.1
> > 


* Re: [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty
  2018-10-30 17:38   ` Andrew Jones
@ 2018-10-30 17:50     ` Paolo Bonzini
  2018-11-01  9:08     ` Christoffer Dall
  1 sibling, 0 replies; 21+ messages in thread
From: Paolo Bonzini @ 2018-10-30 17:50 UTC (permalink / raw)
  To: Andrew Jones, Christoffer Dall; +Cc: kvm, marc.zyngier, kvmarm

On 30/10/2018 18:38, Andrew Jones wrote:
> There are three parts to a kvm selftest: 1) the test code which runs in
> host userspace and _is_ the kvm userspace used with kvm, 2) the vcpu
> thread code which executes KVM_RUN for the guest code and possibly also
> some host userspace test code, and 3) the guest code, which is naturally
> run in the vcpu thread, but in guest mode.

Note that the separate vcpu thread is specific to this test.  Usually it's
not needed and parts 1 and 2 are the same.

Paolo


* Re: [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty
  2018-10-30 17:38   ` Andrew Jones
  2018-10-30 17:50     ` Paolo Bonzini
@ 2018-11-01  9:08     ` Christoffer Dall
  2018-11-01  9:31       ` Andrew Jones
  1 sibling, 1 reply; 21+ messages in thread
From: Christoffer Dall @ 2018-11-01  9:08 UTC (permalink / raw)
  To: Andrew Jones; +Cc: kvm, marc.zyngier, pbonzini, kvmarm

On Tue, Oct 30, 2018 at 06:38:20PM +0100, Andrew Jones wrote:
> 
> Hi Christoffer,
> 
> Thanks for your interest in these tests. There isn't any documentation
> that I know of, but it's a good idea to have some. I'll write something
> up soon. I'll also try to answer your questions now.
> 

That sounds great, thanks!

> On Mon, Oct 29, 2018 at 06:40:02PM +0100, Christoffer Dall wrote:
> > Hi Drew,
> > 
> > On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> > > This series provides KVM selftests that test dirty log tracking on
> > > AArch64 for both 4K and 64K guest page sizes. Additionally the
> > > framework provides an easy way to test dirty log tracking with the
> > > recently posted dynamic IPA and 52bit IPA series[1].
> > 
> > I was trying to parse the commit text of patch 2, and I realized that I
> > don't understand the 'hypercall to userspace' thing at all, which
> > probably means I have no idea how the selftests work overall.
> 
> There are three parts to a kvm selftest: 1) the test code which runs in
> host userspace and _is_ the kvm userspace used with kvm, 2) the vcpu
> thread code which executes KVM_RUN for the guest code and possibly also
> some host userspace test code, and 3) the guest code, which is naturally
> run in the vcpu thread, but in guest mode.
> 
> The need for a "ucall" arises from 2's "possibly also some host userspace
> test code". In that case the guest code needs to invoke an exit from guest
> mode, not just to kvm, but all the way to kvm userspace. For AArch64, as
> you know, this can be done with an MMIO access. Patch 2 generalizes the
> concept because on x86 the same exit can be, and is, done with a PIO
> access.
> 

So in the world of normal KVM userspace, (2) would be a thread in the
same process as (1), sharing its mm.  Is this a different setup somehow,
why?

> > 
> > I then spent a while reading various bits of documentation in the kernel
> > tree, LWN, etc., only to realize that I don't understand how this test
> > framework actually works.
> > 
> > Are the selftests modules, userspace programs, or code that is compiled
> > with the kernel, and (somehow?) run from userspace.  I thought the
> > latter, partially based on your explanation at ELC, but then I don't
> > understand how the "compile and run" make target works.
> 
> The tests are standalone userspace programs which are compiled separately,
> but have dependencies on kernel headers. As stated above, for kvm, each
> selftest is a kvm userspace (including its vcpu thread code) and guest
> code combined. While there's a lot of complexity in the framework,
> particularly for memory management, and a bit for vcpu setup, most of that
> can be shared among tests using the kvm_util.h and test_util.h APIs,
> allowing a given test to only have a relatively simple main(), vcpu thread
> "vcpu_worker()" function, and "guest_code()" function. Guest mode code can
> easily share code with the kvm userspace test code (assuming the guest
> page tables are set up in the default way) and even data can be shared as
> long as the accesses are done with the appropriate mappings (gva vs. hva).
> There's a small API to help with that as well.
> 

Sounds cool.  Beware of the attributes of the mappings such that both
the guest and host have mapped the memory cacheable etc., but I'm sure
you've thought of that already.

> > 
> > Can you help me paint the overall picture, or point me to the piece of
> > documentation/presentation that explains the high-level picture, which I
> > must have obviously missed somehow?
> 
> We definitely need the documentation and, in hindsight, it looks like it
> would have been a good BoF topic last week too.

An overview of the different testing approaches would be a good KVM
Forum talk for next year, IMHO.  When should you use kvm-unit-tests, and
when should you use kselftests, some examples, etc.  Just saying ;)

> 
> I think this framework has a lot of potential for KVM API testing and
> even for quick & dirty guest code instruction sequence tests (although
> instruction sequences would also fit kvm-unit-tests). I hope I can help
> get you and anyone else interested started.
> 

I'll have a look at this series and glance at the code some more, it
would be interesting to consider if using some of this for nested virt
tests makes sense.


Thanks,

    Christoffer


* Re: [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty
  2018-11-01  9:08     ` Christoffer Dall
@ 2018-11-01  9:31       ` Andrew Jones
  0 siblings, 0 replies; 21+ messages in thread
From: Andrew Jones @ 2018-11-01  9:31 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: kvm, marc.zyngier, pbonzini, kvmarm

On Thu, Nov 01, 2018 at 10:08:25AM +0100, Christoffer Dall wrote:
> On Tue, Oct 30, 2018 at 06:38:20PM +0100, Andrew Jones wrote:
> > 
> > Hi Christoffer,
> > 
> > Thanks for your interest in these tests. There isn't any documentation
> > that I know of, but it's a good idea to have some. I'll write something
> > up soon. I'll also try to answer your questions now.
> > 
> 
> That sounds great, thanks!
> 
> > On Mon, Oct 29, 2018 at 06:40:02PM +0100, Christoffer Dall wrote:
> > > Hi Drew,
> > > 
> > > On Tue, Sep 18, 2018 at 07:54:23PM +0200, Andrew Jones wrote:
> > > > This series provides KVM selftests that test dirty log tracking on
> > > > AArch64 for both 4K and 64K guest page sizes. Additionally the
> > > > framework provides an easy way to test dirty log tracking with the
> > > > recently posted dynamic IPA and 52bit IPA series[1].
> > > 
> > > I was trying to parse the commit text of patch 2, and I realized that I
> > > don't understand the 'hypercall to userspace' thing at all, which
> > > probably means I have no idea how the selftests work overall.
> > 
> > There are three parts to a kvm selftest: 1) the test code which runs in
> > host userspace and _is_ the kvm userspace used with kvm, 2) the vcpu
> > thread code which executes KVM_RUN for the guest code and possibly also
> > some host userspace test code, and 3) the guest code, which is naturally
> > run in the vcpu thread, but in guest mode.
> > 
> > The need for a "ucall" arises from 2's "possibly also some host userspace
> > test code". In that case the guest code needs to invoke an exit from guest
> > mode, not just to kvm, but all the way to kvm userspace. For AArch64, as
> > you know, this can be done with an MMIO access. Patch 2 generalizes the
> > concept because on x86 the same exit can be, and is, done with a PIO
> > access.
> > 
> 
> So in the world of normal KVM userspace, (2) would be a thread in the
> same process as (1), sharing its mm.  Is this a different setup somehow,
> why?

It's the same setup. Actually the only difference is what Paolo pointed
out in his reply. There's no need to spawn an independent vcpu thread
when only one vcpu is needed and no additional main thread is required,
i.e. the main test / kvm userspace code can call KVM_RUN itself.

> 
> > > 
> > > I then spent a while reading various bits of documentation in the kernel
> > > tree, LWN, etc., only to realize that I don't understand how this test
> > > framework actually works.
> > > 
> > > Are the selftests modules, userspace programs, or code that is compiled
> > > with the kernel, and (somehow?) run from userspace.  I thought the
> > > latter, partially based on your explanation at ELC, but then I don't
> > > understand how the "compile and run" make target works.
> > 
> > The tests are standalone userspace programs which are compiled separately,
> > but have dependencies on kernel headers. As stated above, for kvm, each
> > selftest is a kvm userspace (including its vcpu thread code) and guest
> > code combined. While there's a lot of complexity in the framework,
> > particularly for memory management, and a bit for vcpu setup, most of that
> > can be shared among tests using the kvm_util.h and test_util.h APIs,
> > allowing a given test to only have a relatively simple main(), vcpu thread
> > "vcpu_worker()" function, and "guest_code()" function. Guest mode code can
> > easily share code with the kvm userspace test code (assuming the guest
> > page tables are set up in the default way) and even data can be shared as
> > long as the accesses are done with the appropriate mappings (gva vs. hva).
> > There's a small API to help with that as well.
> > 
> 
> Sounds cool.  Beware of the attributes of the mappings such that both
> the guest and host have mapped the memory cacheable etc., but I'm sure
> you've thought of that already.

Right. If you look at virt_pg_map(), then you'll see that I have the
default set to NORMAL memory. It can be overridden by calling
_virt_pg_map() directly - which might be nice to do to specifically
test stage1/stage2 mapping combinations.

> 
> > > 
> > > Can you help me paint the overall picture, or point me to the piece of
> > > documentation/presentation that explains the high-level picture, which I
> > > must have obviously missed somehow?
> > 
> > We definitely need the documentation and, in hindsight, it looks like it
> > would have been a good BoF topic last week too.
> 
> An overview of the different testing approaches would be a good KVM
> Forum talk for next year, IMHO.  When should you use kvm-unit-tests, and
> when should you use kselftests, some examples, etc.  Just saying ;)

:-)

> 
> > 
> > I think this framework has a lot of potential for KVM API testing and
> > even for quick & dirty guest code instruction sequence tests (although
> > instruction sequences would also fit kvm-unit-tests). I hope I can help
> > get you and anyone else interested started.
> > 
> 
> I'll have a look at this series and glance at the code some more, it
> would be interesting to consider if using some of this for nested virt
> tests makes sense.

Yes. x86 has many nested tests. I think the framework was originally
created with that in mind.

Thanks,
drew


end of thread, other threads:[~2018-11-01  9:31 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
2018-09-18 17:54 [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
2018-09-18 17:54 ` [PATCH 01/13] kvm: selftests: vcpu_setup: set cr4.osfxsr Andrew Jones
2018-09-18 17:54 ` [PATCH 02/13] kvm: selftests: introduce ucall Andrew Jones
2018-09-18 17:54 ` [PATCH 03/13] kvm: selftests: move arch-specific files to arch-specific locations Andrew Jones
2018-09-18 17:54 ` [PATCH 04/13] kvm: selftests: add cscope make target Andrew Jones
2018-09-18 17:54 ` [PATCH 05/13] kvm: selftests: tidy up kvm_util Andrew Jones
2018-09-18 17:54 ` [PATCH 06/13] kvm: selftests: add vm_phy_pages_alloc Andrew Jones
2018-09-18 17:54 ` [PATCH 07/13] kvm: selftests: add virt mem support for aarch64 Andrew Jones
2018-09-18 17:54 ` [PATCH 08/13] kvm: selftests: add vcpu " Andrew Jones
2018-09-18 17:54 ` [PATCH 09/13] kvm: selftests: port dirty_log_test to aarch64 Andrew Jones
2018-09-18 17:54 ` [PATCH 10/13] kvm: selftests: introduce new VM mode for 64K pages Andrew Jones
2018-09-18 17:54 ` [PATCH 11/13] kvm: selftests: dirty_log_test: also test 64K pages on aarch64 Andrew Jones
2018-09-18 17:54 ` [PATCH 12/13] kvm: selftests: stop lying to aarch64 tests about PA-bits Andrew Jones
2018-09-18 17:54 ` [PATCH 13/13] kvm: selftests: support high GPAs in dirty_log_test Andrew Jones
2018-09-19 12:37 ` [PATCH 00/13] kvm: selftests: add aarch64 framework and dirty Andrew Jones
2018-09-19 13:13   ` Suzuki K Poulose
2018-10-29 17:40 ` Christoffer Dall
2018-10-30 17:38   ` Andrew Jones
2018-10-30 17:50     ` Paolo Bonzini
2018-11-01  9:08     ` Christoffer Dall
2018-11-01  9:31       ` Andrew Jones
