* [PATCH 00/10] KVM: selftests: Add KVM_SET_MEMORY_REGION tests
From: Sean Christopherson @ 2020-04-10 23:16 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

This is v2-ish of my series to add a "delete" testcase[1], and v5.1 of
Wainer's series to add a "max" testcase[2].

I've only tested on x86_64.  I fudged compile testing on !x86_64 by
inverting the ifdefs, e.g. to squash unused variable warnings, but by no
means has the code actually been tested on other architectures.

I kept Andrew's Reviewed-by for the "max" test.  Other than the 1MB->2MB
change (see below), it was basically a straight copy-paste of code.

v1->v2 of delete:
  - Drop patch to expose primary memslot. [Peter]
  - Add explicit synchronization to MOVE and DELETE tests. [Peter]
  - Use list.h instead of open coding linked lists. [Peter]
  - Clean up the code and split the testcases into separate functions.
  - Expand GUEST_ASSERT() to allow passing a value back to the host for
    printing.
  - Move to common KVM, with ifdefs to hide the x86_64-only stuff (which
    is a lot at this point).
  - Do KVM_SET_NR_MMU_PAGES in the "zero" testcase to get less obscure
    behavior for KVM_RUN. [Christian]

v5.1 of max:
  - Fix a whitespace issue in vm_get_fd(). [checkpatch]
  - Move the code to set_memory_region_test.  The only _intended_
    functional change is to create 2MB regions instead of 1MB regions.
    The only motivation for doing so was to reuse an existing define in
    set_memory_region_test.

[1] https://lkml.kernel.org/r/20200320205546.2396-1-sean.j.christopherson@intel.com
[2] https://lkml.kernel.org/r/20200409220905.26573-1-wainersm@redhat.com

Sean Christopherson (8):
  KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
  KVM: selftests: Use kernel's list instead of homebrewed replacement
  KVM: selftests: Add util to delete memory region
  KVM: selftests: Add GUEST_ASSERT variants to pass values to host
  KVM: selftests: Add explicit synchronization to move mem region test
  KVM: selftests: Add "delete" testcase to set_memory_region_test
  KVM: selftests: Add "zero" testcase to set_memory_region_test
  KVM: selftests: Make set_memory_region_test common to all
    architectures

Wainer dos Santos Moschetta (2):
  selftests: kvm: Add vm_get_fd() in kvm_util
  selftests: kvm: Add testcase for creating max number of memslots

 tools/testing/selftests/kvm/.gitignore        |   2 +-
 tools/testing/selftests/kvm/Makefile          |   4 +-
 .../testing/selftests/kvm/include/kvm_util.h  |  28 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    | 154 +++----
 .../selftests/kvm/lib/kvm_util_internal.h     |   8 +-
 .../selftests/kvm/lib/s390x/processor.c       |   5 +-
 .../selftests/kvm/set_memory_region_test.c    | 403 ++++++++++++++++++
 .../kvm/x86_64/set_memory_region_test.c       | 141 ------
 8 files changed, 520 insertions(+), 225 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/set_memory_region_test.c
 delete mode 100644 tools/testing/selftests/kvm/x86_64/set_memory_region_test.c

-- 
2.26.0


* [PATCH 01/10] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
From: Sean Christopherson @ 2020-04-10 23:16 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

The sole caller of vm_vcpu_rm() already has the vcpu pointer; take it
directly instead of doing an extra lookup.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 8a3523d4434f..9a783c20dd26 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -393,7 +393,7 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
  *
  * Input Args:
  *   vm - Virtual Machine
- *   vcpuid - VCPU ID
+ *   vcpu - VCPU to remove
  *
  * Output Args: None
  *
@@ -401,9 +401,8 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
  *
  * Within the VM specified by vm, removes the VCPU given by vcpuid.
  */
-static void vm_vcpu_rm(struct kvm_vm *vm, uint32_t vcpuid)
+static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
 {
-	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
 	int ret;
 
 	ret = munmap(vcpu->state, sizeof(*vcpu->state));
@@ -427,7 +426,7 @@ void kvm_vm_release(struct kvm_vm *vmp)
 	int ret;
 
 	while (vmp->vcpu_head)
-		vm_vcpu_rm(vmp, vmp->vcpu_head->id);
+		vm_vcpu_rm(vmp, vmp->vcpu_head);
 
 	ret = close(vmp->fd);
 	TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"
-- 
2.26.0


* [PATCH 02/10] KVM: selftests: Use kernel's list instead of homebrewed replacement
From: Sean Christopherson @ 2020-04-10 23:16 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

Replace the KVM selftests' homebrewed linked lists for vCPUs and memory
regions with the kernel's 'struct list_head'.
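
For anyone unfamiliar with the kernel's intrusive lists, a minimal
sketch of the pattern being adopted (illustrative only, struct contents
are trimmed):

	struct vcpu {
		struct list_head list;	/* links the vcpu into vm->vcpus */
		uint32_t id;
	};

	struct kvm_vm {
		struct list_head vcpus;	/* list head, embedded in the owner */
	};

	INIT_LIST_HEAD(&vm->vcpus);		/* start with an empty list */
	list_add(&vcpu->list, &vm->vcpus);	/* insert at the head */

	/* the _safe variant allows deletion while iterating */
	list_for_each_entry_safe(vcpu, tmp, &vm->vcpus, list) {
		list_del(&vcpu->list);
		free(vcpu);
	}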

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 94 ++++++++-----------
 .../selftests/kvm/lib/kvm_util_internal.h     |  8 +-
 .../selftests/kvm/lib/s390x/processor.c       |  5 +-
 4 files changed, 48 insertions(+), 60 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index a99b875f50d2..2f329e785c58 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -10,6 +10,7 @@
 #include "test_util.h"
 
 #include "asm/kvm.h"
+#include "linux/list.h"
 #include "linux/kvm.h"
 #include <sys/ioctl.h>
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9a783c20dd26..105ee9bc09f0 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -161,6 +161,9 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 	vm = calloc(1, sizeof(*vm));
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
+	INIT_LIST_HEAD(&vm->vcpus);
+	INIT_LIST_HEAD(&vm->userspace_mem_regions);
+
 	vm->mode = mode;
 	vm->type = 0;
 
@@ -258,8 +261,7 @@ void kvm_vm_restart(struct kvm_vm *vmp, int perm)
 	if (vmp->has_irqchip)
 		vm_create_irqchip(vmp);
 
-	for (region = vmp->userspace_mem_region_head; region;
-		region = region->next) {
+	list_for_each_entry(region, &vmp->userspace_mem_regions, list) {
 		int ret = ioctl(vmp->fd, KVM_SET_USER_MEMORY_REGION, &region->region);
 		TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed,\n"
 			    "  rc: %i errno: %i\n"
@@ -319,8 +321,7 @@ userspace_mem_region_find(struct kvm_vm *vm, uint64_t start, uint64_t end)
 {
 	struct userspace_mem_region *region;
 
-	for (region = vm->userspace_mem_region_head; region;
-		region = region->next) {
+	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
 		uint64_t existing_start = region->region.guest_phys_addr;
 		uint64_t existing_end = region->region.guest_phys_addr
 			+ region->region.memory_size - 1;
@@ -378,11 +379,11 @@ kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start,
  */
 struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
 {
-	struct vcpu *vcpup;
+	struct vcpu *vcpu;
 
-	for (vcpup = vm->vcpu_head; vcpup; vcpup = vcpup->next) {
-		if (vcpup->id == vcpuid)
-			return vcpup;
+	list_for_each_entry(vcpu, &vm->vcpus, list) {
+		if (vcpu->id == vcpuid)
+			return vcpu;
 	}
 
 	return NULL;
@@ -392,16 +393,15 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
  * VM VCPU Remove
  *
  * Input Args:
- *   vm - Virtual Machine
  *   vcpu - VCPU to remove
  *
  * Output Args: None
  *
  * Return: None, TEST_ASSERT failures for all error conditions
  *
- * Within the VM specified by vm, removes the VCPU given by vcpuid.
+ * Removes a vCPU from a VM and frees its resources.
  */
-static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
+static void vm_vcpu_rm(struct vcpu *vcpu)
 {
 	int ret;
 
@@ -412,21 +412,17 @@ static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
 	TEST_ASSERT(ret == 0, "Close of VCPU fd failed, rc: %i "
 		"errno: %i", ret, errno);
 
-	if (vcpu->next)
-		vcpu->next->prev = vcpu->prev;
-	if (vcpu->prev)
-		vcpu->prev->next = vcpu->next;
-	else
-		vm->vcpu_head = vcpu->next;
+	list_del(&vcpu->list);
 	free(vcpu);
 }
 
 void kvm_vm_release(struct kvm_vm *vmp)
 {
+	struct vcpu *vcpu, *tmp;
 	int ret;
 
-	while (vmp->vcpu_head)
-		vm_vcpu_rm(vmp, vmp->vcpu_head);
+	list_for_each_entry_safe(vcpu, tmp, &vmp->vcpus, list)
+		vm_vcpu_rm(vcpu);
 
 	ret = close(vmp->fd);
 	TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"
@@ -442,15 +438,15 @@ void kvm_vm_release(struct kvm_vm *vmp)
  */
 void kvm_vm_free(struct kvm_vm *vmp)
 {
+	struct userspace_mem_region *region, *tmp;
 	int ret;
 
 	if (vmp == NULL)
 		return;
 
 	/* Free userspace_mem_regions. */
-	while (vmp->userspace_mem_region_head) {
-		struct userspace_mem_region *region
-			= vmp->userspace_mem_region_head;
+	list_for_each_entry_safe(region, tmp, &vmp->userspace_mem_regions, list) {
+		list_del(&region->list);
 
 		region->region.memory_size = 0;
 		ret = ioctl(vmp->fd, KVM_SET_USER_MEMORY_REGION,
@@ -458,7 +454,6 @@ void kvm_vm_free(struct kvm_vm *vmp)
 		TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed, "
 			"rc: %i errno: %i", ret, errno);
 
-		vmp->userspace_mem_region_head = region->next;
 		sparsebit_free(&region->unused_phy_pages);
 		ret = munmap(region->mmap_start, region->mmap_size);
 		TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i",
@@ -611,12 +606,10 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 			(uint64_t) region->region.memory_size);
 
 	/* Confirm no region with the requested slot already exists. */
-	for (region = vm->userspace_mem_region_head; region;
-		region = region->next) {
-		if (region->region.slot == slot)
-			break;
-	}
-	if (region != NULL)
+	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
+		if (region->region.slot != slot)
+			continue;
+
 		TEST_FAIL("A mem region with the requested slot "
 			"already exists.\n"
 			"  requested slot: %u paddr: 0x%lx npages: 0x%lx\n"
@@ -625,6 +618,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 			region->region.slot,
 			(uint64_t) region->region.guest_phys_addr,
 			(uint64_t) region->region.memory_size);
+	}
 
 	/* Allocate and initialize new mem region structure. */
 	region = calloc(1, sizeof(*region));
@@ -685,10 +679,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 		guest_paddr, (uint64_t) region->region.memory_size);
 
 	/* Add to linked-list of memory regions. */
-	if (vm->userspace_mem_region_head)
-		vm->userspace_mem_region_head->prev = region;
-	region->next = vm->userspace_mem_region_head;
-	vm->userspace_mem_region_head = region;
+	list_add(&region->list, &vm->userspace_mem_regions);
 }
 
 /*
@@ -711,20 +702,17 @@ memslot2region(struct kvm_vm *vm, uint32_t memslot)
 {
 	struct userspace_mem_region *region;
 
-	for (region = vm->userspace_mem_region_head; region;
-		region = region->next) {
+	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
 		if (region->region.slot == memslot)
-			break;
-	}
-	if (region == NULL) {
-		fprintf(stderr, "No mem region with the requested slot found,\n"
-			"  requested slot: %u\n", memslot);
-		fputs("---- vm dump ----\n", stderr);
-		vm_dump(stderr, vm, 2);
-		TEST_FAIL("Mem region not found");
+			return region;
 	}
 
-	return region;
+	fprintf(stderr, "No mem region with the requested slot found,\n"
+		"  requested slot: %u\n", memslot);
+	fputs("---- vm dump ----\n", stderr);
+	vm_dump(stderr, vm, 2);
+	TEST_FAIL("Mem region not found");
+	return NULL;
 }
 
 /*
@@ -862,10 +850,7 @@ void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid)
 		"vcpu id: %u errno: %i", vcpuid, errno);
 
 	/* Add to linked-list of VCPUs. */
-	if (vm->vcpu_head)
-		vm->vcpu_head->prev = vcpu;
-	vcpu->next = vm->vcpu_head;
-	vm->vcpu_head = vcpu;
+	list_add(&vcpu->list, &vm->vcpus);
 }
 
 /*
@@ -1058,8 +1043,8 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
 {
 	struct userspace_mem_region *region;
-	for (region = vm->userspace_mem_region_head; region;
-	     region = region->next) {
+
+	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
 		if ((gpa >= region->region.guest_phys_addr)
 			&& (gpa <= (region->region.guest_phys_addr
 				+ region->region.memory_size - 1)))
@@ -1091,8 +1076,8 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
 vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
 {
 	struct userspace_mem_region *region;
-	for (region = vm->userspace_mem_region_head; region;
-	     region = region->next) {
+
+	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
 		if ((hva >= region->host_mem)
 			&& (hva <= (region->host_mem
 				+ region->region.memory_size - 1)))
@@ -1519,8 +1504,7 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 	fprintf(stream, "%*sfd: %i\n", indent, "", vm->fd);
 	fprintf(stream, "%*spage_size: 0x%x\n", indent, "", vm->page_size);
 	fprintf(stream, "%*sMem Regions:\n", indent, "");
-	for (region = vm->userspace_mem_region_head; region;
-		region = region->next) {
+	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
 		fprintf(stream, "%*sguest_phys: 0x%lx size: 0x%lx "
 			"host_virt: %p\n", indent + 2, "",
 			(uint64_t) region->region.guest_phys_addr,
@@ -1539,7 +1523,7 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 		virt_dump(stream, vm, indent + 4);
 	}
 	fprintf(stream, "%*sVCPUs:\n", indent, "");
-	for (vcpu = vm->vcpu_head; vcpu; vcpu = vcpu->next)
+	list_for_each_entry(vcpu, &vm->vcpus, list)
 		vcpu_dump(stream, vm, vcpu->id, indent + 2);
 }
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
index ca56a0133127..2ef446520748 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h
+++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
@@ -13,7 +13,6 @@
 #define KVM_DEV_PATH		"/dev/kvm"
 
 struct userspace_mem_region {
-	struct userspace_mem_region *next, *prev;
 	struct kvm_userspace_memory_region region;
 	struct sparsebit *unused_phy_pages;
 	int fd;
@@ -21,10 +20,11 @@ struct userspace_mem_region {
 	void *host_mem;
 	void *mmap_start;
 	size_t mmap_size;
+	struct list_head list;
 };
 
 struct vcpu {
-	struct vcpu *next, *prev;
+	struct list_head list;
 	uint32_t id;
 	int fd;
 	struct kvm_run *state;
@@ -41,8 +41,8 @@ struct kvm_vm {
 	unsigned int pa_bits;
 	unsigned int va_bits;
 	uint64_t max_gfn;
-	struct vcpu *vcpu_head;
-	struct userspace_mem_region *userspace_mem_region_head;
+	struct list_head vcpus;
+	struct list_head userspace_mem_regions;
 	struct sparsebit *vpages_valid;
 	struct sparsebit *vpages_mapped;
 	bool has_irqchip;
diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c
index 8d94961bd046..a88c5d665725 100644
--- a/tools/testing/selftests/kvm/lib/s390x/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390x/processor.c
@@ -233,7 +233,10 @@ void vcpu_args_set(struct kvm_vm *vm, uint32_t vcpuid, unsigned int num, ...)
 
 void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid, uint8_t indent)
 {
-	struct vcpu *vcpu = vm->vcpu_head;
+	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
+
+	if (!vcpu)
+		return;
 
 	fprintf(stream, "%*spstate: psw: 0x%.16llx:0x%.16llx\n",
 		indent, "", vcpu->state->psw_mask, vcpu->state->psw_addr);
-- 
2.26.0


* [PATCH 03/10] KVM: selftests: Add util to delete memory region
From: Sean Christopherson @ 2020-04-10 23:17 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

Add a utility to delete a memory region; it will be used by x86's
set_memory_region_test.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 56 +++++++++++++------
 2 files changed, 40 insertions(+), 17 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 2f329e785c58..d4c3e4d9cd92 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -114,6 +114,7 @@ int _vcpu_ioctl(struct kvm_vm *vm, uint32_t vcpuid, unsigned long ioctl,
 void vm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg);
 void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
+void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
 			  uint32_t data_memslot, uint32_t pgd_memslot);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 105ee9bc09f0..ab5b7ea60f4b 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -433,34 +433,38 @@ void kvm_vm_release(struct kvm_vm *vmp)
 		"  vmp->kvm_fd: %i rc: %i errno: %i", vmp->kvm_fd, ret, errno);
 }
 
+static void __vm_mem_region_delete(struct kvm_vm *vm,
+				   struct userspace_mem_region *region)
+{
+	int ret;
+
+	list_del(&region->list);
+
+	region->region.memory_size = 0;
+	ret = ioctl(vm->fd, KVM_SET_USER_MEMORY_REGION, &region->region);
+	TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed, "
+		    "rc: %i errno: %i", ret, errno);
+
+	sparsebit_free(&region->unused_phy_pages);
+	ret = munmap(region->mmap_start, region->mmap_size);
+	TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i", ret, errno);
+
+	free(region);
+}
+
 /*
  * Destroys and frees the VM pointed to by vmp.
  */
 void kvm_vm_free(struct kvm_vm *vmp)
 {
 	struct userspace_mem_region *region, *tmp;
-	int ret;
 
 	if (vmp == NULL)
 		return;
 
 	/* Free userspace_mem_regions. */
-	list_for_each_entry_safe(region, tmp, &vmp->userspace_mem_regions, list) {
-		list_del(&region->list);
-
-		region->region.memory_size = 0;
-		ret = ioctl(vmp->fd, KVM_SET_USER_MEMORY_REGION,
-			&region->region);
-		TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed, "
-			"rc: %i errno: %i", ret, errno);
-
-		sparsebit_free(&region->unused_phy_pages);
-		ret = munmap(region->mmap_start, region->mmap_size);
-		TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i",
-			    ret, errno);
-
-		free(region);
-	}
+	list_for_each_entry_safe(region, tmp, &vmp->userspace_mem_regions, list)
+		__vm_mem_region_delete(vmp, region);
 
 	/* Free sparsebit arrays. */
 	sparsebit_free(&vmp->vpages_valid);
@@ -775,6 +779,24 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
 		    ret, errno, slot, new_gpa);
 }
 
+/*
+ * VM Memory Region Delete
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   slot - Slot of the memory region to delete
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Delete a memory region.
+ */
+void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot)
+{
+	__vm_mem_region_delete(vm, memslot2region(vm, slot));
+}
+
 /*
  * VCPU mmap Size
  *
-- 
2.26.0


* [PATCH 04/10] KVM: selftests: Add GUEST_ASSERT variants to pass values to host
From: Sean Christopherson @ 2020-04-10 23:17 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

Add variants of GUEST_ASSERT to pass values back to the host, e.g. to
help debug/understand a failure when the cause of the assert isn't
necessarily binary.

It'd probably be possible to auto-calculate the number of arguments and
just have a single GUEST_ASSERT, but there are a limited number of
variants and silently eating arguments could lead to subtle code bugs.
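
As a usage sketch (condensed from the later patches in this series),
the guest reports the offending value through the ucall and the host
retrieves it from uc.args[] on UCALL_ABORT:

	/* guest: pass the unexpected value along with the condition */
	GUEST_ASSERT_1(val == 1, val);

	/*
	 * host: uc.args[0] is the condition string, uc.args[1] the line
	 * number, uc.args[2] and up are the extra values.
	 */
	if (get_ucall(vm, VCPU_ID, &uc) == UCALL_ABORT)
		TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc.args[0],
			  __FILE__, uc.args[1], uc.args[2]);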

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  | 25 +++++++++++++++----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index d4c3e4d9cd92..e38d91bd8ec1 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -313,11 +313,26 @@ uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc);
 
 #define GUEST_SYNC(stage)	ucall(UCALL_SYNC, 2, "hello", stage)
 #define GUEST_DONE()		ucall(UCALL_DONE, 0)
-#define GUEST_ASSERT(_condition) do {			\
-	if (!(_condition))				\
-		ucall(UCALL_ABORT, 2,			\
-			"Failed guest assert: "		\
-			#_condition, __LINE__);		\
+#define __GUEST_ASSERT(_condition, _nargs, _args...) do {	\
+	if (!(_condition))					\
+		ucall(UCALL_ABORT, 2 + _nargs,			\
+			"Failed guest assert: "			\
+			#_condition, __LINE__, _args);		\
 } while (0)
 
+#define GUEST_ASSERT(_condition) \
+	__GUEST_ASSERT((_condition), 0, 0)
+
+#define GUEST_ASSERT_1(_condition, arg1) \
+	__GUEST_ASSERT((_condition), 1, (arg1))
+
+#define GUEST_ASSERT_2(_condition, arg1, arg2) \
+	__GUEST_ASSERT((_condition), 2, (arg1), (arg2))
+
+#define GUEST_ASSERT_3(_condition, arg1, arg2, arg3) \
+	__GUEST_ASSERT((_condition), 3, (arg1), (arg2), (arg3))
+
+#define GUEST_ASSERT_4(_condition, arg1, arg2, arg3, arg4) \
+	__GUEST_ASSERT((_condition), 4, (arg1), (arg2), (arg3), (arg4))
+
 #endif /* SELFTEST_KVM_UTIL_H */
-- 
2.26.0


* [PATCH 05/10] KVM: selftests: Add explicit synchronization to move mem region test
From: Sean Christopherson @ 2020-04-10 23:17 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

Use sem_post() and sem_timedwait() to synchronize test stages between
the vCPU thread and the main thread instead of using usleep() to wait
for the vCPU thread and hoping for the best.

Opportunistically refactor the code to make it suck less in general,
and to prepare for adding more testcases.
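
The handshake boils down to the following (condensed from the diff
below):

	static sem_t vcpu_ready;

	/* vCPU thread: signal the main thread at each guest sync point */
	sem_post(&vcpu_ready);

	/* main thread: bounded wait instead of a blind usleep() */
	struct timespec ts;

	clock_gettime(CLOCK_REALTIME, &ts);
	ts.tv_sec += 2;
	TEST_ASSERT(!sem_timedwait(&vcpu_ready, &ts),
		    "sem_timedwait() failed: %d\n", errno);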

Suggested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 .../kvm/x86_64/set_memory_region_test.c       | 117 +++++++++++++++---
 1 file changed, 99 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
index c6691cff4e19..629dd8579b73 100644
--- a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
@@ -3,6 +3,7 @@
 #include <fcntl.h>
 #include <pthread.h>
 #include <sched.h>
+#include <semaphore.h>
 #include <signal.h>
 #include <stdio.h>
 #include <stdlib.h>
@@ -26,18 +27,20 @@
 #define MEM_REGION_SIZE		0x200000
 #define MEM_REGION_SLOT		10
 
-static void guest_code(void)
+static const uint64_t MMIO_VAL = 0xbeefull;
+
+static sem_t vcpu_ready;
+
+static inline uint64_t guest_spin_on_val(uint64_t spin_val)
 {
 	uint64_t val;
 
 	do {
 		val = READ_ONCE(*((uint64_t *)MEM_REGION_GPA));
-	} while (!val);
+	} while (val == spin_val);
 
-	if (val != 1)
-		ucall(UCALL_ABORT, 1, val);
-
-	GUEST_DONE();
+	GUEST_SYNC(0);
+	return val;
 }
 
 static void *vcpu_worker(void *data)
@@ -49,25 +52,60 @@ static void *vcpu_worker(void *data)
 
 	/*
 	 * Loop until the guest is done.  Re-enter the guest on all MMIO exits,
-	 * which will occur if the guest attempts to access a memslot while it
-	 * is being moved.
+	 * which will occur if the guest attempts to access a memslot after it
+	 * has been deleted or while it is being moved.
 	 */
 	run = vcpu_state(vm, VCPU_ID);
-	do {
+
+	while (1) {
 		vcpu_run(vm, VCPU_ID);
-	} while (run->exit_reason == KVM_EXIT_MMIO);
 
-	TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
-		    "Unexpected exit reason = %d", run->exit_reason);
+		if (run->exit_reason == KVM_EXIT_IO) {
+			cmd = get_ucall(vm, VCPU_ID, &uc);
+			if (cmd != UCALL_SYNC)
+				break;
+
+			sem_post(&vcpu_ready);
+			continue;
+		}
+
+		if (run->exit_reason != KVM_EXIT_MMIO)
+			break;
+
+		TEST_ASSERT(!run->mmio.is_write, "Unexpected exit mmio write");
+		TEST_ASSERT(run->mmio.len == 8,
+			    "Unexpected exit mmio size = %u", run->mmio.len);
+
+		TEST_ASSERT(run->mmio.phys_addr == MEM_REGION_GPA,
+			    "Unexpected exit mmio address = 0x%llx",
+			    run->mmio.phys_addr);
+		memcpy(run->mmio.data, &MMIO_VAL, 8);
+	}
+
+	if (run->exit_reason == KVM_EXIT_IO && cmd == UCALL_ABORT)
+		TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc.args[0],
+			  __FILE__, uc.args[1], uc.args[2]);
 
-	cmd = get_ucall(vm, VCPU_ID, &uc);
-	TEST_ASSERT(cmd == UCALL_DONE, "Unexpected val in guest = %lu", uc.args[0]);
 	return NULL;
 }
 
-static void test_move_memory_region(void)
+static void wait_for_vcpu(void)
+{
+	struct timespec ts;
+
+	TEST_ASSERT(!clock_gettime(CLOCK_REALTIME, &ts),
+		    "clock_gettime() failed: %d\n", errno);
+
+	ts.tv_sec += 2;
+	TEST_ASSERT(!sem_timedwait(&vcpu_ready, &ts),
+		    "sem_timedwait() failed: %d\n", errno);
+
+	/* Wait for the vCPU thread to reenter the guest. */
+	usleep(100000);
+}
+
+static struct kvm_vm *spawn_vm(pthread_t *vcpu_thread, void *guest_code)
 {
-	pthread_t vcpu_thread;
 	struct kvm_vm *vm;
 	uint64_t *hva;
 	uint64_t gpa;
@@ -93,10 +131,45 @@ static void test_move_memory_region(void)
 	hva = addr_gpa2hva(vm, MEM_REGION_GPA);
 	memset(hva, 0, 2 * 4096);
 
-	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
+	pthread_create(vcpu_thread, NULL, vcpu_worker, vm);
 
 	/* Ensure the guest thread is spun up. */
-	usleep(100000);
+	wait_for_vcpu();
+
+	return vm;
+}
+
+
+static void guest_code_move_memory_region(void)
+{
+	uint64_t val;
+
+	GUEST_SYNC(0);
+
+	/*
+	 * Spin until the memory region is moved to a misaligned address.  This
+	 * may or may not trigger MMIO, as the window where the memslot is
+	 * invalid is quite small.
+	 */
+	val = guest_spin_on_val(0);
+	GUEST_ASSERT_1(val == 1 || val == MMIO_VAL, val);
+
+	/* Spin until the memory region is realigned. */
+	val = guest_spin_on_val(MMIO_VAL);
+	GUEST_ASSERT_1(val == 1, val);
+
+	GUEST_DONE();
+}
+
+static void test_move_memory_region(void)
+{
+	pthread_t vcpu_thread;
+	struct kvm_vm *vm;
+	uint64_t *hva;
+
+	vm = spawn_vm(&vcpu_thread, guest_code_move_memory_region);
+
+	hva = addr_gpa2hva(vm, MEM_REGION_GPA);
 
 	/*
 	 * Shift the region's base GPA.  The guest should not see "2" as the
@@ -106,6 +179,11 @@ static void test_move_memory_region(void)
 	vm_mem_region_move(vm, MEM_REGION_SLOT, MEM_REGION_GPA - 4096);
 	WRITE_ONCE(*hva, 2);
 
+	/*
+	 * The guest _might_ see an invalid memslot and trigger MMIO, but it's
+	 * a tiny window.  Spin and defer the sync until the memslot is
+	 * restored and guest behavior is once again deterministic.
+	 */
 	usleep(100000);
 
 	/*
@@ -116,6 +194,9 @@ static void test_move_memory_region(void)
 
 	/* Restore the original base, the guest should see "1". */
 	vm_mem_region_move(vm, MEM_REGION_SLOT, MEM_REGION_GPA);
+	wait_for_vcpu();
+	/* Deferred sync from when the memslot was misaligned (above). */
+	wait_for_vcpu();
 
 	pthread_join(vcpu_thread, NULL);
 
-- 
2.26.0


* [PATCH 06/10] KVM: selftests: Add "delete" testcase to set_memory_region_test
From: Sean Christopherson @ 2020-04-10 23:17 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

Add a testcase for deleting memslots while the guest is running.
Like the "move" testcase, this is x86_64-only as it relies on MMIO
happening when a non-existent memslot is encountered.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 .../kvm/x86_64/set_memory_region_test.c       | 91 +++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
index 629dd8579b73..b556024af683 100644
--- a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
@@ -29,6 +29,9 @@
 
 static const uint64_t MMIO_VAL = 0xbeefull;
 
+extern const uint64_t final_rip_start;
+extern const uint64_t final_rip_end;
+
 static sem_t vcpu_ready;
 
 static inline uint64_t guest_spin_on_val(uint64_t spin_val)
@@ -203,6 +206,89 @@ static void test_move_memory_region(void)
 	kvm_vm_free(vm);
 }
 
+static void guest_code_delete_memory_region(void)
+{
+	uint64_t val;
+
+	GUEST_SYNC(0);
+
+	/* Spin until the memory region is deleted. */
+	val = guest_spin_on_val(0);
+	GUEST_ASSERT_1(val == MMIO_VAL, val);
+
+	/* Spin until the memory region is recreated. */
+	val = guest_spin_on_val(MMIO_VAL);
+	GUEST_ASSERT_1(val == 0, val);
+
+	/* Spin until the memory region is deleted. */
+	val = guest_spin_on_val(0);
+	GUEST_ASSERT_1(val == MMIO_VAL, val);
+
+	asm("1:\n\t"
+	    ".pushsection .rodata\n\t"
+	    ".global final_rip_start\n\t"
+	    "final_rip_start: .quad 1b\n\t"
+	    ".popsection");
+
+	/* Spin indefinitely (until the code memslot is deleted). */
+	guest_spin_on_val(MMIO_VAL);
+
+	asm("1:\n\t"
+	    ".pushsection .rodata\n\t"
+	    ".global final_rip_end\n\t"
+	    "final_rip_end: .quad 1b\n\t"
+	    ".popsection");
+
+	GUEST_ASSERT_1(0, 0);
+}
+
+static void test_delete_memory_region(void)
+{
+	pthread_t vcpu_thread;
+	struct kvm_regs regs;
+	struct kvm_run *run;
+	struct kvm_vm *vm;
+
+	vm = spawn_vm(&vcpu_thread, guest_code_delete_memory_region);
+
+	/* Delete the memory region, the guest should not die. */
+	vm_mem_region_delete(vm, MEM_REGION_SLOT);
+	wait_for_vcpu();
+
+	/* Recreate the memory region.  The guest should see "0". */
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_THP,
+				    MEM_REGION_GPA, MEM_REGION_SLOT,
+				    MEM_REGION_SIZE / getpagesize(), 0);
+	wait_for_vcpu();
+
+	/* Delete the region again so that there's only one memslot left. */
+	vm_mem_region_delete(vm, MEM_REGION_SLOT);
+	wait_for_vcpu();
+
+	/*
+	 * Delete the primary memslot.  This should cause an emulation error or
+	 * shutdown due to the page tables getting nuked.
+	 */
+	vm_mem_region_delete(vm, 0);
+
+	pthread_join(vcpu_thread, NULL);
+
+	run = vcpu_state(vm, VCPU_ID);
+
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_SHUTDOWN ||
+		    run->exit_reason == KVM_EXIT_INTERNAL_ERROR,
+		    "Unexpected exit reason = %d", run->exit_reason);
+
+	vcpu_regs_get(vm, VCPU_ID, &regs);
+
+	TEST_ASSERT(regs.rip >= final_rip_start &&
+		    regs.rip < final_rip_end,
+		    "Bad rip, expected 0x%lx - 0x%lx, got 0x%llx\n",
+		    final_rip_start, final_rip_end, regs.rip);
+
+	kvm_vm_free(vm);
+}
+
 int main(int argc, char *argv[])
 {
 	int i, loops;
@@ -215,8 +301,13 @@ int main(int argc, char *argv[])
 	else
 		loops = 10;
 
+	pr_info("Testing MOVE of in-use region, %d loops\n", loops);
 	for (i = 0; i < loops; i++)
 		test_move_memory_region();
 
+	pr_info("Testing DELETE of in-use region, %d loops\n", loops);
+	for (i = 0; i < loops; i++)
+		test_delete_memory_region();
+
 	return 0;
 }
-- 
2.26.0


* [PATCH 07/10] selftests: kvm: Add vm_get_fd() in kvm_util
From: Sean Christopherson @ 2020-04-10 23:17 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

From: Wainer dos Santos Moschetta <wainersm@redhat.com>

Introduces the vm_get_fd() function in kvm_util which returns
the VM file descriptor.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 tools/testing/selftests/kvm/include/kvm_util.h | 1 +
 tools/testing/selftests/kvm/lib/kvm_util.c     | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index e38d91bd8ec1..53b11d725d81 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -256,6 +256,7 @@ bool vm_is_unrestricted_guest(struct kvm_vm *vm);
 unsigned int vm_get_page_size(struct kvm_vm *vm);
 unsigned int vm_get_page_shift(struct kvm_vm *vm);
 unsigned int vm_get_max_gfn(struct kvm_vm *vm);
+int vm_get_fd(struct kvm_vm *vm);
 
 unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
 unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index ab5b7ea60f4b..33ab0a36d230 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1739,6 +1739,11 @@ unsigned int vm_get_max_gfn(struct kvm_vm *vm)
 	return vm->max_gfn;
 }
 
+int vm_get_fd(struct kvm_vm *vm)
+{
+	return vm->fd;
+}
+
 static unsigned int vm_calc_num_pages(unsigned int num_pages,
 				      unsigned int page_shift,
 				      unsigned int new_page_shift,
-- 
2.26.0


* [PATCH 08/10] KVM: selftests: Add "zero" testcase to set_memory_region_test
From: Sean Christopherson @ 2020-04-10 23:17 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

Add a testcase for running a guest with no memslots to the memory region
test.  The expected result on x86_64 is that the guest will trigger an
internal KVM error due to the initial code fetch encountering a
non-existent memslot and resulting in an emulation failure.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 .../kvm/x86_64/set_memory_region_test.c       | 24 +++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
index b556024af683..c274ce6b4ba2 100644
--- a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
@@ -289,6 +289,28 @@ static void test_delete_memory_region(void)
 	kvm_vm_free(vm);
 }
 
+static void test_zero_memory_regions(void)
+{
+	struct kvm_run *run;
+	struct kvm_vm *vm;
+
+	pr_info("Testing KVM_RUN with zero added memory regions\n");
+
+	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+	vm_vcpu_add(vm, VCPU_ID);
+
+	TEST_ASSERT(!ioctl(vm_get_fd(vm), KVM_SET_NR_MMU_PAGES, 64),
+		    "KVM_SET_NR_MMU_PAGES failed, errno = %d\n", errno);
+
+	vcpu_run(vm, VCPU_ID);
+
+	run = vcpu_state(vm, VCPU_ID);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_INTERNAL_ERROR,
+		    "Unexpected exit_reason = %u\n", run->exit_reason);
+
+	kvm_vm_free(vm);
+}
+
 int main(int argc, char *argv[])
 {
 	int i, loops;
@@ -296,6 +318,8 @@ int main(int argc, char *argv[])
 	/* Tell stdout not to buffer its content */
 	setbuf(stdout, NULL);
 
+	test_zero_memory_regions();
+
 	if (argc > 1)
 		loops = atoi(argv[1]);
 	else
-- 
2.26.0


* [PATCH 09/10] KVM: selftests: Make set_memory_region_test common to all architectures
From: Sean Christopherson @ 2020-04-10 23:17 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

Make set_memory_region_test available on all architectures by wrapping
the bits that are x86-specific in ifdefs.  All architectures can do
no-harm testing of running with zero memslots, and a future testcase
to create the maximum number of memslots will also be architecture
agnostic.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 tools/testing/selftests/kvm/.gitignore              |  2 +-
 tools/testing/selftests/kvm/Makefile                |  4 +++-
 .../kvm/{x86_64 => }/set_memory_region_test.c       | 13 ++++++++++++-
 3 files changed, 16 insertions(+), 3 deletions(-)
 rename tools/testing/selftests/kvm/{x86_64 => }/set_memory_region_test.c (97%)

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 16877c3daabf..5947cc119abc 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -6,7 +6,6 @@
 /x86_64/hyperv_cpuid
 /x86_64/mmio_warning_test
 /x86_64/platform_info_test
-/x86_64/set_memory_region_test
 /x86_64/set_sregs_test
 /x86_64/smm_test
 /x86_64/state_test
@@ -21,4 +20,5 @@
 /demand_paging_test
 /dirty_log_test
 /kvm_create_max_vcpus
+/set_memory_region_test
 /steal_time
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 712a2ddd2a27..7af62030c12f 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -17,7 +17,6 @@ TEST_GEN_PROGS_x86_64 += x86_64/evmcs_test
 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
 TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
-TEST_GEN_PROGS_x86_64 += x86_64/set_memory_region_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
 TEST_GEN_PROGS_x86_64 += x86_64/smm_test
 TEST_GEN_PROGS_x86_64 += x86_64/state_test
@@ -32,12 +31,14 @@ TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
+TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
 
 TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
+TEST_GEN_PROGS_aarch64 += set_memory_region_test
 TEST_GEN_PROGS_aarch64 += steal_time
 
 TEST_GEN_PROGS_s390x = s390x/memop
@@ -46,6 +47,7 @@ TEST_GEN_PROGS_s390x += s390x/sync_regs_test
 TEST_GEN_PROGS_s390x += demand_paging_test
 TEST_GEN_PROGS_s390x += dirty_log_test
 TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
+TEST_GEN_PROGS_s390x += set_memory_region_test
 
 TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
 LIBKVM += $(LIBKVM_$(UNAME_M))
diff --git a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
similarity index 97%
rename from tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
rename to tools/testing/selftests/kvm/set_memory_region_test.c
index c274ce6b4ba2..0f36941ebb96 100644
--- a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -18,6 +18,7 @@
 
 #define VCPU_ID 0
 
+#ifdef __x86_64__
 /*
  * Somewhat arbitrary location and slot, intended to not overlap anything.  The
  * location and size are specifically 2mb sized/aligned so that the initial
@@ -288,6 +289,7 @@ static void test_delete_memory_region(void)
 
 	kvm_vm_free(vm);
 }
+#endif /* __x86_64__ */
 
 static void test_zero_memory_regions(void)
 {
@@ -299,13 +301,18 @@ static void test_zero_memory_regions(void)
 	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
 	vm_vcpu_add(vm, VCPU_ID);
 
+#ifdef __x86_64__
 	TEST_ASSERT(!ioctl(vm_get_fd(vm), KVM_SET_NR_MMU_PAGES, 64),
 		    "KVM_SET_NR_MMU_PAGES failed, errno = %d\n", errno);
-
+#endif
 	vcpu_run(vm, VCPU_ID);
 
 	run = vcpu_state(vm, VCPU_ID);
+#ifdef __x86_64__
 	TEST_ASSERT(run->exit_reason == KVM_EXIT_INTERNAL_ERROR,
+#else
+	TEST_ASSERT(run->exit_reason != KVM_EXIT_UNKNOWN,
+#endif
 		    "Unexpected exit_reason = %u\n", run->exit_reason);
 
 	kvm_vm_free(vm);
@@ -313,13 +320,16 @@ static void test_zero_memory_regions(void)
 
 int main(int argc, char *argv[])
 {
+#ifdef __x86_64__
 	int i, loops;
+#endif
 
 	/* Tell stdout not to buffer its content */
 	setbuf(stdout, NULL);
 
 	test_zero_memory_regions();
 
+#ifdef __x86_64__
 	if (argc > 1)
 		loops = atoi(argv[1]);
 	else
@@ -332,6 +342,7 @@ int main(int argc, char *argv[])
 	pr_info("Testing DELETE of in-use region, %d loops\n", loops);
 	for (i = 0; i < loops; i++)
 		test_delete_memory_region();
+#endif
 
 	return 0;
 }
-- 
2.26.0


* [PATCH 10/10] selftests: kvm: Add testcase for creating max number of memslots
From: Sean Christopherson @ 2020-04-10 23:17 UTC
  To: Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel,
	Sean Christopherson, Peter Xu, Andrew Jones,
	Wainer dos Santos Moschetta

From: Wainer dos Santos Moschetta <wainersm@redhat.com>

This patch introduces test_add_max_memory_regions(), which checks
that memory slots can be added to a VM up to the limit defined by
KVM_CAP_NR_MEMSLOTS, and then attempts to add one more slot to
verify that it fails as expected.
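
For reference, the positional compound literal passed to
KVM_SET_USER_MEMORY_REGION in the test below is equivalent to this
designated-initializer form (illustrative only):

	struct kvm_userspace_memory_region region = {
		.slot			= slot,
		.flags			= 0,
		.guest_phys_addr	= guest_addr,
		.memory_size		= MEM_REGION_SIZE,
		.userspace_addr		= (uint64_t)mem,
	};

	ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION, &region);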

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 .../selftests/kvm/set_memory_region_test.c    | 65 +++++++++++++++++--
 1 file changed, 60 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 0f36941ebb96..cdf5024b2452 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -9,6 +9,7 @@
 #include <stdlib.h>
 #include <string.h>
 #include <sys/ioctl.h>
+#include <sys/mman.h>
 
 #include <linux/compiler.h>
 
@@ -18,14 +19,18 @@
 
 #define VCPU_ID 0
 
-#ifdef __x86_64__
 /*
- * Somewhat arbitrary location and slot, intended to not overlap anything.  The
- * location and size are specifically 2mb sized/aligned so that the initial
- * region corresponds to exactly one large page.
+ * s390x needs at least 1MB alignment, and the x86_64 MOVE/DELETE tests need a
+ * 2MB sized and aligned region so that the initial region corresponds to
+ * exactly one large page.
  */
-#define MEM_REGION_GPA		0xc0000000
 #define MEM_REGION_SIZE		0x200000
+
+#ifdef __x86_64__
+/*
+ * Somewhat arbitrary location and slot, intended to not overlap anything.
+ */
+#define MEM_REGION_GPA		0xc0000000
 #define MEM_REGION_SLOT		10
 
 static const uint64_t MMIO_VAL = 0xbeefull;
@@ -318,6 +323,54 @@ static void test_zero_memory_regions(void)
 	kvm_vm_free(vm);
 }
 
+/*
+ * Test that memory slots can be added up to KVM_CAP_NR_MEMSLOTS, and that
+ * any attempt to add further slots fails.
+ */
+static void test_add_max_memory_regions(void)
+{
+	int ret;
+	struct kvm_vm *vm;
+	uint32_t max_mem_slots;
+	uint32_t slot;
+	uint64_t guest_addr = 0x0;
+	uint64_t mem_reg_npages;
+	void *mem;
+
+	max_mem_slots = kvm_check_cap(KVM_CAP_NR_MEMSLOTS);
+	TEST_ASSERT(max_mem_slots > 0,
+		    "KVM_CAP_NR_MEMSLOTS should be greater than 0");
+	pr_info("Allowed number of memory slots: %i\n", max_mem_slots);
+
+	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+
+	mem_reg_npages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, MEM_REGION_SIZE);
+
+	/* Check that memory slots can be added up to the maximum allowed */
+	pr_info("Adding slots 0..%i, each memory region with %dK size\n",
+		(max_mem_slots - 1), MEM_REGION_SIZE >> 10);
+	for (slot = 0; slot < max_mem_slots; slot++) {
+		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+					    guest_addr, slot, mem_reg_npages,
+					    0);
+		guest_addr += MEM_REGION_SIZE;
+	}
+
+	/* Check that adding one more memory slot beyond the limit fails */
+	mem = mmap(NULL, MEM_REGION_SIZE, PROT_READ | PROT_WRITE,
+		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "Failed to mmap() host");
+
+	ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION,
+		    &(struct kvm_userspace_memory_region) {slot, 0, guest_addr,
+		    MEM_REGION_SIZE, (uint64_t) mem});
+	TEST_ASSERT(ret == -1 && errno == EINVAL,
+		    "Adding one more memory slot should fail with EINVAL");
+
+	munmap(mem, MEM_REGION_SIZE);
+	kvm_vm_free(vm);
+}
+
 int main(int argc, char *argv[])
 {
 #ifdef __x86_64__
@@ -329,6 +382,8 @@ int main(int argc, char *argv[])
 
 	test_zero_memory_regions();
 
+	test_add_max_memory_regions();
+
 #ifdef __x86_64__
 	if (argc > 1)
 		loops = atoi(argv[1]);
-- 
2.26.0


* Re: [PATCH 10/10] selftests: kvm: Add testcase for creating max number of memslots
From: Wainer dos Santos Moschetta @ 2020-04-13 13:22 UTC
  To: Sean Christopherson, Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Andrew Jones


On 4/10/20 8:17 PM, Sean Christopherson wrote:
> From: Wainer dos Santos Moschetta <wainersm@redhat.com>
>
> This patch introduces test_add_max_memory_regions(), which checks
> that memory slots can be added to a VM up to the limit defined by
> KVM_CAP_NR_MEMSLOTS, and then attempts to add one more slot to
> verify that it fails as expected.
>
> Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
> Reviewed-by: Andrew Jones <drjones@redhat.com>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>   .../selftests/kvm/set_memory_region_test.c    | 65 +++++++++++++++++--
>   1 file changed, 60 insertions(+), 5 deletions(-)

Putting the memory region related tests together into a single test file 
makes sense to me.

Acked-by: Wainer dos Santos Moschetta <wainersm@redhat.com>

* Re: [PATCH 01/10] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
From: Wainer dos Santos Moschetta @ 2020-04-13 18:26 UTC
  To: Sean Christopherson, Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Andrew Jones


On 4/10/20 8:16 PM, Sean Christopherson wrote:
> The sole caller of vm_vcpu_rm() already has the vcpu pointer, take it
> directly instead of doing an extra lookup.


Most (if not all) vcpu-related functions in kvm_util.c receive an id,
so this change creates an inconsistency.

Disregarding the above comment, the changes look good to me. So:

Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>


>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>   tools/testing/selftests/kvm/lib/kvm_util.c | 7 +++----
>   1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 8a3523d4434f..9a783c20dd26 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -393,7 +393,7 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
>    *
>    * Input Args:
>    *   vm - Virtual Machine
> - *   vcpuid - VCPU ID
> + *   vcpu - VCPU to remove
>    *
>    * Output Args: None
>    *
> @@ -401,9 +401,8 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
>    *
>    * Within the VM specified by vm, removes the VCPU given by vcpuid.
>    */
> -static void vm_vcpu_rm(struct kvm_vm *vm, uint32_t vcpuid)
> +static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
>   {
> -	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>   	int ret;
>   
>   	ret = munmap(vcpu->state, sizeof(*vcpu->state));
> @@ -427,7 +426,7 @@ void kvm_vm_release(struct kvm_vm *vmp)
>   	int ret;
>   
>   	while (vmp->vcpu_head)
> -		vm_vcpu_rm(vmp, vmp->vcpu_head->id);
> +		vm_vcpu_rm(vmp, vmp->vcpu_head);
>   
>   	ret = close(vmp->fd);
>   	TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 03/10] KVM: selftests: Add util to delete memory region
  2020-04-10 23:17 ` [PATCH 03/10] KVM: selftests: Add util to delete memory region Sean Christopherson
@ 2020-04-13 18:52   ` Wainer dos Santos Moschetta
  2020-04-14 16:04   ` Andrew Jones
  1 sibling, 0 replies; 26+ messages in thread
From: Wainer dos Santos Moschetta @ 2020-04-13 18:52 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Andrew Jones


On 4/10/20 8:17 PM, Sean Christopherson wrote:
> Add a utility to delete a memory region; it will be used by x86's
> set_memory_region_test.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>   .../testing/selftests/kvm/include/kvm_util.h  |  1 +
>   tools/testing/selftests/kvm/lib/kvm_util.c    | 56 +++++++++++++------
>   2 files changed, 40 insertions(+), 17 deletions(-)

LGTM.

Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>

>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 2f329e785c58..d4c3e4d9cd92 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -114,6 +114,7 @@ int _vcpu_ioctl(struct kvm_vm *vm, uint32_t vcpuid, unsigned long ioctl,
>   void vm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg);
>   void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
>   void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
> +void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
>   void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid);
>   vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
>   			  uint32_t data_memslot, uint32_t pgd_memslot);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 105ee9bc09f0..ab5b7ea60f4b 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -433,34 +433,38 @@ void kvm_vm_release(struct kvm_vm *vmp)
>   		"  vmp->kvm_fd: %i rc: %i errno: %i", vmp->kvm_fd, ret, errno);
>   }
>   
> +static void __vm_mem_region_delete(struct kvm_vm *vm,
> +				   struct userspace_mem_region *region)
> +{
> +	int ret;
> +
> +	list_del(&region->list);
> +
> +	region->region.memory_size = 0;
> +	ret = ioctl(vm->fd, KVM_SET_USER_MEMORY_REGION, &region->region);
> +	TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed, "
> +		    "rc: %i errno: %i", ret, errno);
> +
> +	sparsebit_free(&region->unused_phy_pages);
> +	ret = munmap(region->mmap_start, region->mmap_size);
> +	TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i", ret, errno);
> +
> +	free(region);
> +}
> +
>   /*
>    * Destroys and frees the VM pointed to by vmp.
>    */
>   void kvm_vm_free(struct kvm_vm *vmp)
>   {
>   	struct userspace_mem_region *region, *tmp;
> -	int ret;
>   
>   	if (vmp == NULL)
>   		return;
>   
>   	/* Free userspace_mem_regions. */
> -	list_for_each_entry_safe(region, tmp, &vmp->userspace_mem_regions, list) {
> -		list_del(&region->list);
> -
> -		region->region.memory_size = 0;
> -		ret = ioctl(vmp->fd, KVM_SET_USER_MEMORY_REGION,
> -			&region->region);
> -		TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed, "
> -			"rc: %i errno: %i", ret, errno);
> -
> -		sparsebit_free(&region->unused_phy_pages);
> -		ret = munmap(region->mmap_start, region->mmap_size);
> -		TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i",
> -			    ret, errno);
> -
> -		free(region);
> -	}
> +	list_for_each_entry_safe(region, tmp, &vmp->userspace_mem_regions, list)
> +		__vm_mem_region_delete(vmp, region);
>   
>   	/* Free sparsebit arrays. */
>   	sparsebit_free(&vmp->vpages_valid);
> @@ -775,6 +779,24 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
>   		    ret, errno, slot, new_gpa);
>   }
>   
> +/*
> + * VM Memory Region Delete
> + *
> + * Input Args:
> + *   vm - Virtual Machine
> + *   slot - Slot of the memory region to delete
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Delete a memory region.
> + */
> +void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot)
> +{
> +	__vm_mem_region_delete(vm, memslot2region(vm, slot));
> +}
> +
>   /*
>    * VCPU mmap Size
>    *


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/10] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
  2020-04-13 18:26   ` Wainer dos Santos Moschetta
@ 2020-04-13 21:26     ` Sean Christopherson
  2020-04-14  8:25       ` Andrew Jones
  2020-04-15 15:11       ` Paolo Bonzini
  0 siblings, 2 replies; 26+ messages in thread
From: Sean Christopherson @ 2020-04-13 21:26 UTC (permalink / raw)
  To: Wainer dos Santos Moschetta
  Cc: Paolo Bonzini, Christian Borntraeger, Janosch Frank,
	David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Andrew Jones

On Mon, Apr 13, 2020 at 03:26:55PM -0300, Wainer dos Santos Moschetta wrote:
> 
> On 4/10/20 8:16 PM, Sean Christopherson wrote:
> >The sole caller of vm_vcpu_rm() already has the vcpu pointer, take it
> >directly instead of doing an extra lookup.
> 
> 
> Most (if not all) vcpu-related functions in kvm_util.c receive an id, so
> this change creates an inconsistency.

Ya, but taking the id is done out of "necessity", as everything is public
and for whatever reason the design of the selftest framework is to not
expose 'struct vcpu' outside of the utils.  vm_vcpu_rm() is internal only,
IMO pulling the id out of the vcpu just to lookup the same vcpu is a waste
of time.

FWIW, I think the whole vcpuid thing is a bad interface, almost all the
tests end up defining an arbitrary number for the sole VCPU_ID, i.e. the
vcpuid interface just adds a pointless layer of obfuscation.  I haven't
looked through all the tests, but returning the vcpu and making the struct
opaque, same as kvm_vm, seems like it would yield more readable code with
less overhead.
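
A rough sketch of that interface (hypothetical signatures, untested):

  /* 'struct vcpu' stays opaque outside of the utils... */
  struct vcpu;

  /* ...and creating a vCPU hands the pointer straight back to the test. */
  struct vcpu *vm_vcpu_add(struct kvm_vm *vm);

  void vcpu_run(struct vcpu *vcpu);
  struct kvm_run *vcpu_state(struct vcpu *vcpu);

Tests would then pass vcpu pointers around directly instead of defining
an arbitrary VCPU_ID and doing a lookup on every call.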

While I'm on a soapbox, hiding 'struct vcpu' and 'struct kvm_vm' also seems
rather silly, but at least that doesn't directly lead to funky code.

> Disregarding the above comment, the changes look good to me. So:
> 
> Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
> 
> 
> >
> >Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> >---
> >  tools/testing/selftests/kvm/lib/kvm_util.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> >
> >diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> >index 8a3523d4434f..9a783c20dd26 100644
> >--- a/tools/testing/selftests/kvm/lib/kvm_util.c
> >+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> >@@ -393,7 +393,7 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
> >   *
> >   * Input Args:
> >   *   vm - Virtual Machine
> >- *   vcpuid - VCPU ID
> >+ *   vcpu - VCPU to remove
> >   *
> >   * Output Args: None
> >   *
> >@@ -401,9 +401,8 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
> >   *
> >   * Within the VM specified by vm, removes the VCPU given by vcpuid.
> >   */
> >-static void vm_vcpu_rm(struct kvm_vm *vm, uint32_t vcpuid)
> >+static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
> >  {
> >-	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> >  	int ret;
> >  	ret = munmap(vcpu->state, sizeof(*vcpu->state));
> >@@ -427,7 +426,7 @@ void kvm_vm_release(struct kvm_vm *vmp)
> >  	int ret;
> >  	while (vmp->vcpu_head)
> >-		vm_vcpu_rm(vmp, vmp->vcpu_head->id);
> >+		vm_vcpu_rm(vmp, vmp->vcpu_head);
> >  	ret = close(vmp->fd);
> >  	TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/10] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
  2020-04-13 21:26     ` Sean Christopherson
@ 2020-04-14  8:25       ` Andrew Jones
  2020-04-14 13:02         ` Wainer dos Santos Moschetta
  2020-04-15 15:11       ` Paolo Bonzini
  1 sibling, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2020-04-14  8:25 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Wainer dos Santos Moschetta, Paolo Bonzini,
	Christian Borntraeger, Janosch Frank, David Hildenbrand,
	Cornelia Huck, kvm, linux-kernel, Peter Xu

On Mon, Apr 13, 2020 at 02:26:59PM -0700, Sean Christopherson wrote:
> On Mon, Apr 13, 2020 at 03:26:55PM -0300, Wainer dos Santos Moschetta wrote:
> > 
> > On 4/10/20 8:16 PM, Sean Christopherson wrote:
> > >The sole caller of vm_vcpu_rm() already has the vcpu pointer, take it
> > >directly instead of doing an extra lookup.
> > 
> > 
> > Most (if not all) vcpu-related functions in kvm_util.c receive an id, so
> > this change creates an inconsistency.
> 
> Ya, but taking the id is done out of "necessity", as everything is public
> and for whatever reason the design of the selftest framework is to not
> expose 'struct vcpu' outside of the utils.  vm_vcpu_rm() is internal only,
> IMO pulling the id out of the vcpu just to lookup the same vcpu is a waste
> of time.

Agreed

> 
> FWIW, I think the whole vcpuid thing is a bad interface, almost all the
> tests end up defining an arbitrary number for the sole VCPU_ID, i.e. the
> vcpuid interface just adds a pointless layer of obfuscation.  I haven't
> looked through all the tests, but returning the vcpu and making the struct
> opaque, same as kvm_vm, seems like it would yield more readable code with
> less overhead.

Agreed

> 
> While I'm on a soapbox, hiding 'struct vcpu' and 'struct kvm_vm' also seems
> rather silly, but at least that doesn't directly lead to funky code.

Agreed. While the concept has been slowly growing on me, I think accessor
functions for each of the structs' members are growing even faster...

Thanks,
drew

> 
> > Disregarding the above comment, the changes look good to me. So:
> > 
> > Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
> > 
> > 
> > >
> > >Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > >---
> > >  tools/testing/selftests/kvm/lib/kvm_util.c | 7 +++----
> > >  1 file changed, 3 insertions(+), 4 deletions(-)
> > >
> > >diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > >index 8a3523d4434f..9a783c20dd26 100644
> > >--- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > >+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > >@@ -393,7 +393,7 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
> > >   *
> > >   * Input Args:
> > >   *   vm - Virtual Machine
> > >- *   vcpuid - VCPU ID
> > >+ *   vcpu - VCPU to remove
> > >   *
> > >   * Output Args: None
> > >   *
> > >@@ -401,9 +401,8 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
> > >   *
> > >   * Within the VM specified by vm, removes the VCPU given by vcpuid.
> > >   */
> > >-static void vm_vcpu_rm(struct kvm_vm *vm, uint32_t vcpuid)
> > >+static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
> > >  {
> > >-	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> > >  	int ret;
> > >  	ret = munmap(vcpu->state, sizeof(*vcpu->state));
> > >@@ -427,7 +426,7 @@ void kvm_vm_release(struct kvm_vm *vmp)
> > >  	int ret;
> > >  	while (vmp->vcpu_head)
> > >-		vm_vcpu_rm(vmp, vmp->vcpu_head->id);
> > >+		vm_vcpu_rm(vmp, vmp->vcpu_head);
> > >  	ret = close(vmp->fd);
> > >  	TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"
> > 
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/10] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
  2020-04-14  8:25       ` Andrew Jones
@ 2020-04-14 13:02         ` Wainer dos Santos Moschetta
  0 siblings, 0 replies; 26+ messages in thread
From: Wainer dos Santos Moschetta @ 2020-04-14 13:02 UTC (permalink / raw)
  To: Andrew Jones, Sean Christopherson
  Cc: Paolo Bonzini, Christian Borntraeger, Janosch Frank,
	David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu


On 4/14/20 5:25 AM, Andrew Jones wrote:
> On Mon, Apr 13, 2020 at 02:26:59PM -0700, Sean Christopherson wrote:
>> On Mon, Apr 13, 2020 at 03:26:55PM -0300, Wainer dos Santos Moschetta wrote:
>>> On 4/10/20 8:16 PM, Sean Christopherson wrote:
>>>> The sole caller of vm_vcpu_rm() already has the vcpu pointer, take it
>>>> directly instead of doing an extra lookup.
>>>
>>> Most (if not all) vcpu-related functions in kvm_util.c receive an id, so
>>> this change creates an inconsistency.
>> Ya, but taking the id is done out of "necessity", as everything is public
>> and for whatever reason the design of the selftest framework is to not
>> expose 'struct vcpu' outside of the utils.  vm_vcpu_rm() is internal only,
>> IMO pulling the id out of the vcpu just to lookup the same vcpu is a waste
>> of time.
> Agreed


Thanks Sean and Andrew for your comments. I'm not in a position to
change or propose any redesign of the kvm selftests, but I nonetheless
aimed to foster this discussion.

So, please, consider my Reviewed-by...

- Wainer


>
>> FWIW, I think the whole vcpuid thing is a bad interface, almost all the
>> tests end up defining an arbitrary number for the sole VCPU_ID, i.e. the
>> vcpuid interface just adds a pointless layer of obfuscation.  I haven't
>> looked through all the tests, but returning the vcpu and making the struct
>> opaque, same as kvm_vm, seems like it would yield more readable code with
>> less overhead.
> Agreed
>
>> While I'm on a soapbox, hiding 'struct vcpu' and 'struct kvm_vm' also seems
>> rather silly, but at least that doesn't directly lead to funky code.
> Agreed. While the concept has been slowly growing on me, I think accessor
> functions for each of the structs' members are growing even faster...
>
> Thanks,
> drew
>
>>> Disregarding the above comment, the changes look good to me. So:
>>>
>>> Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
>>>
>>>
>>>> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
>>>> ---
>>>>   tools/testing/selftests/kvm/lib/kvm_util.c | 7 +++----
>>>>   1 file changed, 3 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
>>>> index 8a3523d4434f..9a783c20dd26 100644
>>>> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
>>>> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
>>>> @@ -393,7 +393,7 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
>>>>    *
>>>>    * Input Args:
>>>>    *   vm - Virtual Machine
>>>> - *   vcpuid - VCPU ID
>>>> + *   vcpu - VCPU to remove
>>>>    *
>>>>    * Output Args: None
>>>>    *
>>>> @@ -401,9 +401,8 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
>>>>    *
>>>>    * Within the VM specified by vm, removes the VCPU given by vcpuid.
>>>>    */
>>>> -static void vm_vcpu_rm(struct kvm_vm *vm, uint32_t vcpuid)
>>>> +static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
>>>>   {
>>>> -	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>>>>   	int ret;
>>>>   	ret = munmap(vcpu->state, sizeof(*vcpu->state));
>>>> @@ -427,7 +426,7 @@ void kvm_vm_release(struct kvm_vm *vmp)
>>>>   	int ret;
>>>>   	while (vmp->vcpu_head)
>>>> -		vm_vcpu_rm(vmp, vmp->vcpu_head->id);
>>>> +		vm_vcpu_rm(vmp, vmp->vcpu_head);
>>>>   	ret = close(vmp->fd);
>>>>   	TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 09/10] KVM: selftests: Make set_memory_region_test common to all architectures
  2020-04-10 23:17 ` [PATCH 09/10] KVM: selftests: Make set_memory_region_test common to all architectures Sean Christopherson
@ 2020-04-14 14:43   ` Wainer dos Santos Moschetta
  0 siblings, 0 replies; 26+ messages in thread
From: Wainer dos Santos Moschetta @ 2020-04-14 14:43 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Andrew Jones

Hi Sean,

On 4/10/20 8:17 PM, Sean Christopherson wrote:
> Make set_memory_region_test available on all architectures by wrapping
> the bits that are x86-specific in ifdefs.  All architectures can do
> no-harm testing of running with zero memslots, and a future testcase
> to create the maximum number of memslots will also be architecture
> agnostic.

I successfully compiled this series on aarch64 and s390x. However, the
zero memslot test fails on both arches in vcpu_run().

The machines I borrowed have RHEL-8.1.0 installed (kernel 4.18.0-147).
Perhaps I am using too old a kernel? Anyway, I am trying to get at least
an aarch64 box with a newer kernel to double-check.

The error on aarch64:

Testing KVM_RUN with zero added memory regions
==== Test Assertion Failure ====
   lib/kvm_util.c:1179: ret == 0
   pid=83625 tid=83625 - Exec format error
      1    0x000000000040114f: test_zero_memory_regions at 
set_memory_region_test.c:313
      2     (inlined by) main at set_memory_region_test.c:383
      3    0x0000ffff92e70d63: ?? ??:0
      4    0x0000000000401367: _start at :?
   KVM_RUN IOCTL failed, rc: -1 errno: 8

And on s390x:

Testing KVM_RUN with zero added memory regions
==== Test Assertion Failure ====
   lib/kvm_util.c:1179: ret == 0
   pid=41263 tid=-1 - Invalid argument
      1    0x00000000010029b5: vcpu_run at kvm_util.c:1178
      2    0x0000000001001563: test_zero_memory_regions at 
set_memory_region_test.c:313
      3     (inlined by) main at set_memory_region_test.c:383
      4    0x000003ffb80a3611: ?? ??:0
      5    0x00000000010017bd: .annobin_init.c.hot at crt1.o:?
      6    0xffffffffffffffff: ?? ??:0
   KVM_RUN IOCTL failed, rc: -1 errno: 14

Thanks,

Wainer

>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>   tools/testing/selftests/kvm/.gitignore              |  2 +-
>   tools/testing/selftests/kvm/Makefile                |  4 +++-
>   .../kvm/{x86_64 => }/set_memory_region_test.c       | 13 ++++++++++++-
>   3 files changed, 16 insertions(+), 3 deletions(-)
>   rename tools/testing/selftests/kvm/{x86_64 => }/set_memory_region_test.c (97%)
>
> diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
> index 16877c3daabf..5947cc119abc 100644
> --- a/tools/testing/selftests/kvm/.gitignore
> +++ b/tools/testing/selftests/kvm/.gitignore
> @@ -6,7 +6,6 @@
>   /x86_64/hyperv_cpuid
>   /x86_64/mmio_warning_test
>   /x86_64/platform_info_test
> -/x86_64/set_memory_region_test
>   /x86_64/set_sregs_test
>   /x86_64/smm_test
>   /x86_64/state_test
> @@ -21,4 +20,5 @@
>   /demand_paging_test
>   /dirty_log_test
>   /kvm_create_max_vcpus
> +/set_memory_region_test
>   /steal_time
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 712a2ddd2a27..7af62030c12f 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -17,7 +17,6 @@ TEST_GEN_PROGS_x86_64 += x86_64/evmcs_test
>   TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid
>   TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
>   TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
> -TEST_GEN_PROGS_x86_64 += x86_64/set_memory_region_test
>   TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
>   TEST_GEN_PROGS_x86_64 += x86_64/smm_test
>   TEST_GEN_PROGS_x86_64 += x86_64/state_test
> @@ -32,12 +31,14 @@ TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
>   TEST_GEN_PROGS_x86_64 += demand_paging_test
>   TEST_GEN_PROGS_x86_64 += dirty_log_test
>   TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
> +TEST_GEN_PROGS_x86_64 += set_memory_region_test
>   TEST_GEN_PROGS_x86_64 += steal_time
>   
>   TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
>   TEST_GEN_PROGS_aarch64 += demand_paging_test
>   TEST_GEN_PROGS_aarch64 += dirty_log_test
>   TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
> +TEST_GEN_PROGS_aarch64 += set_memory_region_test
>   TEST_GEN_PROGS_aarch64 += steal_time
>   
>   TEST_GEN_PROGS_s390x = s390x/memop
> @@ -46,6 +47,7 @@ TEST_GEN_PROGS_s390x += s390x/sync_regs_test
>   TEST_GEN_PROGS_s390x += demand_paging_test
>   TEST_GEN_PROGS_s390x += dirty_log_test
>   TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
> +TEST_GEN_PROGS_s390x += set_memory_region_test
>   
>   TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
>   LIBKVM += $(LIBKVM_$(UNAME_M))
> diff --git a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
> similarity index 97%
> rename from tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
> rename to tools/testing/selftests/kvm/set_memory_region_test.c
> index c274ce6b4ba2..0f36941ebb96 100644
> --- a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
> +++ b/tools/testing/selftests/kvm/set_memory_region_test.c
> @@ -18,6 +18,7 @@
>   
>   #define VCPU_ID 0
>   
> +#ifdef __x86_64__
>   /*
>    * Somewhat arbitrary location and slot, intended to not overlap anything.  The
>    * location and size are specifically 2mb sized/aligned so that the initial
> @@ -288,6 +289,7 @@ static void test_delete_memory_region(void)
>   
>   	kvm_vm_free(vm);
>   }
> +#endif /* __x86_64__ */
>   
>   static void test_zero_memory_regions(void)
>   {
> @@ -299,13 +301,18 @@ static void test_zero_memory_regions(void)
>   	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
>   	vm_vcpu_add(vm, VCPU_ID);
>   
> +#ifdef __x86_64__
>   	TEST_ASSERT(!ioctl(vm_get_fd(vm), KVM_SET_NR_MMU_PAGES, 64),
>   		    "KVM_SET_NR_MMU_PAGES failed, errno = %d\n", errno);
> -
> +#endif
>   	vcpu_run(vm, VCPU_ID);
>   
>   	run = vcpu_state(vm, VCPU_ID);
> +#ifdef __x86_64__
>   	TEST_ASSERT(run->exit_reason == KVM_EXIT_INTERNAL_ERROR,
> +#else
> +	TEST_ASSERT(run->exit_reason != KVM_EXIT_UNKNOWN,
> +#endif
>   		    "Unexpected exit_reason = %u\n", run->exit_reason);
>   
>   	kvm_vm_free(vm);
> @@ -313,13 +320,16 @@ static void test_zero_memory_regions(void)
>   
>   int main(int argc, char *argv[])
>   {
> +#ifdef __x86_64__
>   	int i, loops;
> +#endif
>   
>   	/* Tell stdout not to buffer its content */
>   	setbuf(stdout, NULL);
>   
>   	test_zero_memory_regions();
>   
> +#ifdef __x86_64__
>   	if (argc > 1)
>   		loops = atoi(argv[1]);
>   	else
> @@ -332,6 +342,7 @@ int main(int argc, char *argv[])
>   	pr_info("Testing DELETE of in-use region, %d loops\n", loops);
>   	for (i = 0; i < loops; i++)
>   		test_delete_memory_region();
> +#endif
>   
>   	return 0;
>   }


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 04/10] KVM: selftests: Add GUEST_ASSERT variants to pass values to host
  2020-04-10 23:17 ` [PATCH 04/10] KVM: selftests: Add GUEST_ASSERT variants to pass values to host Sean Christopherson
@ 2020-04-14 16:02   ` Andrew Jones
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2020-04-14 16:02 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Christian Borntraeger, Janosch Frank,
	David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Wainer dos Santos Moschetta

On Fri, Apr 10, 2020 at 04:17:01PM -0700, Sean Christopherson wrote:
> Add variants of GUEST_ASSERT to pass values back to the host, e.g. to
> help debug/understand a failure when the cause of the assert isn't
> necessarily binary.
> 
> It'd probably be possible to auto-calculate the number of arguments and
> just have a single GUEST_ASSERT, but there are a limited number of
> variants and silently eating arguments could lead to subtle code bugs.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  .../testing/selftests/kvm/include/kvm_util.h  | 25 +++++++++++++++----
>  1 file changed, 20 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index d4c3e4d9cd92..e38d91bd8ec1 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -313,11 +313,26 @@ uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc);
>  
>  #define GUEST_SYNC(stage)	ucall(UCALL_SYNC, 2, "hello", stage)
>  #define GUEST_DONE()		ucall(UCALL_DONE, 0)
> -#define GUEST_ASSERT(_condition) do {			\
> -	if (!(_condition))				\
> -		ucall(UCALL_ABORT, 2,			\
> -			"Failed guest assert: "		\
> -			#_condition, __LINE__);		\
> +#define __GUEST_ASSERT(_condition, _nargs, _args...) do {	\
> +	if (!(_condition))					\
> +		ucall(UCALL_ABORT, 2 + _nargs,			\

Need () around _nargs
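
E.g. (illustrative only; the current callers all pass literal counts),
if _nargs were ever an expression like 'n ? 1 : 2', then

  ucall(UCALL_ABORT, 2 + n ? 1 : 2, ...)

would parse as '(2 + n) ? 1 : 2' due to precedence, whereas
'2 + (_nargs)' keeps the intended grouping.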

> +			"Failed guest assert: "			\
> +			#_condition, __LINE__, _args);		\
>  } while (0)

We can free up another arg and add __FILE__. Something like the following
(untested):

 #include "linux/stringify.h"

 #define __GUEST_ASSERT(_condition, _nargs, _args...) do {                            \
       if (!(_condition))                                                             \
               ucall(UCALL_ABORT, (_nargs) + 1,                                       \
                     "Failed guest assert: "                                          \
                     #_condition " at " __FILE__ ":" __stringify(__LINE__), _args);   \
 } while (0)

>  
> +#define GUEST_ASSERT(_condition) \
> +	__GUEST_ASSERT((_condition), 0, 0)
> +
> +#define GUEST_ASSERT_1(_condition, arg1) \
> +	__GUEST_ASSERT((_condition), 1, (arg1))
> +
> +#define GUEST_ASSERT_2(_condition, arg1, arg2) \
> +	__GUEST_ASSERT((_condition), 2, (arg1), (arg2))
> +
> +#define GUEST_ASSERT_3(_condition, arg1, arg2, arg3) \
> +	__GUEST_ASSERT((_condition), 3, (arg1), (arg2), (arg3))
> +
> +#define GUEST_ASSERT_4(_condition, arg1, arg2, arg3, arg4) \
> +	__GUEST_ASSERT((_condition), 4, (arg1), (arg2), (arg3), (arg4))
> +

nit: don't need the () around any of the macro params above

>  #endif /* SELFTEST_KVM_UTIL_H */
> -- 
> 2.26.0
> 

We could instead add test-specific ucalls. To do so, we should first add
UCALL_TEST_SPECIFIC = 32 to the ucall enum, and then tests could extend it

 enum {
  MY_UCALL_1 = UCALL_TEST_SPECIFIC,
  MY_UCALL_2,
 };

With appropriately named test-specific ucalls the code may end up
clearer. At least GUEST_ASSERT_<N> wouldn't take on a new meaning in
each test.
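
For example (untested sketch, made-up names), a test could then do:

  enum {
          UCALL_EXPECT_MMIO = UCALL_TEST_SPECIFIC,
  };

  /* guest side */
  ucall(UCALL_EXPECT_MMIO, 1, val);

  /* host side */
  struct ucall uc;

  switch (get_ucall(vm, VCPU_ID, &uc)) {
  case UCALL_EXPECT_MMIO:
          TEST_ASSERT(uc.args[0] == MMIO_VAL, "val = 0x%lx", uc.args[0]);
          break;
  }

which makes the guest/host handshake explicit in the test itself.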

Thanks,
drew


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 01/10] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
  2020-04-10 23:16 ` [PATCH 01/10] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm() Sean Christopherson
  2020-04-13 18:26   ` Wainer dos Santos Moschetta
@ 2020-04-14 16:02   ` Andrew Jones
  1 sibling, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2020-04-14 16:02 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Christian Borntraeger, Janosch Frank,
	David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Wainer dos Santos Moschetta

On Fri, Apr 10, 2020 at 04:16:58PM -0700, Sean Christopherson wrote:
> The sole caller of vm_vcpu_rm() already has the vcpu pointer, take it
> directly instead of doing an extra lookup.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 8a3523d4434f..9a783c20dd26 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -393,7 +393,7 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
>   *
>   * Input Args:
>   *   vm - Virtual Machine
> - *   vcpuid - VCPU ID
> + *   vcpu - VCPU to remove
>   *
>   * Output Args: None
>   *
> @@ -401,9 +401,8 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
>   *
>   * Within the VM specified by vm, removes the VCPU given by vcpuid.
>   */
> -static void vm_vcpu_rm(struct kvm_vm *vm, uint32_t vcpuid)
> +static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
>  {
> -	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
>  	int ret;
>  
>  	ret = munmap(vcpu->state, sizeof(*vcpu->state));
> @@ -427,7 +426,7 @@ void kvm_vm_release(struct kvm_vm *vmp)
>  	int ret;
>  
>  	while (vmp->vcpu_head)
> -		vm_vcpu_rm(vmp, vmp->vcpu_head->id);
> +		vm_vcpu_rm(vmp, vmp->vcpu_head);
>  
>  	ret = close(vmp->fd);
>  	TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"
> -- 
> 2.26.0
>

Reviewed-by: Andrew Jones <drjones@redhat.com>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 02/10] KVM: selftests: Use kernel's list instead of homebrewed replacement
  2020-04-10 23:16 ` [PATCH 02/10] KVM: selftests: Use kernel's list instead of homebrewed replacement Sean Christopherson
@ 2020-04-14 16:03   ` Andrew Jones
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2020-04-14 16:03 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Christian Borntraeger, Janosch Frank,
	David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Wainer dos Santos Moschetta

On Fri, Apr 10, 2020 at 04:16:59PM -0700, Sean Christopherson wrote:
> Replace the KVM selftests' homebrewed linked lists for vCPUs and memory
> regions with the kernel's 'struct list_head'.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  .../testing/selftests/kvm/include/kvm_util.h  |  1 +
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 94 ++++++++-----------
>  .../selftests/kvm/lib/kvm_util_internal.h     |  8 +-
>  .../selftests/kvm/lib/s390x/processor.c       |  5 +-
>  4 files changed, 48 insertions(+), 60 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index a99b875f50d2..2f329e785c58 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -10,6 +10,7 @@
>  #include "test_util.h"
>  
>  #include "asm/kvm.h"
> +#include "linux/list.h"
>  #include "linux/kvm.h"
>  #include <sys/ioctl.h>
>  
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 9a783c20dd26..105ee9bc09f0 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -161,6 +161,9 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
>  	vm = calloc(1, sizeof(*vm));
>  	TEST_ASSERT(vm != NULL, "Insufficient Memory");
>  
> +	INIT_LIST_HEAD(&vm->vcpus);
> +	INIT_LIST_HEAD(&vm->userspace_mem_regions);
> +
>  	vm->mode = mode;
>  	vm->type = 0;
>  
> @@ -258,8 +261,7 @@ void kvm_vm_restart(struct kvm_vm *vmp, int perm)
>  	if (vmp->has_irqchip)
>  		vm_create_irqchip(vmp);
>  
> -	for (region = vmp->userspace_mem_region_head; region;
> -		region = region->next) {
> +	list_for_each_entry(region, &vmp->userspace_mem_regions, list) {
>  		int ret = ioctl(vmp->fd, KVM_SET_USER_MEMORY_REGION, &region->region);
>  		TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed,\n"
>  			    "  rc: %i errno: %i\n"
> @@ -319,8 +321,7 @@ userspace_mem_region_find(struct kvm_vm *vm, uint64_t start, uint64_t end)
>  {
>  	struct userspace_mem_region *region;
>  
> -	for (region = vm->userspace_mem_region_head; region;
> -		region = region->next) {
> +	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
>  		uint64_t existing_start = region->region.guest_phys_addr;
>  		uint64_t existing_end = region->region.guest_phys_addr
>  			+ region->region.memory_size - 1;
> @@ -378,11 +379,11 @@ kvm_userspace_memory_region_find(struct kvm_vm *vm, uint64_t start,
>   */
>  struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
>  {
> -	struct vcpu *vcpup;
> +	struct vcpu *vcpu;
>  
> -	for (vcpup = vm->vcpu_head; vcpup; vcpup = vcpup->next) {
> -		if (vcpup->id == vcpuid)
> -			return vcpup;
> +	list_for_each_entry(vcpu, &vm->vcpus, list) {
> +		if (vcpu->id == vcpuid)
> +			return vcpu;
>  	}
>  
>  	return NULL;
> @@ -392,16 +393,15 @@ struct vcpu *vcpu_find(struct kvm_vm *vm, uint32_t vcpuid)
>   * VM VCPU Remove
>   *
>   * Input Args:
> - *   vm - Virtual Machine
>   *   vcpu - VCPU to remove
>   *
>   * Output Args: None
>   *
>   * Return: None, TEST_ASSERT failures for all error conditions
>   *
> - * Within the VM specified by vm, removes the VCPU given by vcpuid.
> + * Removes a vCPU from a VM and frees its resources.
>   */
> -static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
> +static void vm_vcpu_rm(struct vcpu *vcpu)
>  {
>  	int ret;
>  
> @@ -412,21 +412,17 @@ static void vm_vcpu_rm(struct kvm_vm *vm, struct vcpu *vcpu)
>  	TEST_ASSERT(ret == 0, "Close of VCPU fd failed, rc: %i "
>  		"errno: %i", ret, errno);
>  
> -	if (vcpu->next)
> -		vcpu->next->prev = vcpu->prev;
> -	if (vcpu->prev)
> -		vcpu->prev->next = vcpu->next;
> -	else
> -		vm->vcpu_head = vcpu->next;
> +	list_del(&vcpu->list);
>  	free(vcpu);
>  }
>  
>  void kvm_vm_release(struct kvm_vm *vmp)
>  {
> +	struct vcpu *vcpu, *tmp;
>  	int ret;
>  
> -	while (vmp->vcpu_head)
> -		vm_vcpu_rm(vmp, vmp->vcpu_head);
> +	list_for_each_entry_safe(vcpu, tmp, &vmp->vcpus, list)
> +		vm_vcpu_rm(vcpu);
>  
>  	ret = close(vmp->fd);
>  	TEST_ASSERT(ret == 0, "Close of vm fd failed,\n"
> @@ -442,15 +438,15 @@ void kvm_vm_release(struct kvm_vm *vmp)
>   */
>  void kvm_vm_free(struct kvm_vm *vmp)
>  {
> +	struct userspace_mem_region *region, *tmp;
>  	int ret;
>  
>  	if (vmp == NULL)
>  		return;
>  
>  	/* Free userspace_mem_regions. */
> -	while (vmp->userspace_mem_region_head) {
> -		struct userspace_mem_region *region
> -			= vmp->userspace_mem_region_head;
> +	list_for_each_entry_safe(region, tmp, &vmp->userspace_mem_regions, list) {
> +		list_del(&region->list);
>  
>  		region->region.memory_size = 0;
>  		ret = ioctl(vmp->fd, KVM_SET_USER_MEMORY_REGION,
> @@ -458,7 +454,6 @@ void kvm_vm_free(struct kvm_vm *vmp)
>  		TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed, "
>  			"rc: %i errno: %i", ret, errno);
>  
> -		vmp->userspace_mem_region_head = region->next;
>  		sparsebit_free(&region->unused_phy_pages);
>  		ret = munmap(region->mmap_start, region->mmap_size);
>  		TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i",
> @@ -611,12 +606,10 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  			(uint64_t) region->region.memory_size);
>  
>  	/* Confirm no region with the requested slot already exists. */
> -	for (region = vm->userspace_mem_region_head; region;
> -		region = region->next) {
> -		if (region->region.slot == slot)
> -			break;
> -	}
> -	if (region != NULL)
> +	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
> +		if (region->region.slot != slot)
> +			continue;
> +
>  		TEST_FAIL("A mem region with the requested slot "
>  			"already exists.\n"
>  			"  requested slot: %u paddr: 0x%lx npages: 0x%lx\n"
> @@ -625,6 +618,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  			region->region.slot,
>  			(uint64_t) region->region.guest_phys_addr,
>  			(uint64_t) region->region.memory_size);
> +	}
>  
>  	/* Allocate and initialize new mem region structure. */
>  	region = calloc(1, sizeof(*region));
> @@ -685,10 +679,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  		guest_paddr, (uint64_t) region->region.memory_size);
>  
>  	/* Add to linked-list of memory regions. */
> -	if (vm->userspace_mem_region_head)
> -		vm->userspace_mem_region_head->prev = region;
> -	region->next = vm->userspace_mem_region_head;
> -	vm->userspace_mem_region_head = region;
> +	list_add(&region->list, &vm->userspace_mem_regions);
>  }
>  
>  /*
> @@ -711,20 +702,17 @@ memslot2region(struct kvm_vm *vm, uint32_t memslot)
>  {
>  	struct userspace_mem_region *region;
>  
> -	for (region = vm->userspace_mem_region_head; region;
> -		region = region->next) {
> +	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
>  		if (region->region.slot == memslot)
> -			break;
> -	}
> -	if (region == NULL) {
> -		fprintf(stderr, "No mem region with the requested slot found,\n"
> -			"  requested slot: %u\n", memslot);
> -		fputs("---- vm dump ----\n", stderr);
> -		vm_dump(stderr, vm, 2);
> -		TEST_FAIL("Mem region not found");
> +			return region;
>  	}
>  
> -	return region;
> +	fprintf(stderr, "No mem region with the requested slot found,\n"
> +		"  requested slot: %u\n", memslot);
> +	fputs("---- vm dump ----\n", stderr);
> +	vm_dump(stderr, vm, 2);
> +	TEST_FAIL("Mem region not found");
> +	return NULL;
>  }
>  
>  /*
> @@ -862,10 +850,7 @@ void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid)
>  		"vcpu id: %u errno: %i", vcpuid, errno);
>  
>  	/* Add to linked-list of VCPUs. */
> -	if (vm->vcpu_head)
> -		vm->vcpu_head->prev = vcpu;
> -	vcpu->next = vm->vcpu_head;
> -	vm->vcpu_head = vcpu;
> +	list_add(&vcpu->list, &vm->vcpus);
>  }
>  
>  /*
> @@ -1058,8 +1043,8 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
>  void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
>  {
>  	struct userspace_mem_region *region;
> -	for (region = vm->userspace_mem_region_head; region;
> -	     region = region->next) {
> +
> +	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
>  		if ((gpa >= region->region.guest_phys_addr)
>  			&& (gpa <= (region->region.guest_phys_addr
>  				+ region->region.memory_size - 1)))
> @@ -1091,8 +1076,8 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
>  vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
>  {
>  	struct userspace_mem_region *region;
> -	for (region = vm->userspace_mem_region_head; region;
> -	     region = region->next) {
> +
> +	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
>  		if ((hva >= region->host_mem)
>  			&& (hva <= (region->host_mem
>  				+ region->region.memory_size - 1)))
> @@ -1519,8 +1504,7 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
>  	fprintf(stream, "%*sfd: %i\n", indent, "", vm->fd);
>  	fprintf(stream, "%*spage_size: 0x%x\n", indent, "", vm->page_size);
>  	fprintf(stream, "%*sMem Regions:\n", indent, "");
> -	for (region = vm->userspace_mem_region_head; region;
> -		region = region->next) {
> +	list_for_each_entry(region, &vm->userspace_mem_regions, list) {
>  		fprintf(stream, "%*sguest_phys: 0x%lx size: 0x%lx "
>  			"host_virt: %p\n", indent + 2, "",
>  			(uint64_t) region->region.guest_phys_addr,
> @@ -1539,7 +1523,7 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
>  		virt_dump(stream, vm, indent + 4);
>  	}
>  	fprintf(stream, "%*sVCPUs:\n", indent, "");
> -	for (vcpu = vm->vcpu_head; vcpu; vcpu = vcpu->next)
> +	list_for_each_entry(vcpu, &vm->vcpus, list)
>  		vcpu_dump(stream, vm, vcpu->id, indent + 2);
>  }
>  
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
> index ca56a0133127..2ef446520748 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h
> +++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
> @@ -13,7 +13,6 @@
>  #define KVM_DEV_PATH		"/dev/kvm"
>  
>  struct userspace_mem_region {
> -	struct userspace_mem_region *next, *prev;
>  	struct kvm_userspace_memory_region region;
>  	struct sparsebit *unused_phy_pages;
>  	int fd;
> @@ -21,10 +20,11 @@ struct userspace_mem_region {
>  	void *host_mem;
>  	void *mmap_start;
>  	size_t mmap_size;
> +	struct list_head list;
>  };
>  
>  struct vcpu {
> -	struct vcpu *next, *prev;
> +	struct list_head list;
>  	uint32_t id;
>  	int fd;
>  	struct kvm_run *state;
> @@ -41,8 +41,8 @@ struct kvm_vm {
>  	unsigned int pa_bits;
>  	unsigned int va_bits;
>  	uint64_t max_gfn;
> -	struct vcpu *vcpu_head;
> -	struct userspace_mem_region *userspace_mem_region_head;
> +	struct list_head vcpus;
> +	struct list_head userspace_mem_regions;
>  	struct sparsebit *vpages_valid;
>  	struct sparsebit *vpages_mapped;
>  	bool has_irqchip;
> diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c
> index 8d94961bd046..a88c5d665725 100644
> --- a/tools/testing/selftests/kvm/lib/s390x/processor.c
> +++ b/tools/testing/selftests/kvm/lib/s390x/processor.c
> @@ -233,7 +233,10 @@ void vcpu_args_set(struct kvm_vm *vm, uint32_t vcpuid, unsigned int num, ...)
>  
>  void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid, uint8_t indent)
>  {
> -	struct vcpu *vcpu = vm->vcpu_head;
> +	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
> +
> +	if (!vcpu)
> +		return;
>  
>  	fprintf(stream, "%*spstate: psw: 0x%.16llx:0x%.16llx\n",
>  		indent, "", vcpu->state->psw_mask, vcpu->state->psw_addr);
> -- 
> 2.26.0
>

Reviewed-by: Andrew Jones <drjones@redhat.com>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 03/10] KVM: selftests: Add util to delete memory region
  2020-04-10 23:17 ` [PATCH 03/10] KVM: selftests: Add util to delete memory region Sean Christopherson
  2020-04-13 18:52   ` Wainer dos Santos Moschetta
@ 2020-04-14 16:04   ` Andrew Jones
  1 sibling, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2020-04-14 16:04 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Christian Borntraeger, Janosch Frank,
	David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Wainer dos Santos Moschetta

On Fri, Apr 10, 2020 at 04:17:00PM -0700, Sean Christopherson wrote:
> Add a utility to delete a memory region; it will be used by x86's
> set_memory_region_test.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  .../testing/selftests/kvm/include/kvm_util.h  |  1 +
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 56 +++++++++++++------
>  2 files changed, 40 insertions(+), 17 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 2f329e785c58..d4c3e4d9cd92 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -114,6 +114,7 @@ int _vcpu_ioctl(struct kvm_vm *vm, uint32_t vcpuid, unsigned long ioctl,
>  void vm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg);
>  void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
>  void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
> +void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
>  void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid);
>  vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
>  			  uint32_t data_memslot, uint32_t pgd_memslot);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 105ee9bc09f0..ab5b7ea60f4b 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -433,34 +433,38 @@ void kvm_vm_release(struct kvm_vm *vmp)
>  		"  vmp->kvm_fd: %i rc: %i errno: %i", vmp->kvm_fd, ret, errno);
>  }
>  
> +static void __vm_mem_region_delete(struct kvm_vm *vm,
> +				   struct userspace_mem_region *region)
> +{
> +	int ret;
> +
> +	list_del(&region->list);
> +
> +	region->region.memory_size = 0;
> +	ret = ioctl(vm->fd, KVM_SET_USER_MEMORY_REGION, &region->region);
> +	TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed, "
> +		    "rc: %i errno: %i", ret, errno);
> +
> +	sparsebit_free(&region->unused_phy_pages);
> +	ret = munmap(region->mmap_start, region->mmap_size);
> +	TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i", ret, errno);
> +
> +	free(region);
> +}
> +
>  /*
>   * Destroys and frees the VM pointed to by vmp.
>   */
>  void kvm_vm_free(struct kvm_vm *vmp)
>  {
>  	struct userspace_mem_region *region, *tmp;
> -	int ret;
>  
>  	if (vmp == NULL)
>  		return;
>  
>  	/* Free userspace_mem_regions. */
> -	list_for_each_entry_safe(region, tmp, &vmp->userspace_mem_regions, list) {
> -		list_del(&region->list);
> -
> -		region->region.memory_size = 0;
> -		ret = ioctl(vmp->fd, KVM_SET_USER_MEMORY_REGION,
> -			&region->region);
> -		TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed, "
> -			"rc: %i errno: %i", ret, errno);
> -
> -		sparsebit_free(&region->unused_phy_pages);
> -		ret = munmap(region->mmap_start, region->mmap_size);
> -		TEST_ASSERT(ret == 0, "munmap failed, rc: %i errno: %i",
> -			    ret, errno);
> -
> -		free(region);
> -	}
> +	list_for_each_entry_safe(region, tmp, &vmp->userspace_mem_regions, list)
> +		__vm_mem_region_delete(vmp, region);
>  
>  	/* Free sparsebit arrays. */
>  	sparsebit_free(&vmp->vpages_valid);
> @@ -775,6 +779,24 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
>  		    ret, errno, slot, new_gpa);
>  }
>  
> +/*
> + * VM Memory Region Delete
> + *
> + * Input Args:
> + *   vm - Virtual Machine
> + *   slot - Slot of the memory region to delete
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Delete a memory region.
> + */
> +void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot)
> +{
> +	__vm_mem_region_delete(vm, memslot2region(vm, slot));
> +}
> +
>  /*
>   * VCPU mmap Size
>   *
> -- 
> 2.26.0
>

Reviewed-by: Andrew Jones <drjones@redhat.com>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 06/10] KVM: selftests: Add "delete" testcase to set_memory_region_test
  2020-04-10 23:17 ` [PATCH 06/10] KVM: selftests: Add "delete" testcase to set_memory_region_test Sean Christopherson
@ 2020-04-14 16:19   ` Andrew Jones
  2020-04-14 16:29     ` Andrew Jones
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2020-04-14 16:19 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Christian Borntraeger, Janosch Frank,
	David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Wainer dos Santos Moschetta

On Fri, Apr 10, 2020 at 04:17:03PM -0700, Sean Christopherson wrote:
> Add a testcase for deleting memslots while the guest is running.
> Like the "move" testcase, this is x86_64-only as it relies on MMIO
> happening when a non-existent memslot is encountered.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  .../kvm/x86_64/set_memory_region_test.c       | 91 +++++++++++++++++++
>  1 file changed, 91 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
> index 629dd8579b73..b556024af683 100644
> --- a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
> @@ -29,6 +29,9 @@
>  
>  static const uint64_t MMIO_VAL = 0xbeefull;
>  
> +extern const uint64_t final_rip_start;
> +extern const uint64_t final_rip_end;
> +
>  static sem_t vcpu_ready;
>  
>  static inline uint64_t guest_spin_on_val(uint64_t spin_val)

We don't have guest_spin_on_val(), so it looks like this patch applies
against an older version of this series? But I don't know where the
wait_for_vcpu() called below comes from.

Thanks,
drew


> @@ -203,6 +206,89 @@ static void test_move_memory_region(void)
>  	kvm_vm_free(vm);
>  }
>  
> +static void guest_code_delete_memory_region(void)
> +{
> +	uint64_t val;
> +
> +	GUEST_SYNC(0);
> +
> +	/* Spin until the memory region is deleted. */
> +	val = guest_spin_on_val(0);
> +	GUEST_ASSERT_1(val == MMIO_VAL, val);
> +
> +	/* Spin until the memory region is recreated. */
> +	val = guest_spin_on_val(MMIO_VAL);
> +	GUEST_ASSERT_1(val == 0, val);
> +
> +	/* Spin until the memory region is deleted. */
> +	val = guest_spin_on_val(0);
> +	GUEST_ASSERT_1(val == MMIO_VAL, val);
> +
> +	asm("1:\n\t"
> +	    ".pushsection .rodata\n\t"
> +	    ".global final_rip_start\n\t"
> +	    "final_rip_start: .quad 1b\n\t"
> +	    ".popsection");
> +
> +	/* Spin indefinitely (until the code memslot is deleted). */
> +	guest_spin_on_val(MMIO_VAL);
> +
> +	asm("1:\n\t"
> +	    ".pushsection .rodata\n\t"
> +	    ".global final_rip_end\n\t"
> +	    "final_rip_end: .quad 1b\n\t"
> +	    ".popsection");
> +
> +	GUEST_ASSERT_1(0, 0);
> +}
> +
> +static void test_delete_memory_region(void)
> +{
> +	pthread_t vcpu_thread;
> +	struct kvm_regs regs;
> +	struct kvm_run *run;
> +	struct kvm_vm *vm;
> +
> +	vm = spawn_vm(&vcpu_thread, guest_code_delete_memory_region);
> +
> +	/* Delete the memory region, the guest should not die. */
> +	vm_mem_region_delete(vm, MEM_REGION_SLOT);
> +	wait_for_vcpu();
> +
> +	/* Recreate the memory region.  The guest should see "0". */
> +	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_THP,
> +				    MEM_REGION_GPA, MEM_REGION_SLOT,
> +				    MEM_REGION_SIZE / getpagesize(), 0);
> +	wait_for_vcpu();
> +
> +	/* Delete the region again so that there's only one memslot left. */
> +	vm_mem_region_delete(vm, MEM_REGION_SLOT);
> +	wait_for_vcpu();
> +
> +	/*
> +	 * Delete the primary memslot.  This should cause an emulation error or
> +	 * shutdown due to the page tables getting nuked.
> +	 */
> +	vm_mem_region_delete(vm, 0);
> +
> +	pthread_join(vcpu_thread, NULL);
> +
> +	run = vcpu_state(vm, VCPU_ID);
> +
> +	TEST_ASSERT(run->exit_reason == KVM_EXIT_SHUTDOWN ||
> +		    run->exit_reason == KVM_EXIT_INTERNAL_ERROR,
> +		    "Unexpected exit reason = %d", run->exit_reason);
> +
> +	vcpu_regs_get(vm, VCPU_ID, &regs);
> +
> +	TEST_ASSERT(regs.rip >= final_rip_start &&
> +		    regs.rip < final_rip_end,
> +		    "Bad rip, expected 0x%lx - 0x%lx, got 0x%llx\n",
> +		    final_rip_start, final_rip_end, regs.rip);
> +
> +	kvm_vm_free(vm);
> +}
> +
>  int main(int argc, char *argv[])
>  {
>  	int i, loops;
> @@ -215,8 +301,13 @@ int main(int argc, char *argv[])
>  	else
>  		loops = 10;
>  
> +	pr_info("Testing MOVE of in-use region, %d loops\n", loops);
>  	for (i = 0; i < loops; i++)
>  		test_move_memory_region();
>  
> +	pr_info("Testing DELETE of in-use region, %d loops\n", loops);
> +	for (i = 0; i < loops; i++)
> +		test_delete_memory_region();
> +
>  	return 0;
>  }
> -- 
> 2.26.0
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 06/10] KVM: selftests: Add "delete" testcase to set_memory_region_test
  2020-04-14 16:19   ` Andrew Jones
@ 2020-04-14 16:29     ` Andrew Jones
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2020-04-14 16:29 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, Christian Borntraeger, Janosch Frank,
	David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Wainer dos Santos Moschetta

On Tue, Apr 14, 2020 at 06:19:00PM +0200, Andrew Jones wrote:
> On Fri, Apr 10, 2020 at 04:17:03PM -0700, Sean Christopherson wrote:
> > Add a testcase for deleting memslots while the guest is running.
> > Like the "move" testcase, this is x86_64-only as it relies on MMIO
> > happening when a non-existent memslot is encountered.
> > 
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  .../kvm/x86_64/set_memory_region_test.c       | 91 +++++++++++++++++++
> >  1 file changed, 91 insertions(+)
> > 
> > diff --git a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
> > index 629dd8579b73..b556024af683 100644
> > --- a/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
> > +++ b/tools/testing/selftests/kvm/x86_64/set_memory_region_test.c
> > @@ -29,6 +29,9 @@
> >  
> >  static const uint64_t MMIO_VAL = 0xbeefull;
> >  
> > +extern const uint64_t final_rip_start;
> > +extern const uint64_t final_rip_end;
> > +
> >  static sem_t vcpu_ready;
> >  
> >  static inline uint64_t guest_spin_on_val(uint64_t spin_val)
> 
> We don't have guest_spin_on_val(), so it looks like this patch applies
> against an older version of this series? But I don't know where the
> wait_for_vcpu() called below comes from.

Eh, never mind. I skipped over the patch where these were introduced
because I accidentally marked it as read.

I'll look tomorrow.

Thanks,
drew

> 
> Thanks,
> drew
> 
> 
> > @@ -203,6 +206,89 @@ static void test_move_memory_region(void)
> >  	kvm_vm_free(vm);
> >  }
> >  
> > +static void guest_code_delete_memory_region(void)
> > +{
> > +	uint64_t val;
> > +
> > +	GUEST_SYNC(0);
> > +
> > +	/* Spin until the memory region is deleted. */
> > +	val = guest_spin_on_val(0);
> > +	GUEST_ASSERT_1(val == MMIO_VAL, val);
> > +
> > +	/* Spin until the memory region is recreated. */
> > +	val = guest_spin_on_val(MMIO_VAL);
> > +	GUEST_ASSERT_1(val == 0, val);
> > +
> > +	/* Spin until the memory region is deleted. */
> > +	val = guest_spin_on_val(0);
> > +	GUEST_ASSERT_1(val == MMIO_VAL, val);
> > +
> > +	asm("1:\n\t"
> > +	    ".pushsection .rodata\n\t"
> > +	    ".global final_rip_start\n\t"
> > +	    "final_rip_start: .quad 1b\n\t"
> > +	    ".popsection");
> > +
> > +	/* Spin indefinitely (until the code memslot is deleted). */
> > +	guest_spin_on_val(MMIO_VAL);
> > +
> > +	asm("1:\n\t"
> > +	    ".pushsection .rodata\n\t"
> > +	    ".global final_rip_end\n\t"
> > +	    "final_rip_end: .quad 1b\n\t"
> > +	    ".popsection");
> > +
> > +	GUEST_ASSERT_1(0, 0);
> > +}
> > +
> > +static void test_delete_memory_region(void)
> > +{
> > +	pthread_t vcpu_thread;
> > +	struct kvm_regs regs;
> > +	struct kvm_run *run;
> > +	struct kvm_vm *vm;
> > +
> > +	vm = spawn_vm(&vcpu_thread, guest_code_delete_memory_region);
> > +
> > +	/* Delete the memory region, the guest should not die. */
> > +	vm_mem_region_delete(vm, MEM_REGION_SLOT);
> > +	wait_for_vcpu();
> > +
> > +	/* Recreate the memory region.  The guest should see "0". */
> > +	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_THP,
> > +				    MEM_REGION_GPA, MEM_REGION_SLOT,
> > +				    MEM_REGION_SIZE / getpagesize(), 0);
> > +	wait_for_vcpu();
> > +
> > +	/* Delete the region again so that there's only one memslot left. */
> > +	vm_mem_region_delete(vm, MEM_REGION_SLOT);
> > +	wait_for_vcpu();
> > +
> > +	/*
> > +	 * Delete the primary memslot.  This should cause an emulation error or
> > +	 * shutdown due to the page tables getting nuked.
> > +	 */
> > +	vm_mem_region_delete(vm, 0);
> > +
> > +	pthread_join(vcpu_thread, NULL);
> > +
> > +	run = vcpu_state(vm, VCPU_ID);
> > +
> > +	TEST_ASSERT(run->exit_reason == KVM_EXIT_SHUTDOWN ||
> > +		    run->exit_reason == KVM_EXIT_INTERNAL_ERROR,
> > +		    "Unexpected exit reason = %d", run->exit_reason);
> > +
> > +	vcpu_regs_get(vm, VCPU_ID, &regs);
> > +
> > +	TEST_ASSERT(regs.rip >= final_rip_start &&
> > +		    regs.rip < final_rip_end,
> > +		    "Bad rip, expected 0x%lx - 0x%lx, got 0x%llx\n",
> > +		    final_rip_start, final_rip_end, regs.rip);
> > +
> > +	kvm_vm_free(vm);
> > +}
> > +
> >  int main(int argc, char *argv[])
> >  {
> >  	int i, loops;
> > @@ -215,8 +301,13 @@ int main(int argc, char *argv[])
> >  	else
> >  		loops = 10;
> >  
> > +	pr_info("Testing MOVE of in-use region, %d loops\n", loops);
> >  	for (i = 0; i < loops; i++)
> >  		test_move_memory_region();
> >  
> > +	pr_info("Testing DELETE of in-use region, %d loops\n", loops);
> > +	for (i = 0; i < loops; i++)
> > +		test_delete_memory_region();
> > +
> >  	return 0;
> >  }
> > -- 
> > 2.26.0
> > 
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread
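
For readers tripping over the same thing as the review above: guest_spin_on_val()
and wait_for_vcpu() are introduced by an earlier patch in this series.  As a
rough sketch of what they plausibly look like (assuming the guest polls the
data page at MEM_REGION_GPA, the host blocks on the vcpu_ready semaphore
posted from the vCPU thread, and READ_ONCE()/TEST_ASSERT() come from the
selftests headers; the authoritative versions are in the patch that adds
them):

	static inline uint64_t guest_spin_on_val(uint64_t spin_val)
	{
		uint64_t val;

		/* Poll the shared page until its value changes. */
		do {
			val = READ_ONCE(*((uint64_t *)MEM_REGION_GPA));
		} while (val == spin_val);

		return val;
	}

	static void wait_for_vcpu(void)
	{
		struct timespec ts;

		TEST_ASSERT(!clock_gettime(CLOCK_REALTIME, &ts),
			    "clock_gettime() failed: %d\n", errno);

		/* Bound the wait so a wedged vCPU fails the test quickly. */
		ts.tv_sec += 2;
		TEST_ASSERT(!sem_timedwait(&vcpu_ready, &ts),
			    "sem_timedwait() failed: %d\n", errno);
	}

The final_rip_start/final_rip_end pair quoted above is the host-side half of
a related trick: the guest's inline asm stashes the address of a local code
label into .rodata under a global symbol, so that after the code memslot is
deleted the host can assert the vCPU's RIP stopped between the two labels.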

* Re: [PATCH 01/10] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm()
  2020-04-13 21:26     ` Sean Christopherson
  2020-04-14  8:25       ` Andrew Jones
@ 2020-04-15 15:11       ` Paolo Bonzini
  1 sibling, 0 replies; 26+ messages in thread
From: Paolo Bonzini @ 2020-04-15 15:11 UTC (permalink / raw)
  To: Sean Christopherson, Wainer dos Santos Moschetta
  Cc: Christian Borntraeger, Janosch Frank, David Hildenbrand,
	Cornelia Huck, kvm, linux-kernel, Peter Xu, Andrew Jones

On 13/04/20 23:26, Sean Christopherson wrote:
> FWIW, I think the whole vcpuid thing is a bad interface, almost all the
> tests end up defining an arbitrary number for the sole VCPU_ID, i.e. the
> vcpuid interface just adds a pointless layer of obfuscation.  I haven't
> looked through all the tests, but returning the vcpu and making the struct
> opaque, same as kvm_vm, seems like it would yield more readable code with
> less overhead.

Yes, I agree.  This was in the original Google submission; I didn't like
it either, but I didn't feel like changing it, and I wouldn't mind if
someone did the work...

Paolo


^ permalink raw reply	[flat|nested] 26+ messages in thread
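
As a sketch of the interface shape being suggested here, with the vcpu
struct made opaque to tests and the create helper returning a pointer
rather than taking a caller-chosen id (names are illustrative, not the
actual kvm_util API at the time of this thread):

	/* Opaque to tests; the definition lives with the library internals. */
	struct kvm_vcpu;

	/* Create a vCPU that will run guest_code; return a handle to it. */
	struct kvm_vcpu *vm_vcpu_add(struct kvm_vm *vm, void *guest_code);

	/* Subsequent operations take the handle instead of a vcpuid. */
	void vcpu_run(struct kvm_vcpu *vcpu);
	void vcpu_regs_get(struct kvm_vcpu *vcpu, struct kvm_regs *regs);

This would keep tests from defining arbitrary VCPU_ID constants and remove
the id-to-struct lookup hidden behind every call.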

* Re: [PATCH 00/10] KVM: selftests: Add KVM_SET_MEMORY_REGION tests
  2020-04-10 23:16 [PATCH 00/10] KVM: selftests: Add KVM_SET_MEMORY_REGION tests Sean Christopherson
                   ` (9 preceding siblings ...)
  2020-04-10 23:17 ` [PATCH 10/10] selftests: kvm: Add testcase for creating max number of memslots Sean Christopherson
@ 2020-04-15 15:40 ` Paolo Bonzini
  10 siblings, 0 replies; 26+ messages in thread
From: Paolo Bonzini @ 2020-04-15 15:40 UTC (permalink / raw)
  To: Sean Christopherson, Christian Borntraeger, Janosch Frank
  Cc: David Hildenbrand, Cornelia Huck, kvm, linux-kernel, Peter Xu,
	Andrew Jones, Wainer dos Santos Moschetta

On 11/04/20 01:16, Sean Christopherson wrote:
> [...]

Queued, thanks -- while keeping the zero region test x86-only.

Paolo


^ permalink raw reply	[flat|nested] 26+ messages in thread
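
A sketch of what keeping the zero region test x86-only presumably amounts
to, i.e. guarding that one call with the arch ifdefs the file already uses
for its x86-specific pieces (the exact guard is whatever was applied when
the series was queued):

	int main(int argc, char *argv[])
	{
	#ifdef __x86_64__
		/*
		 * KVM_RUN with zero userspace memory regions relies on
		 * x86-specific behavior, so only exercise it on x86_64.
		 */
		test_zero_memory_regions();
	#endif
		test_move_memory_region();
		test_delete_memory_region();

		return 0;
	}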

end of thread, other threads:[~2020-04-15 15:41 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-10 23:16 [PATCH 00/10] KVM: selftests: Add KVM_SET_MEMORY_REGION tests Sean Christopherson
2020-04-10 23:16 ` [PATCH 01/10] KVM: selftests: Take vcpu pointer instead of id in vm_vcpu_rm() Sean Christopherson
2020-04-13 18:26   ` Wainer dos Santos Moschetta
2020-04-13 21:26     ` Sean Christopherson
2020-04-14  8:25       ` Andrew Jones
2020-04-14 13:02         ` Wainer dos Santos Moschetta
2020-04-15 15:11       ` Paolo Bonzini
2020-04-14 16:02   ` Andrew Jones
2020-04-10 23:16 ` [PATCH 02/10] KVM: selftests: Use kernel's list instead of homebrewed replacement Sean Christopherson
2020-04-14 16:03   ` Andrew Jones
2020-04-10 23:17 ` [PATCH 03/10] KVM: selftests: Add util to delete memory region Sean Christopherson
2020-04-13 18:52   ` Wainer dos Santos Moschetta
2020-04-14 16:04   ` Andrew Jones
2020-04-10 23:17 ` [PATCH 04/10] KVM: selftests: Add GUEST_ASSERT variants to pass values to host Sean Christopherson
2020-04-14 16:02   ` Andrew Jones
2020-04-10 23:17 ` [PATCH 05/10] KVM: sefltests: Add explicit synchronization to move mem region test Sean Christopherson
2020-04-10 23:17 ` [PATCH 06/10] KVM: selftests: Add "delete" testcase to set_memory_region_test Sean Christopherson
2020-04-14 16:19   ` Andrew Jones
2020-04-14 16:29     ` Andrew Jones
2020-04-10 23:17 ` [PATCH 07/10] selftests: kvm: Add vm_get_fd() in kvm_util Sean Christopherson
2020-04-10 23:17 ` [PATCH 08/10] KVM: selftests: Add "zero" testcase to set_memory_region_test Sean Christopherson
2020-04-10 23:17 ` [PATCH 09/10] KVM: selftests: Make set_memory_region_test common to all architectures Sean Christopherson
2020-04-14 14:43   ` Wainer dos Santos Moschetta
2020-04-10 23:17 ` [PATCH 10/10] selftests: kvm: Add testcase for creating max number of memslots Sean Christopherson
2020-04-13 13:22   ` Wainer dos Santos Moschetta
2020-04-15 15:40 ` [PATCH 00/10] KVM: selftests: Add KVM_SET_MEMORY_REGION tests Paolo Bonzini
