* [PATCH v3 0/8] Create a userfaultfd demand paging test
@ 2019-12-16 21:38 Ben Gardon
  2019-12-16 21:38 ` [PATCH v3 1/8] KVM: selftests: Create a " Ben Gardon
                   ` (8 more replies)
  0 siblings, 9 replies; 26+ messages in thread
From: Ben Gardon @ 2019-12-16 21:38 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

When handling page faults for many vCPUs during demand paging, KVM's MMU
lock becomes highly contended. This series creates a test with a naive
userfaultfd based demand paging implementation to demonstrate that
contention. This test serves both as a functional test of userfaultfd
and a microbenchmark of demand paging performance with a variable number
of vCPUs and memory per vCPU.

The test creates N userfaultfd threads, N vCPUs, and a region of memory
with M pages per vCPU. The N userfaultfd polling threads are each set up
to serve faults on a region of memory corresponding to one of the vCPUs.
Each of the vCPUs is then started, and touches each page of its disjoint
memory region, sequentially. In response to faults, the userfaultfd
threads copy a static buffer into the guest's memory. This creates a
worst case for MMU lock contention as we have removed most of the
contention between the userfaultfd threads and there is no time required
to fetch the contents of guest memory.
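
To make that structure concrete, below is a host-only sketch of the
layout (plain pthreads touching disjoint slices of a single buffer; the
names are illustrative and there is no KVM or userfaultfd in it, the
real implementation is in the patches below). Once the whole series is
applied, the equivalent real run with 4 vCPUs and the default 1 GiB of
test memory per vCPU is "./demand_paging_test -u -v 4".

#include <pthread.h>
#include <stdlib.h>

#define NR_WORKERS	4		/* stands in for N vCPUs */
#define SLICE_BYTES	(4UL << 20)	/* stands in for M pages per vCPU */
#define PAGE_BYTES	4096UL

struct worker_arg {
	char *base;	/* start of this worker's disjoint slice */
	size_t len;
};

static void *worker(void *data)
{
	struct worker_arg *arg = data;
	size_t i;

	/* Touch each page of the slice, as each vCPU does in the test. */
	for (i = 0; i < arg->len; i += PAGE_BYTES)
		arg->base[i] = 1;
	return NULL;
}

int main(void)
{
	char *region = calloc(NR_WORKERS, SLICE_BYTES);
	pthread_t threads[NR_WORKERS];
	struct worker_arg args[NR_WORKERS];
	int i;

	if (!region)
		return 1;
	for (i = 0; i < NR_WORKERS; i++) {
		args[i].base = region + i * SLICE_BYTES;
		args[i].len = SLICE_BYTES;
		pthread_create(&threads[i], NULL, worker, &args[i]);
	}
	for (i = 0; i < NR_WORKERS; i++)
		pthread_join(threads[i], NULL);
	free(region);
	return 0;
}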

This test was run successfully on Intel Haswell, Broadwell, and
Cascadelake hosts with a variety of vCPU counts and memory sizes.

This test was adapted from the dirty_log_test.

The series can also be viewed in Gerrit here:
https://linux-review.googlesource.com/c/virt/kvm/kvm/+/1464
(Thanks to Dmitry Vyukov <dvyukov@google.com> for setting up the Gerrit
instance)

Ben Gardon (8):
  KVM: selftests: Create a demand paging test
  KVM: selftests: Add demand paging content to the demand paging test
  KVM: selftests: Add configurable demand paging delay
  KVM: selftests: Add memory size parameter to the demand paging test
  KVM: selftests: Pass args to vCPU instead of using globals
  KVM: selftests: Support multiple vCPUs in demand paging test
  KVM: selftests: Time guest demand paging
  KVM: selftests: Move large memslots above KVM internal memslots in
    _vm_create

 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   4 +-
 .../selftests/kvm/demand_paging_test.c        | 610 ++++++++++++++++++
 tools/testing/selftests/kvm/dirty_log_test.c  |   2 +-
 .../testing/selftests/kvm/include/kvm_util.h  |   3 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    |   7 +-
 6 files changed, 621 insertions(+), 6 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/demand_paging_test.c

-- 
2.23.0.444.g18eeb5a265-goog



* [PATCH v3 1/8] KVM: selftests: Create a demand paging test
  2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
@ 2019-12-16 21:38 ` Ben Gardon
  2020-01-07 14:33   ` Peter Xu
  2019-12-16 21:38 ` [PATCH v3 2/8] KVM: selftests: Add demand paging content to the " Ben Gardon
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2019-12-16 21:38 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

While userfaultfd, the mechanism used to implement demand paging for
KVM guests, is not specific to KVM, having a benchmark for its
performance will be useful for guiding performance improvements to KVM.
As a first step towards creating a userfaultfd demand paging test,
create a simple memory access test based on dirty_log_test.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/demand_paging_test.c        | 268 ++++++++++++++++++
 3 files changed, 270 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/demand_paging_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 30072c3f52fbe..9619d96e15c41 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -17,3 +17,4 @@
 /clear_dirty_log_test
 /dirty_log_test
 /kvm_create_max_vcpus
+/demand_paging_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 3138a916574a9..8c412cdd527e6 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -28,6 +28,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test
 TEST_GEN_PROGS_x86_64 += x86_64/xss_msr_test
 TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
 TEST_GEN_PROGS_x86_64 += dirty_log_test
+TEST_GEN_PROGS_x86_64 += demand_paging_test
 TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
 
 TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
new file mode 100644
index 0000000000000..36e12db5da56b
--- /dev/null
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -0,0 +1,268 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KVM demand paging test
+ * Adapted from dirty_log_test.c
+ *
+ * Copyright (C) 2018, Red Hat, Inc.
+ * Copyright (C) 2019, Google, Inc.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_name */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <time.h>
+#include <pthread.h>
+#include <linux/bitmap.h>
+#include <linux/bitops.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+
+#define VCPU_ID				1
+
+/* The memory slot index used for demand paging */
+#define TEST_MEM_SLOT_INDEX		1
+
+/* Default guest test virtual memory offset */
+#define DEFAULT_GUEST_TEST_MEM		0xc0000000
+
+/*
+ * Guest/Host shared variables. Ensure addr_gva2hva() and/or
+ * sync_global_to/from_guest() are used when accessing from
+ * the host. READ/WRITE_ONCE() should also be used with anything
+ * that may change.
+ */
+static uint64_t host_page_size;
+static uint64_t guest_page_size;
+static uint64_t guest_num_pages;
+
+/*
+ * Guest physical memory offset of the testing memory slot.
+ * This will be set to the topmost valid physical address minus
+ * the test memory size.
+ */
+static uint64_t guest_test_phys_mem;
+
+/*
+ * Guest virtual memory offset of the testing memory slot.
+ * Must not conflict with identity mapped test code.
+ */
+static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+
+/*
+ * Continuously write to the first 8 bytes of each page in the demand paging
+ * memory region.
+ */
+static void guest_code(void)
+{
+	int i;
+
+	for (i = 0; i < guest_num_pages; i++) {
+		uint64_t addr = guest_test_virt_mem;
+
+		addr += i * guest_page_size;
+		addr &= ~(host_page_size - 1);
+		*(uint64_t *)addr = 0x0123456789ABCDEF;
+	}
+
+	GUEST_SYNC(1);
+}
+
+/* Points to the test VM memory region on which we are doing demand paging */
+static void *host_test_mem;
+static uint64_t host_num_pages;
+
+static void *vcpu_worker(void *data)
+{
+	int ret;
+	struct kvm_vm *vm = data;
+	struct kvm_run *run;
+
+	run = vcpu_state(vm, VCPU_ID);
+
+	/* Let the guest access its memory */
+	ret = _vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
+	if (get_ucall(vm, VCPU_ID, NULL) != UCALL_SYNC) {
+		TEST_ASSERT(false,
+			    "Invalid guest sync status: exit_reason=%s\n",
+			    exit_reason_str(run->exit_reason));
+	}
+
+	return NULL;
+}
+
+static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
+				uint64_t extra_mem_pages, void *guest_code)
+{
+	struct kvm_vm *vm;
+	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
+
+	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
+#ifdef __x86_64__
+	vm_create_irqchip(vm);
+#endif
+	vm_vcpu_add_default(vm, vcpuid, guest_code);
+	return vm;
+}
+
+#define GUEST_MEM_SHIFT 30 /* 1G */
+#define PAGE_SHIFT_4K  12
+
+static void run_test(enum vm_guest_mode mode)
+{
+	pthread_t vcpu_thread;
+	struct kvm_vm *vm;
+
+	/*
+	 * We reserve page table for 2 times of extra dirty mem which
+	 * will definitely cover the original (1G+) test range.  Here
+	 * we do the calculation with 4K page size which is the
+	 * smallest so the page number will be enough for all archs
+	 * (e.g., 64K page size guest will need even less memory for
+	 * page tables).
+	 */
+	vm = create_vm(mode, VCPU_ID,
+		       2ul << (GUEST_MEM_SHIFT - PAGE_SHIFT_4K),
+		       guest_code);
+
+	guest_page_size = vm_get_page_size(vm);
+	/*
+	 * A little more than 1G of guest page sized pages.  Cover the
+	 * case where the size is not aligned to 64 pages.
+	 */
+	guest_num_pages = (1ul << (GUEST_MEM_SHIFT -
+				   vm_get_page_shift(vm))) + 16;
+#ifdef __s390x__
+	/* Round up to multiple of 1M (segment size) */
+	guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL;
+#endif
+
+	host_page_size = getpagesize();
+	host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
+			 !!((guest_num_pages * guest_page_size) %
+			    host_page_size);
+
+	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
+			      guest_page_size;
+	guest_test_phys_mem &= ~(host_page_size - 1);
+
+#ifdef __s390x__
+	/* Align to 1M (segment size) */
+	guest_test_phys_mem &= ~((1 << 20) - 1);
+#endif
+
+	DEBUG("guest physical test memory offset: 0x%lx\n",
+	      guest_test_phys_mem);
+
+
+	/* Add an extra memory slot for testing demand paging */
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+				    guest_test_phys_mem,
+				    TEST_MEM_SLOT_INDEX,
+				    guest_num_pages, 0);
+
+	/* Do mapping for the demand paging memory slot */
+	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem,
+		 guest_num_pages * guest_page_size, 0);
+
+	/* Cache the HVA pointer of the region */
+	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
+
+#ifdef __x86_64__
+	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
+#endif
+
+	/* Export the shared variables to the guest */
+	sync_global_to_guest(vm, host_page_size);
+	sync_global_to_guest(vm, guest_page_size);
+	sync_global_to_guest(vm, guest_test_virt_mem);
+	sync_global_to_guest(vm, guest_num_pages);
+
+	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
+
+	/* Wait for the vcpu thread to quit */
+	pthread_join(vcpu_thread, NULL);
+
+	ucall_uninit(vm);
+	kvm_vm_free(vm);
+}
+
+struct vm_guest_mode_params {
+	bool supported;
+	bool enabled;
+};
+struct vm_guest_mode_params vm_guest_mode_params[NUM_VM_MODES];
+
+#define vm_guest_mode_params_init(mode, supported, enabled)		     \
+({									     \
+	vm_guest_mode_params[mode] =					     \
+			(struct vm_guest_mode_params){ supported, enabled }; \
+})
+
+static void help(char *name)
+{
+	int i;
+
+	puts("");
+	printf("usage: %s [-h] [-m mode]\n", name);
+	printf(" -m: specify the guest mode ID to test\n"
+	       "     (default: test all supported modes)\n"
+	       "     This option may be used multiple times.\n"
+	       "     Guest mode IDs:\n");
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
+		       vm_guest_mode_params[i].supported ? " (supported)" : "");
+	}
+	puts("");
+	exit(0);
+}
+
+int main(int argc, char *argv[])
+{
+	bool mode_selected = false;
+	unsigned int mode;
+	int opt, i;
+
+#ifdef __x86_64__
+	vm_guest_mode_params_init(VM_MODE_PXXV48_4K, true, true);
+#endif
+#ifdef __s390x__
+	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
+#endif
+
+	while ((opt = getopt(argc, argv, "hm:")) != -1) {
+		switch (opt) {
+		case 'm':
+			if (!mode_selected) {
+				for (i = 0; i < NUM_VM_MODES; ++i)
+					vm_guest_mode_params[i].enabled = false;
+				mode_selected = true;
+			}
+			mode = strtoul(optarg, NULL, 10);
+			TEST_ASSERT(mode < NUM_VM_MODES,
+				    "Guest mode ID %d too big", mode);
+			vm_guest_mode_params[mode].enabled = true;
+			break;
+		case 'h':
+		default:
+			help(argv[0]);
+			break;
+		}
+	}
+
+	for (i = 0; i < NUM_VM_MODES; ++i) {
+		if (!vm_guest_mode_params[i].enabled)
+			continue;
+		TEST_ASSERT(vm_guest_mode_params[i].supported,
+			    "Guest mode ID %d (%s) not supported.",
+			    i, vm_guest_mode_string(i));
+		run_test(i);
+	}
+
+	return 0;
+}
-- 
2.24.1.735.g03f4e72817-goog



* [PATCH v3 2/8] KVM: selftests: Add demand paging content to the demand paging test
  2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
  2019-12-16 21:38 ` [PATCH v3 1/8] KVM: selftests: Create a " Ben Gardon
@ 2019-12-16 21:38 ` Ben Gardon
  2020-01-07 16:00   ` Peter Xu
  2019-12-16 21:38 ` [PATCH v3 3/8] KVM: selftests: Add configurable demand paging delay Ben Gardon
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2019-12-16 21:38 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

The demand paging test is currently a simple page access test which, while
potentially useful, doesn't add much versus the existing dirty logging
test. To improve the demand paging test, add a basic userfaultfd demand
paging implementation.
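
For reference, the userfaultfd flow the test builds on boils down to the
sequence below. This is a condensed, self-contained sketch with minimal
error handling; serve_one_fault() and its parameters are illustrative
names and not part of the test, whose actual handler is in the diff:

#include <fcntl.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

/* Register a MISSING handler over [base, base + len) and serve one fault. */
static int serve_one_fault(void *base, unsigned long len, void *src_page,
			   unsigned long page_size)
{
	struct uffdio_api api = { .api = UFFD_API };
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)base, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	struct uffd_msg msg;
	struct pollfd pfd;
	int uffd;

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
	    ioctl(uffd, UFFDIO_REGISTER, &reg))
		return -1;

	/* Block until a fault is reported, then read its description. */
	pfd.fd = uffd;
	pfd.events = POLLIN;
	if (poll(&pfd, 1, -1) != 1 ||
	    read(uffd, &msg, sizeof(msg)) != sizeof(msg) ||
	    msg.event != UFFD_EVENT_PAGEFAULT)
		return -1;

	/* Resolve the fault by copying a prepared page into place. */
	struct uffdio_copy copy = {
		.src = (unsigned long)src_page,
		.dst = msg.arg.pagefault.address & ~(page_size - 1),
		.len = page_size,
	};
	return ioctl(uffd, UFFDIO_COPY, &copy);
}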

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 .../selftests/kvm/demand_paging_test.c        | 177 +++++++++++++++++-
 1 file changed, 173 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 36e12db5da56b..a8f775dab7d4a 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -11,11 +11,14 @@
 
 #include <stdio.h>
 #include <stdlib.h>
+#include <sys/syscall.h>
 #include <unistd.h>
 #include <time.h>
+#include <poll.h>
 #include <pthread.h>
 #include <linux/bitmap.h>
 #include <linux/bitops.h>
+#include <linux/userfaultfd.h>
 
 #include "test_util.h"
 #include "kvm_util.h"
@@ -39,6 +42,8 @@ static uint64_t host_page_size;
 static uint64_t guest_page_size;
 static uint64_t guest_num_pages;
 
+static char *guest_data_prototype;
+
 /*
  * Guest physical memory offset of the testing memory slot.
  * This will be set to the topmost valid physical address minus
@@ -110,13 +115,158 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 	return vm;
 }
 
+static int handle_uffd_page_request(int uffd, uint64_t addr)
+{
+	pid_t tid;
+	struct uffdio_copy copy;
+	int r;
+
+	tid = syscall(__NR_gettid);
+
+	copy.src = (uint64_t)guest_data_prototype;
+	copy.dst = addr;
+	copy.len = host_page_size;
+	copy.mode = 0;
+
+	r = ioctl(uffd, UFFDIO_COPY, &copy);
+	if (r == -1) {
+		DEBUG("Failed to page in 0x%lx from thread %d with errno: %d\n",
+		      addr, tid, errno);
+		return r;
+	}
+
+	return 0;
+}
+
+bool quit_uffd_thread;
+
+struct uffd_handler_args {
+	int uffd;
+};
+
+static void *uffd_handler_thread_fn(void *arg)
+{
+	struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
+	int uffd = uffd_args->uffd;
+	int64_t pages = 0;
+
+	while (!quit_uffd_thread) {
+		struct uffd_msg msg;
+		struct pollfd pollfd[1];
+		int r;
+		uint64_t addr;
+
+		pollfd[0].fd = uffd;
+		pollfd[0].events = POLLIN;
+
+		/*
+		 * TODO this introduces a 0.5sec delay at the end of the test.
+		 * Reduce the timeout or eliminate it following the example in
+		 * tools/testing/selftests/vm/userfaultfd.c
+		 */
+		r = poll(pollfd, 1, 500);
+		switch (r) {
+		case -1:
+			DEBUG("poll err");
+			continue;
+		case 0:
+			continue;
+		case 1:
+			break;
+		default:
+			DEBUG("Polling uffd returned %d", r);
+			return NULL;
+		}
+
+		if (pollfd[0].revents & POLLERR) {
+			DEBUG("uffd revents has POLLERR");
+			return NULL;
+		}
+
+		if (!(pollfd[0].revents & POLLIN))
+			continue;
+
+		r = read(uffd, &msg, sizeof(msg));
+		if (r == -1) {
+			if (errno == EAGAIN)
+				continue;
+			DEBUG("Read of uffd got errno %d", errno);
+			return NULL;
+		}
+
+		if (r != sizeof(msg)) {
+			DEBUG("Read on uffd returned unexpected size: %d bytes",
+			      r);
+			return NULL;
+		}
+
+		if (!(msg.event & UFFD_EVENT_PAGEFAULT))
+			continue;
+
+		addr =  msg.arg.pagefault.address;
+		r = handle_uffd_page_request(uffd, addr);
+		if (r < 0)
+			return NULL;
+		pages++;
+	}
+
+	return NULL;
+}
+
+static int setup_demand_paging(struct kvm_vm *vm,
+			       pthread_t *uffd_handler_thread)
+{
+	int uffd;
+	struct uffdio_api uffdio_api;
+	struct uffdio_register uffdio_register;
+	struct uffd_handler_args uffd_args;
+
+	guest_data_prototype = malloc(host_page_size);
+	memset(guest_data_prototype, 0xAB, host_page_size);
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	if (uffd == -1) {
+		DEBUG("uffd creation failed\n");
+		return -1;
+	}
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = 0;
+	if (ioctl(uffd, UFFDIO_API, &uffdio_api) == -1) {
+		DEBUG("ioctl uffdio_api failed\n");
+		return -1;
+	}
+
+	uffdio_register.range.start = (uint64_t)host_test_mem;
+	uffdio_register.range.len = host_num_pages * host_page_size;
+	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
+	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) {
+		DEBUG("ioctl uffdio_register failed\n");
+		return -1;
+	}
+
+	if ((uffdio_register.ioctls & UFFD_API_RANGE_IOCTLS) !=
+			UFFD_API_RANGE_IOCTLS) {
+		DEBUG("unexpected userfaultfd ioctl set\n");
+		return -1;
+	}
+
+	uffd_args.uffd = uffd;
+	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
+		       &uffd_args);
+
+	return 0;
+}
+
 #define GUEST_MEM_SHIFT 30 /* 1G */
 #define PAGE_SHIFT_4K  12
 
-static void run_test(enum vm_guest_mode mode)
+static void run_test(enum vm_guest_mode mode, bool use_uffd)
 {
 	pthread_t vcpu_thread;
+	pthread_t uffd_handler_thread;
 	struct kvm_vm *vm;
+	int r;
 
 	/*
 	 * We reserve page table for 2 times of extra dirty mem which
@@ -173,6 +323,14 @@ static void run_test(enum vm_guest_mode mode)
 	/* Cache the HVA pointer of the region */
 	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
 
+	if (use_uffd) {
+		/* Set up user fault fd to handle demand paging requests. */
+		quit_uffd_thread = false;
+		r = setup_demand_paging(vm, &uffd_handler_thread);
+		if (r < 0)
+			exit(-r);
+	}
+
 #ifdef __x86_64__
 	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
 #endif
@@ -188,6 +346,12 @@ static void run_test(enum vm_guest_mode mode)
 	/* Wait for the vcpu thread to quit */
 	pthread_join(vcpu_thread, NULL);
 
+	if (use_uffd) {
+		/* Tell the user fault fd handler thread to quit */
+		quit_uffd_thread = true;
+		pthread_join(uffd_handler_thread, NULL);
+	}
+
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
 }
@@ -209,7 +373,7 @@ static void help(char *name)
 	int i;
 
 	puts("");
-	printf("usage: %s [-h] [-m mode]\n", name);
+	printf("usage: %s [-h] [-m mode] [-u]\n", name);
 	printf(" -m: specify the guest mode ID to test\n"
 	       "     (default: test all supported modes)\n"
 	       "     This option may be used multiple times.\n"
@@ -218,6 +382,7 @@ static void help(char *name)
 		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
 		       vm_guest_mode_params[i].supported ? " (supported)" : "");
 	}
+	printf(" -u: Use User Fault FD to handle vCPU page faults.\n");
 	puts("");
 	exit(0);
 }
@@ -227,6 +392,7 @@ int main(int argc, char *argv[])
 	bool mode_selected = false;
 	unsigned int mode;
 	int opt, i;
+	bool use_uffd = false;
 
 #ifdef __x86_64__
 	vm_guest_mode_params_init(VM_MODE_PXXV48_4K, true, true);
@@ -235,7 +401,7 @@ int main(int argc, char *argv[])
 	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
 #endif
 
-	while ((opt = getopt(argc, argv, "hm:")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:u")) != -1) {
 		switch (opt) {
 		case 'm':
 			if (!mode_selected) {
@@ -248,6 +414,9 @@ int main(int argc, char *argv[])
 				    "Guest mode ID %d too big", mode);
 			vm_guest_mode_params[mode].enabled = true;
 			break;
+		case 'u':
+			use_uffd = true;
+			break;
 		case 'h':
 		default:
 			help(argv[0]);
@@ -261,7 +430,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(vm_guest_mode_params[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i);
+		run_test(i, use_uffd);
 	}
 
 	return 0;
-- 
2.24.1.735.g03f4e72817-goog



* [PATCH v3 3/8] KVM: selftests: Add configurable demand paging delay
  2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
  2019-12-16 21:38 ` [PATCH v3 1/8] KVM: selftests: Create a " Ben Gardon
  2019-12-16 21:38 ` [PATCH v3 2/8] KVM: selftests: Add demand paging content to the " Ben Gardon
@ 2019-12-16 21:38 ` Ben Gardon
  2020-01-07 16:04   ` Peter Xu
  2019-12-16 21:38 ` [PATCH v3 4/8] KVM: selftests: Add memory size parameter to the demand paging test Ben Gardon
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2019-12-16 21:38 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

When running the demand paging test with the -u option, the User Fault
FD handler essentially adds an arbitrary delay to page fault resolution.
To enable better simulation of a real demand paging scenario, add a
configurable delay to the UFFD handler.
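
Illustratively, what -d models is a fixed per-fault fetch cost: with a
single handler thread serving faults serially, a delay of D microseconds
caps paging throughput at roughly 1000000 / D pages per second before
the UFFDIO_COPY itself is counted. A minimal sketch of the knob (the
helper name is not from the patch, which applies the sleep inline in the
handler loop):

#include <unistd.h>

static void simulate_backing_store_latency(useconds_t uffd_delay)
{
	if (uffd_delay)
		usleep(uffd_delay);
}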

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 .../selftests/kvm/demand_paging_test.c        | 32 +++++++++++++++----
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index a8f775dab7d4a..11de5b58995fb 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -142,12 +142,14 @@ bool quit_uffd_thread;
 
 struct uffd_handler_args {
 	int uffd;
+	useconds_t delay;
 };
 
 static void *uffd_handler_thread_fn(void *arg)
 {
 	struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
 	int uffd = uffd_args->uffd;
+	useconds_t delay = uffd_args->delay;
 	int64_t pages = 0;
 
 	while (!quit_uffd_thread) {
@@ -203,6 +205,8 @@ static void *uffd_handler_thread_fn(void *arg)
 		if (!(msg.event & UFFD_EVENT_PAGEFAULT))
 			continue;
 
+		if (delay)
+			usleep(delay);
 		addr =  msg.arg.pagefault.address;
 		r = handle_uffd_page_request(uffd, addr);
 		if (r < 0)
@@ -214,7 +218,8 @@ static void *uffd_handler_thread_fn(void *arg)
 }
 
 static int setup_demand_paging(struct kvm_vm *vm,
-			       pthread_t *uffd_handler_thread)
+			       pthread_t *uffd_handler_thread,
+			       useconds_t uffd_delay)
 {
 	int uffd;
 	struct uffdio_api uffdio_api;
@@ -252,6 +257,7 @@ static int setup_demand_paging(struct kvm_vm *vm,
 	}
 
 	uffd_args.uffd = uffd;
+	uffd_args.delay = uffd_delay;
 	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
 		       &uffd_args);
 
@@ -261,7 +267,8 @@ static int setup_demand_paging(struct kvm_vm *vm,
 #define GUEST_MEM_SHIFT 30 /* 1G */
 #define PAGE_SHIFT_4K  12
 
-static void run_test(enum vm_guest_mode mode, bool use_uffd)
+static void run_test(enum vm_guest_mode mode, bool use_uffd,
+		     useconds_t uffd_delay)
 {
 	pthread_t vcpu_thread;
 	pthread_t uffd_handler_thread;
@@ -326,7 +333,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd)
 	if (use_uffd) {
 		/* Set up user fault fd to handle demand paging requests. */
 		quit_uffd_thread = false;
-		r = setup_demand_paging(vm, &uffd_handler_thread);
+		r = setup_demand_paging(vm, &uffd_handler_thread,
+					uffd_delay);
 		if (r < 0)
 			exit(-r);
 	}
@@ -373,7 +381,7 @@ static void help(char *name)
 	int i;
 
 	puts("");
-	printf("usage: %s [-h] [-m mode] [-u]\n", name);
+	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n", name);
 	printf(" -m: specify the guest mode ID to test\n"
 	       "     (default: test all supported modes)\n"
 	       "     This option may be used multiple times.\n"
@@ -382,7 +390,11 @@ static void help(char *name)
 		printf("         %d:    %s%s\n", i, vm_guest_mode_string(i),
 		       vm_guest_mode_params[i].supported ? " (supported)" : "");
 	}
-	printf(" -u: Use User Fault FD to handle vCPU page faults.\n");
+	printf(" -u: use User Fault FD to handle vCPU page\n"
+	       "     faults.\n");
+	printf(" -d: add a delay in usec to the User Fault\n"
+	       "     FD handler to simulate demand paging\n"
+	       "     overheads. Ignored without -u.\n");
 	puts("");
 	exit(0);
 }
@@ -393,6 +405,7 @@ int main(int argc, char *argv[])
 	unsigned int mode;
 	int opt, i;
 	bool use_uffd = false;
+	useconds_t uffd_delay = 0;
 
 #ifdef __x86_64__
 	vm_guest_mode_params_init(VM_MODE_PXXV48_4K, true, true);
@@ -401,7 +414,7 @@ int main(int argc, char *argv[])
 	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
 #endif
 
-	while ((opt = getopt(argc, argv, "hm:u")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:ud:")) != -1) {
 		switch (opt) {
 		case 'm':
 			if (!mode_selected) {
@@ -417,6 +430,11 @@ int main(int argc, char *argv[])
 		case 'u':
 			use_uffd = true;
 			break;
+		case 'd':
+			uffd_delay = strtoul(optarg, NULL, 0);
+			TEST_ASSERT(uffd_delay >= 0,
+				    "A negative UFFD delay is not supported.");
+			break;
 		case 'h':
 		default:
 			help(argv[0]);
@@ -430,7 +448,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(vm_guest_mode_params[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd);
+		run_test(i, use_uffd, uffd_delay);
 	}
 
 	return 0;
-- 
2.24.1.735.g03f4e72817-goog



* [PATCH v3 4/8] KVM: selftests: Add memory size parameter to the demand paging test
  2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
                   ` (2 preceding siblings ...)
  2019-12-16 21:38 ` [PATCH v3 3/8] KVM: selftests: Add configurable demand paging delay Ben Gardon
@ 2019-12-16 21:38 ` Ben Gardon
  2020-01-07 15:02   ` Andrew Jones
  2019-12-16 21:38 ` [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using globals Ben Gardon
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2019-12-16 21:38 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

Add an argument to allow the demand paging test to be run with larger
and smaller guest memory sizes.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 .../selftests/kvm/demand_paging_test.c        | 56 ++++++++++++-------
 1 file changed, 35 insertions(+), 21 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 11de5b58995fb..4aa90a3fce99c 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -32,6 +32,8 @@
 /* Default guest test virtual memory offset */
 #define DEFAULT_GUEST_TEST_MEM		0xc0000000
 
+#define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
+
 /*
  * Guest/Host shared variables. Ensure addr_gva2hva() and/or
  * sync_global_to/from_guest() are used when accessing from
@@ -264,11 +266,10 @@ static int setup_demand_paging(struct kvm_vm *vm,
 	return 0;
 }
 
-#define GUEST_MEM_SHIFT 30 /* 1G */
 #define PAGE_SHIFT_4K  12
 
 static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay)
+		     useconds_t uffd_delay, uint64_t guest_memory_bytes)
 {
 	pthread_t vcpu_thread;
 	pthread_t uffd_handler_thread;
@@ -276,33 +277,40 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	int r;
 
 	/*
-	 * We reserve page table for 2 times of extra dirty mem which
-	 * will definitely cover the original (1G+) test range.  Here
-	 * we do the calculation with 4K page size which is the
-	 * smallest so the page number will be enough for all archs
-	 * (e.g., 64K page size guest will need even less memory for
-	 * page tables).
+	 * We reserve page table memory for twice the amount of memory we intend
+	 * to use in the test region for demand paging. Here we do the
+	 * calculation with 4K page size which is the smallest so the page
+	 * number will be enough for all archs. (e.g., 64K page size guest
+	 * will need even less memory for page tables).
 	 */
 	vm = create_vm(mode, VCPU_ID,
-		       2ul << (GUEST_MEM_SHIFT - PAGE_SHIFT_4K),
+		       (2 * guest_memory_bytes) >> PAGE_SHIFT_4K,
 		       guest_code);
 
 	guest_page_size = vm_get_page_size(vm);
-	/*
-	 * A little more than 1G of guest page sized pages.  Cover the
-	 * case where the size is not aligned to 64 pages.
-	 */
-	guest_num_pages = (1ul << (GUEST_MEM_SHIFT -
-				   vm_get_page_shift(vm))) + 16;
+
+	TEST_ASSERT(guest_memory_bytes % guest_page_size == 0,
+		    "Guest memory size is not guest page size aligned.");
+
+	guest_num_pages = guest_memory_bytes / guest_page_size;
+
 #ifdef __s390x__
 	/* Round up to multiple of 1M (segment size) */
 	guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL;
 #endif
+	/*
+	 * If there should be more memory in the guest test region than there
+	 * can be pages in the guest, it will definitely cause problems.
+	 */
+	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
+		    "Requested more guest memory than address space allows.\n"
+		    "    guest pages: %lx max gfn: %lx\n",
+		    guest_num_pages, vm_get_max_gfn(vm));
 
 	host_page_size = getpagesize();
-	host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
-			 !!((guest_num_pages * guest_page_size) %
-			    host_page_size);
+	TEST_ASSERT(guest_memory_bytes % host_page_size == 0,
+		    "Guest memory size is not host page size aligned.");
+	host_num_pages = guest_memory_bytes / host_page_size;
 
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      guest_page_size;
@@ -381,7 +389,8 @@ static void help(char *name)
 	int i;
 
 	puts("");
-	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n", name);
+	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
+	       "          [-b bytes test memory]\n", name);
 	printf(" -m: specify the guest mode ID to test\n"
 	       "     (default: test all supported modes)\n"
 	       "     This option may be used multiple times.\n"
@@ -395,6 +404,8 @@ static void help(char *name)
 	printf(" -d: add a delay in usec to the User Fault\n"
 	       "     FD handler to simulate demand paging\n"
 	       "     overheads. Ignored without -u.\n");
+	printf(" -b: specify the number of bytes of memory which should be\n"
+	       "     allocated to the guest.\n");
 	puts("");
 	exit(0);
 }
@@ -402,6 +413,7 @@ static void help(char *name)
 int main(int argc, char *argv[])
 {
 	bool mode_selected = false;
+	uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE;
 	unsigned int mode;
 	int opt, i;
 	bool use_uffd = false;
@@ -414,7 +426,7 @@ int main(int argc, char *argv[])
 	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
 #endif
 
-	while ((opt = getopt(argc, argv, "hm:ud:")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:ud:b:")) != -1) {
 		switch (opt) {
 		case 'm':
 			if (!mode_selected) {
@@ -435,6 +447,8 @@ int main(int argc, char *argv[])
 			TEST_ASSERT(uffd_delay >= 0,
 				    "A negative UFFD delay is not supported.");
 			break;
+		case 'b':
+			guest_memory_bytes = strtoull(optarg, NULL, 0);
 		case 'h':
 		default:
 			help(argv[0]);
@@ -448,7 +462,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(vm_guest_mode_params[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd, uffd_delay);
+		run_test(i, use_uffd, uffd_delay, guest_memory_bytes);
 	}
 
 	return 0;
-- 
2.24.1.735.g03f4e72817-goog



* [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using globals
  2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
                   ` (3 preceding siblings ...)
  2019-12-16 21:38 ` [PATCH v3 4/8] KVM: selftests: Add memory size parameter to the demand paging test Ben Gardon
@ 2019-12-16 21:38 ` Ben Gardon
  2020-01-07 15:23   ` Andrew Jones
  2019-12-16 21:38 ` [PATCH v3 6/8] KVM: selftests: Support multiple vCPUs in demand paging test Ben Gardon
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2019-12-16 21:38 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

In preparation for supporting multiple vCPUs in the demand paging test,
pass arguments to the vCPU instead of syncing globals to it.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 .../selftests/kvm/demand_paging_test.c        | 61 +++++++++++--------
 1 file changed, 37 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 4aa90a3fce99c..8ede26e088ab6 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -42,7 +42,6 @@
  */
 static uint64_t host_page_size;
 static uint64_t guest_page_size;
-static uint64_t guest_num_pages;
 
 static char *guest_data_prototype;
 
@@ -63,14 +62,13 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
  * Continuously write to the first 8 bytes of each page in the demand paging
  * memory region.
  */
-static void guest_code(void)
+static void guest_code(uint64_t gva, uint64_t pages)
 {
 	int i;
 
-	for (i = 0; i < guest_num_pages; i++) {
-		uint64_t addr = guest_test_virt_mem;
+	for (i = 0; i < pages; i++) {
+		uint64_t addr = gva + (i * guest_page_size);
 
-		addr += i * guest_page_size;
 		addr &= ~(host_page_size - 1);
 		*(uint64_t *)addr = 0x0123456789ABCDEF;
 	}
@@ -82,18 +80,31 @@ static void guest_code(void)
 static void *host_test_mem;
 static uint64_t host_num_pages;
 
+struct vcpu_thread_args {
+	uint64_t gva;
+	uint64_t pages;
+	struct kvm_vm *vm;
+	int vcpu_id;
+};
+
 static void *vcpu_worker(void *data)
 {
 	int ret;
-	struct kvm_vm *vm = data;
+	struct vcpu_thread_args *args = (struct vcpu_thread_args *)data;
+	struct kvm_vm *vm = args->vm;
+	int vcpu_id = args->vcpu_id;
+	uint64_t gva = args->gva;
+	uint64_t pages = args->pages;
 	struct kvm_run *run;
 
-	run = vcpu_state(vm, VCPU_ID);
+	vcpu_args_set(vm, vcpu_id, 2, gva, pages);
+
+	run = vcpu_state(vm, vcpu_id);
 
 	/* Let the guest access its memory */
-	ret = _vcpu_run(vm, VCPU_ID);
+	ret = _vcpu_run(vm, vcpu_id);
 	TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
-	if (get_ucall(vm, VCPU_ID, NULL) != UCALL_SYNC) {
+	if (get_ucall(vm, vcpu_id, NULL) != UCALL_SYNC) {
 		TEST_ASSERT(false,
 			    "Invalid guest sync status: exit_reason=%s\n",
 			    exit_reason_str(run->exit_reason));
@@ -269,11 +280,13 @@ static int setup_demand_paging(struct kvm_vm *vm,
 #define PAGE_SHIFT_4K  12
 
 static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay, uint64_t guest_memory_bytes)
+		     useconds_t uffd_delay, uint64_t vcpu_wss)
 {
 	pthread_t vcpu_thread;
 	pthread_t uffd_handler_thread;
 	struct kvm_vm *vm;
+	struct vcpu_thread_args vcpu_args;
+	uint64_t guest_num_pages;
 	int r;
 
 	/*
@@ -283,16 +296,15 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	 * number will be enough for all archs. (e.g., 64K page size guest
 	 * will need even less memory for page tables).
 	 */
-	vm = create_vm(mode, VCPU_ID,
-		       (2 * guest_memory_bytes) >> PAGE_SHIFT_4K,
+	vm = create_vm(mode, VCPU_ID, (2 * vcpu_wss) >> PAGE_SHIFT_4K,
 		       guest_code);
 
 	guest_page_size = vm_get_page_size(vm);
 
-	TEST_ASSERT(guest_memory_bytes % guest_page_size == 0,
+	TEST_ASSERT(vcpu_wss % guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
 
-	guest_num_pages = guest_memory_bytes / guest_page_size;
+	guest_num_pages = vcpu_wss / guest_page_size;
 
 #ifdef __s390x__
 	/* Round up to multiple of 1M (segment size) */
@@ -308,9 +320,9 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 		    guest_num_pages, vm_get_max_gfn(vm));
 
 	host_page_size = getpagesize();
-	TEST_ASSERT(guest_memory_bytes % host_page_size == 0,
+	TEST_ASSERT(vcpu_wss % host_page_size == 0,
 		    "Guest memory size is not host page size aligned.");
-	host_num_pages = guest_memory_bytes / host_page_size;
+	host_num_pages = vcpu_wss / host_page_size;
 
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      guest_page_size;
@@ -354,10 +366,12 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	/* Export the shared variables to the guest */
 	sync_global_to_guest(vm, host_page_size);
 	sync_global_to_guest(vm, guest_page_size);
-	sync_global_to_guest(vm, guest_test_virt_mem);
-	sync_global_to_guest(vm, guest_num_pages);
 
-	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
+	vcpu_args.vm = vm;
+	vcpu_args.vcpu_id = VCPU_ID;
+	vcpu_args.gva = guest_test_virt_mem;
+	vcpu_args.pages = guest_num_pages;
+	pthread_create(&vcpu_thread, NULL, vcpu_worker, &vcpu_args);
 
 	/* Wait for the vcpu thread to quit */
 	pthread_join(vcpu_thread, NULL);
@@ -404,8 +418,7 @@ static void help(char *name)
 	printf(" -d: add a delay in usec to the User Fault\n"
 	       "     FD handler to simulate demand paging\n"
 	       "     overheads. Ignored without -u.\n");
-	printf(" -b: specify the number of bytes of memory which should be\n"
-	       "     allocated to the guest.\n");
+	printf(" -b: specify the working set size, in bytes for each vCPU.\n");
 	puts("");
 	exit(0);
 }
@@ -413,7 +426,7 @@ static void help(char *name)
 int main(int argc, char *argv[])
 {
 	bool mode_selected = false;
-	uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE;
+	uint64_t vcpu_wss = DEFAULT_GUEST_TEST_MEM_SIZE;
 	unsigned int mode;
 	int opt, i;
 	bool use_uffd = false;
@@ -448,7 +461,7 @@ int main(int argc, char *argv[])
 				    "A negative UFFD delay is not supported.");
 			break;
 		case 'b':
-			guest_memory_bytes = strtoull(optarg, NULL, 0);
+			vcpu_wss = strtoull(optarg, NULL, 0);
 		case 'h':
 		default:
 			help(argv[0]);
@@ -462,7 +475,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(vm_guest_mode_params[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd, uffd_delay, guest_memory_bytes);
+		run_test(i, use_uffd, uffd_delay, vcpu_wss);
 	}
 
 	return 0;
-- 
2.24.1.735.g03f4e72817-goog



* [PATCH v3 6/8] KVM: selftests: Support multiple vCPUs in demand paging test
  2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
                   ` (4 preceding siblings ...)
  2019-12-16 21:38 ` [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using globals Ben Gardon
@ 2019-12-16 21:38 ` Ben Gardon
  2020-01-07 15:27   ` Andrew Jones
  2019-12-16 21:39 ` [PATCH v3 7/8] KVM: selftests: Time guest demand paging Ben Gardon
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2019-12-16 21:38 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

Most VMs have multiple vCPUs, the concurrent execution of which has a
substantial impact on demand paging performance. Add an option to create
multiple vCPUs, each of which accesses a disjoint region of memory.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 .../selftests/kvm/demand_paging_test.c        | 199 ++++++++++++------
 1 file changed, 136 insertions(+), 63 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 8ede26e088ab6..2b80f614dd537 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -24,8 +24,6 @@
 #include "kvm_util.h"
 #include "processor.h"
 
-#define VCPU_ID				1
-
 /* The memory slot index used for demand paging */
 #define TEST_MEM_SLOT_INDEX		1
 
@@ -34,6 +32,12 @@
 
 #define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
 
+#ifdef PRINT_PER_VCPU_UPDATES
+#define PER_VCPU_DEBUG(...) DEBUG(__VA_ARGS__)
+#else
+#define PER_VCPU_DEBUG(...)
+#endif
+
 /*
  * Guest/Host shared variables. Ensure addr_gva2hva() and/or
  * sync_global_to/from_guest() are used when accessing from
@@ -76,10 +80,6 @@ static void guest_code(uint64_t gva, uint64_t pages)
 	GUEST_SYNC(1);
 }
 
-/* Points to the test VM memory region on which we are doing demand paging */
-static void *host_test_mem;
-static uint64_t host_num_pages;
-
 struct vcpu_thread_args {
 	uint64_t gva;
 	uint64_t pages;
@@ -113,18 +113,32 @@ static void *vcpu_worker(void *data)
 	return NULL;
 }
 
-static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
-				uint64_t extra_mem_pages, void *guest_code)
+#define PAGE_SHIFT_4K  12
+#define PTES_PER_PT 512
+
+static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
+				uint64_t vcpu_wss)
 {
 	struct kvm_vm *vm;
-	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
+	uint64_t pages = DEFAULT_GUEST_PHY_PAGES;
 
-	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	/* Account for a few pages per-vCPU for stacks */
+	pages += DEFAULT_STACK_PGS * vcpus;
+
+	/*
+	 * Reserve twice the amount of memory needed to map the test region and
+	 * the page table / stacks region, at 4k, for page tables. Do the
+	 * calculation with 4K page size: the smallest of all archs. (e.g., 64K
+	 * page size guest will need even less memory for page tables).
+	 */
+	pages += (2 * pages) / PTES_PER_PT;
+	pages += ((2 * vcpus * vcpu_wss) >> PAGE_SHIFT_4K) / PTES_PER_PT;
+
+	vm = _vm_create(mode, pages, O_RDWR);
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 #ifdef __x86_64__
 	vm_create_irqchip(vm);
 #endif
-	vm_vcpu_add_default(vm, vcpuid, guest_code);
 	return vm;
 }
 
@@ -232,15 +246,13 @@ static void *uffd_handler_thread_fn(void *arg)
 
 static int setup_demand_paging(struct kvm_vm *vm,
 			       pthread_t *uffd_handler_thread,
-			       useconds_t uffd_delay)
+			       useconds_t uffd_delay,
+			       struct uffd_handler_args *uffd_args,
+			       void *hva, uint64_t len)
 {
 	int uffd;
 	struct uffdio_api uffdio_api;
 	struct uffdio_register uffdio_register;
-	struct uffd_handler_args uffd_args;
-
-	guest_data_prototype = malloc(host_page_size);
-	memset(guest_data_prototype, 0xAB, host_page_size);
 
 	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
 	if (uffd == -1) {
@@ -255,8 +267,8 @@ static int setup_demand_paging(struct kvm_vm *vm,
 		return -1;
 	}
 
-	uffdio_register.range.start = (uint64_t)host_test_mem;
-	uffdio_register.range.len = host_num_pages * host_page_size;
+	uffdio_register.range.start = (uint64_t)hva;
+	uffdio_register.range.len = len;
 	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
 	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) {
 		DEBUG("ioctl uffdio_register failed\n");
@@ -269,42 +281,37 @@ static int setup_demand_paging(struct kvm_vm *vm,
 		return -1;
 	}
 
-	uffd_args.uffd = uffd;
-	uffd_args.delay = uffd_delay;
+	uffd_args->uffd = uffd;
+	uffd_args->delay = uffd_delay;
 	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
-		       &uffd_args);
+		       uffd_args);
+
+	PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n",
+		       hva, hva + len);
 
 	return 0;
 }
 
-#define PAGE_SHIFT_4K  12
-
 static void run_test(enum vm_guest_mode mode, bool use_uffd,
-		     useconds_t uffd_delay, uint64_t vcpu_wss)
+		     useconds_t uffd_delay, int vcpus, uint64_t vcpu_wss)
 {
-	pthread_t vcpu_thread;
-	pthread_t uffd_handler_thread;
+	pthread_t *vcpu_threads;
+	pthread_t *uffd_handler_threads = NULL;
+	struct uffd_handler_args *uffd_args = NULL;
 	struct kvm_vm *vm;
-	struct vcpu_thread_args vcpu_args;
+	struct vcpu_thread_args *vcpu_args;
 	uint64_t guest_num_pages;
+	int vcpu_id;
 	int r;
 
-	/*
-	 * We reserve page table memory for twice the amount of memory we intend
-	 * to use in the test region for demand paging. Here we do the
-	 * calculation with 4K page size which is the smallest so the page
-	 * number will be enough for all archs. (e.g., 64K page size guest
-	 * will need even less memory for page tables).
-	 */
-	vm = create_vm(mode, VCPU_ID, (2 * vcpu_wss) >> PAGE_SHIFT_4K,
-		       guest_code);
+	vm = create_vm(mode, vcpus, vcpu_wss);
 
 	guest_page_size = vm_get_page_size(vm);
 
 	TEST_ASSERT(vcpu_wss % guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
 
-	guest_num_pages = vcpu_wss / guest_page_size;
+	guest_num_pages = (vcpus * vcpu_wss) / guest_page_size;
 
 #ifdef __s390x__
 	/* Round up to multiple of 1M (segment size) */
@@ -316,13 +323,12 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	 */
 	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
 		    "Requested more guest memory than address space allows.\n"
-		    "    guest pages: %lx max gfn: %lx\n",
-		    guest_num_pages, vm_get_max_gfn(vm));
+		    "    guest pages: %lx max gfn: %lx vcpus: %d wss: %lx]\n",
+		    guest_num_pages, vm_get_max_gfn(vm), vcpus, vcpu_wss);
 
 	host_page_size = getpagesize();
 	TEST_ASSERT(vcpu_wss % host_page_size == 0,
 		    "Guest memory size is not host page size aligned.");
-	host_num_pages = vcpu_wss / host_page_size;
 
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      guest_page_size;
@@ -347,43 +353,102 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem,
 		 guest_num_pages * guest_page_size, 0);
 
-	/* Cache the HVA pointer of the region */
-	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
+	/* Export the shared variables to the guest */
+	sync_global_to_guest(vm, host_page_size);
+	sync_global_to_guest(vm, guest_page_size);
+
+	guest_data_prototype = malloc(host_page_size);
+	TEST_ASSERT(guest_data_prototype, "Memory allocation failed");
+	memset(guest_data_prototype, 0xAB, host_page_size);
+
+	vcpu_threads = malloc(vcpus * sizeof(*vcpu_threads));
+	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
 	if (use_uffd) {
-		/* Set up user fault fd to handle demand paging requests. */
 		quit_uffd_thread = false;
-		r = setup_demand_paging(vm, &uffd_handler_thread,
-					uffd_delay);
-		if (r < 0)
-			exit(-r);
+
+		uffd_handler_threads =
+			malloc(vcpus * sizeof(*uffd_handler_threads));
+		TEST_ASSERT(uffd_handler_threads, "Memory allocation failed");
+
+		uffd_args = malloc(vcpus * sizeof(*uffd_args));
+		TEST_ASSERT(uffd_args, "Memory allocation failed");
 	}
 
+	vcpu_args = malloc(vcpus * sizeof(*vcpu_args));
+	TEST_ASSERT(vcpu_args, "Memory allocation failed");
+
+	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+		vm_paddr_t vcpu_gpa;
+		void *vcpu_hva;
+
+		vm_vcpu_add_default(vm, vcpu_id, guest_code);
+
+		vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_wss);
+		PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
+			       vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_wss);
+
+		/* Cache the HVA pointer of the region */
+		vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
+
+		if (use_uffd) {
+			/*
+			 * Set up user fault fd to handle demand paging
+			 * requests.
+			 */
+			r = setup_demand_paging(vm,
+						&uffd_handler_threads[vcpu_id],
+						uffd_delay, &uffd_args[vcpu_id],
+						vcpu_hva, vcpu_wss);
+			if (r < 0)
+				exit(-r);
+		}
+
 #ifdef __x86_64__
-	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
+		vcpu_set_cpuid(vm, vcpu_id, kvm_get_supported_cpuid());
 #endif
 
-	/* Export the shared variables to the guest */
-	sync_global_to_guest(vm, host_page_size);
-	sync_global_to_guest(vm, guest_page_size);
+		vcpu_args[vcpu_id].vm = vm;
+		vcpu_args[vcpu_id].vcpu_id = vcpu_id;
+		vcpu_args[vcpu_id].gva = guest_test_virt_mem +
+					 (vcpu_id * vcpu_wss);
+		vcpu_args[vcpu_id].pages = vcpu_wss / guest_page_size;
+	}
+
+	DEBUG("Finished creating vCPUs and starting uffd threads\n");
+
+	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+		pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
+			       &vcpu_args[vcpu_id]);
+	}
+
+	DEBUG("Started all vCPUs\n");
 
-	vcpu_args.vm = vm;
-	vcpu_args.vcpu_id = VCPU_ID;
-	vcpu_args.gva = guest_test_virt_mem;
-	vcpu_args.pages = guest_num_pages;
-	pthread_create(&vcpu_thread, NULL, vcpu_worker, &vcpu_args);
+	/* Wait for the vcpu threads to quit */
+	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+		pthread_join(vcpu_threads[vcpu_id], NULL);
+		PER_VCPU_DEBUG("Joined thread for vCPU %d\n", vcpu_id);
+	}
 
-	/* Wait for the vcpu thread to quit */
-	pthread_join(vcpu_thread, NULL);
+	DEBUG("All vCPU threads joined\n");
 
 	if (use_uffd) {
-		/* Tell the user fault fd handler thread to quit */
+		/* Tell the user fault fd handler threads to quit */
 		quit_uffd_thread = true;
-		pthread_join(uffd_handler_thread, NULL);
+		for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++)
+			pthread_join(uffd_handler_threads[vcpu_id], NULL);
 	}
 
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
+
+	free(guest_data_prototype);
+	free(vcpu_threads);
+	if (use_uffd) {
+		free(uffd_handler_threads);
+		free(uffd_args);
+	}
+	free(vcpu_args);
 }
 
 struct vm_guest_mode_params {
@@ -404,7 +469,7 @@ static void help(char *name)
 
 	puts("");
 	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
-	       "          [-b bytes test memory]\n", name);
+	       "          [-b bytes test memory] [-v vcpus]\n", name);
 	printf(" -m: specify the guest mode ID to test\n"
 	       "     (default: test all supported modes)\n"
 	       "     This option may be used multiple times.\n"
@@ -419,6 +484,7 @@ static void help(char *name)
 	       "     FD handler to simulate demand paging\n"
 	       "     overheads. Ignored without -u.\n");
 	printf(" -b: specify the working set size, in bytes for each vCPU.\n");
+	printf(" -v: specify the number of vCPUs to run.\n");
 	puts("");
 	exit(0);
 }
@@ -427,6 +493,7 @@ int main(int argc, char *argv[])
 {
 	bool mode_selected = false;
 	uint64_t vcpu_wss = DEFAULT_GUEST_TEST_MEM_SIZE;
+	int vcpus = 1;
 	unsigned int mode;
 	int opt, i;
 	bool use_uffd = false;
@@ -439,7 +506,7 @@ int main(int argc, char *argv[])
 	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
 #endif
 
-	while ((opt = getopt(argc, argv, "hm:ud:b:")) != -1) {
+	while ((opt = getopt(argc, argv, "hm:ud:b:v:")) != -1) {
 		switch (opt) {
 		case 'm':
 			if (!mode_selected) {
@@ -462,6 +529,12 @@ int main(int argc, char *argv[])
 			break;
 		case 'b':
 			vcpu_wss = strtoull(optarg, NULL, 0);
+			break;
+		case 'v':
+			vcpus = atoi(optarg);
+			TEST_ASSERT(vcpus > 0,
+				    "Must have a positive number of vCPUs");
+			break;
 		case 'h':
 		default:
 			help(argv[0]);
@@ -475,7 +548,7 @@ int main(int argc, char *argv[])
 		TEST_ASSERT(vm_guest_mode_params[i].supported,
 			    "Guest mode ID %d (%s) not supported.",
 			    i, vm_guest_mode_string(i));
-		run_test(i, use_uffd, uffd_delay, vcpu_wss);
+		run_test(i, use_uffd, uffd_delay, vcpus, vcpu_wss);
 	}
 
 	return 0;
-- 
2.24.1.735.g03f4e72817-goog



* [PATCH v3 7/8] KVM: selftests: Time guest demand paging
  2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
                   ` (5 preceding siblings ...)
  2019-12-16 21:38 ` [PATCH v3 6/8] KVM: selftests: Support multiple vCPUs in demand paging test Ben Gardon
@ 2019-12-16 21:39 ` Ben Gardon
  2019-12-16 21:39 ` [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create Ben Gardon
  2020-01-06 22:46 ` [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
  8 siblings, 0 replies; 26+ messages in thread
From: Ben Gardon @ 2019-12-16 21:39 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

In order to quantify demand paging performance, time guest execution
during demand paging.
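
The measurement reduces to nanosecond deltas between CLOCK_MONOTONIC
timestamps. A standalone restatement of the arithmetic (the helper names
here are not the ones used in the diff):

#include <stdint.h>
#include <time.h>

#define NSEC_PER_SEC	1000000000LL

static int64_t timespec_to_ns(struct timespec ts)
{
	return (int64_t)ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
}

static double pages_per_sec(int64_t pages, struct timespec start,
			    struct timespec end)
{
	int64_t ns = timespec_to_ns(end) - timespec_to_ns(start);

	return ns > 0 ? pages / ((double)ns / NSEC_PER_SEC) : 0.0;
}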

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 .../selftests/kvm/demand_paging_test.c        | 68 +++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 2b80f614dd537..d93d72bdea4a3 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -32,6 +32,12 @@
 
 #define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
 
+#ifdef PRINT_PER_PAGE_UPDATES
+#define PER_PAGE_DEBUG(...) DEBUG(__VA_ARGS__)
+#else
+#define PER_PAGE_DEBUG(...)
+#endif
+
 #ifdef PRINT_PER_VCPU_UPDATES
 #define PER_VCPU_DEBUG(...) DEBUG(__VA_ARGS__)
 #else
@@ -62,6 +68,26 @@ static uint64_t guest_test_phys_mem;
  */
 static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
 
+int64_t to_ns(struct timespec ts)
+{
+	return (int64_t)ts.tv_nsec + 1000000000LL * (int64_t)ts.tv_sec;
+}
+
+struct timespec diff(struct timespec start, struct timespec end)
+{
+	struct timespec temp;
+
+	if ((end.tv_nsec-start.tv_nsec) < 0) {
+		temp.tv_sec = end.tv_sec - start.tv_sec - 1;
+		temp.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec;
+	} else {
+		temp.tv_sec = end.tv_sec - start.tv_sec;
+		temp.tv_nsec = end.tv_nsec - start.tv_nsec;
+	}
+
+	return temp;
+}
+
 /*
  * Continuously write to the first 8 bytes of each page in the demand paging
  * memory region.
@@ -96,11 +122,15 @@ static void *vcpu_worker(void *data)
 	uint64_t gva = args->gva;
 	uint64_t pages = args->pages;
 	struct kvm_run *run;
+	struct timespec start;
+	struct timespec end;
 
 	vcpu_args_set(vm, vcpu_id, 2, gva, pages);
 
 	run = vcpu_state(vm, vcpu_id);
 
+	clock_gettime(CLOCK_MONOTONIC, &start);
+
 	/* Let the guest access its memory */
 	ret = _vcpu_run(vm, vcpu_id);
 	TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
@@ -110,6 +140,11 @@ static void *vcpu_worker(void *data)
 			    exit_reason_str(run->exit_reason));
 	}
 
+	clock_gettime(CLOCK_MONOTONIC, &end);
+	PER_VCPU_DEBUG("vCPU %d execution time: %lld.%.9lds\n", vcpu_id,
+		       (long long)(diff(start, end).tv_sec),
+		       diff(start, end).tv_nsec);
+
 	return NULL;
 }
 
@@ -145,6 +180,8 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
 static int handle_uffd_page_request(int uffd, uint64_t addr)
 {
 	pid_t tid;
+	struct timespec start;
+	struct timespec end;
 	struct uffdio_copy copy;
 	int r;
 
@@ -155,6 +192,8 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 	copy.len = host_page_size;
 	copy.mode = 0;
 
+	clock_gettime(CLOCK_MONOTONIC, &start);
+
 	r = ioctl(uffd, UFFDIO_COPY, &copy);
 	if (r == -1) {
 		DEBUG("Failed to page in 0x%lx from thread %d with errno: %d\n",
@@ -162,6 +201,13 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 		return r;
 	}
 
+	clock_gettime(CLOCK_MONOTONIC, &end);
+
+	PER_PAGE_DEBUG("UFFDIO_COPY %d \t%lld ns\n", tid,
+		       (long long)to_ns(diff(start, end)));
+	PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n",
+		       host_page_size, addr, tid);
+
 	return 0;
 }
 
@@ -178,7 +224,10 @@ static void *uffd_handler_thread_fn(void *arg)
 	int uffd = uffd_args->uffd;
 	useconds_t delay = uffd_args->delay;
 	int64_t pages = 0;
+	struct timespec start;
+	struct timespec end;
 
+	clock_gettime(CLOCK_MONOTONIC, &start);
 	while (!quit_uffd_thread) {
 		struct uffd_msg msg;
 		struct pollfd pollfd[1];
@@ -241,6 +290,13 @@ static void *uffd_handler_thread_fn(void *arg)
 		pages++;
 	}
 
+	clock_gettime(CLOCK_MONOTONIC, &end);
+	PER_VCPU_DEBUG("userfaulted %ld pages over %lld.%.9lds. (%f/sec)\n",
+		       pages, (long long)(diff(start, end).tv_sec),
+		       diff(start, end).tv_nsec, pages /
+		       ((double)diff(start, end).tv_sec +
+			(double)diff(start, end).tv_nsec / 1000000000.0));
+
 	return NULL;
 }
 
@@ -303,6 +359,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 	uint64_t guest_num_pages;
 	int vcpu_id;
 	int r;
+	struct timespec start;
+	struct timespec end;
 
 	vm = create_vm(mode, vcpus, vcpu_wss);
 
@@ -417,6 +475,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	DEBUG("Finished creating vCPUs and starting uffd threads\n");
 
+	clock_gettime(CLOCK_MONOTONIC, &start);
+
 	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
 		pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
 			       &vcpu_args[vcpu_id]);
@@ -432,6 +492,8 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 
 	DEBUG("All vCPU threads joined\n");
 
+	clock_gettime(CLOCK_MONOTONIC, &end);
+
 	if (use_uffd) {
 		/* Tell the user fault fd handler threads to quit */
 		quit_uffd_thread = true;
@@ -439,6 +501,12 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
 			pthread_join(uffd_handler_threads[vcpu_id], NULL);
 	}
 
+	DEBUG("Total guest execution time: %lld.%.9lds\n",
+	      (long long)(diff(start, end).tv_sec), diff(start, end).tv_nsec);
+	DEBUG("Overall demand paging rate: %f pgs/sec\n",
+	      guest_num_pages / ((double)diff(start, end).tv_sec +
+	      (double)diff(start, end).tv_nsec / 1000000000.0));
+
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
 
-- 
2.24.1.735.g03f4e72817-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create
  2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
                   ` (6 preceding siblings ...)
  2019-12-16 21:39 ` [PATCH v3 7/8] KVM: selftests: Time guest demand paging Ben Gardon
@ 2019-12-16 21:39 ` Ben Gardon
  2020-01-07 15:42   ` Andrew Jones
  2020-01-06 22:46 ` [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
  8 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2019-12-16 21:39 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones, Ben Gardon

KVM creates internal memslots between 3 and 4 GiB paddrs on the first
vCPU creation. If memslot 0 is large enough, it collides with these
memslots and causes vCPU creation to fail. When requesting more than 3G,
start memslot 0 at 4G in _vm_create.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 41cf45416060f..886d58e6cac39 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -113,6 +113,8 @@ const char * const vm_guest_mode_string[] = {
 _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
 	       "Missing new mode strings?");
 
+#define KVM_INTERNAL_MEMSLOTS_START_PADDR (3UL << 30)
+#define KVM_INTERNAL_MEMSLOTS_END_PADDR (4UL << 30)
 /*
  * VM Create
  *
@@ -128,13 +130,16 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
  *
  * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K).
  * When phy_pages is non-zero, a memory region of phy_pages physical pages
- * is created and mapped starting at guest physical address 0.  The file
- * descriptor to control the created VM is created with the permissions
- * given by perm (e.g. O_RDWR).
+ * is created. If phy_pages is less than 3G, it is mapped starting at guest
+ * physical address 0. If phy_pages is greater than 3G, it is mapped starting
+ * 4G into the guest physical address space to avoid KVM internal memslots
+ * which map the region between 3G and 4G. The file descriptor to control the
+ * created VM is created with the permissions given by perm (e.g. O_RDWR).
  */
 struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 {
 	struct kvm_vm *vm;
+	uint64_t guest_paddr = 0;
 
 	DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
@@ -227,9 +232,11 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
 
 	/* Allocate and setup memory for guest. */
 	vm->vpages_mapped = sparsebit_alloc();
+	if (guest_paddr + phy_pages * vm->page_size > KVM_INTERNAL_MEMSLOTS_START_PADDR)
+		guest_paddr = KVM_INTERNAL_MEMSLOTS_END_PADDR;
 	if (phy_pages != 0)
 		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
-					    0, 0, phy_pages, 0);
+					    guest_paddr, 0, phy_pages, 0);
 
 	return vm;
 }
-- 
2.24.1.735.g03f4e72817-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/8] Create a userfaultfd demand paging test
  2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
                   ` (7 preceding siblings ...)
  2019-12-16 21:39 ` [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create Ben Gardon
@ 2020-01-06 22:46 ` Ben Gardon
  8 siblings, 0 replies; 26+ messages in thread
From: Ben Gardon @ 2020-01-06 22:46 UTC (permalink / raw)
  To: linux-kernel, kvm, linux-kselftest
  Cc: Paolo Bonzini, Cannon Matthews, Peter Xu, Andrew Jones

If anyone has a chance to re-review this test patch series, I'd be
grateful. I addressed most of the feedback I received on the first
version of the series, and I believe this test will be a useful
performance benchmark for future development.

On Mon, Dec 16, 2019 at 1:39 PM Ben Gardon <bgardon@google.com> wrote:
>
> When handling page faults for many vCPUs during demand paging, KVM's MMU
> lock becomes highly contended. This series creates a test with a naive
> userfaultfd based demand paging implementation to demonstrate that
> contention. This test serves both as a functional test of userfaultfd
> and a microbenchmark of demand paging performance with a variable number
> of vCPUs and memory per vCPU.
>
> The test creates N userfaultfd threads, N vCPUs, and a region of memory
> with M pages per vCPU. The N userfaultfd polling threads are each set up
> to serve faults on a region of memory corresponding to one of the vCPUs.
> Each of the vCPUs is then started, and touches each page of its disjoint
> memory region, sequentially. In response to faults, the userfaultfd
> threads copy a static buffer into the guest's memory. This creates a
> worst case for MMU lock contention as we have removed most of the
> contention between the userfaultfd threads and there is no time required
> to fetch the contents of guest memory.
>
> This test was run successfully on Intel Haswell, Broadwell, and
> Cascadelake hosts with a variety of vCPU counts and memory sizes.
>
> This test was adapted from the dirty_log_test.
>
> The series can also be viewed in Gerrit here:
> https://linux-review.googlesource.com/c/virt/kvm/kvm/+/1464
> (Thanks to Dmitry Vyukov <dvyukov@google.com> for setting up the Gerrit
> instance)
>
> Ben Gardon (9):
>   KVM: selftests: Create a demand paging test
>   KVM: selftests: Add demand paging content to the demand paging test
>   KVM: selftests: Add memory size parameter to the demand paging test
>   KVM: selftests: Pass args to vCPU instead of using globals
>   KVM: selftests: Support multiple vCPUs in demand paging test
>   KVM: selftests: Time guest demand paging
>   KVM: selftests: Add parameter to _vm_create for memslot 0 base paddr
>   KVM: selftests: Support large VMs in demand paging test
>   Add static flag
>
>  tools/testing/selftests/kvm/.gitignore        |   1 +
>  tools/testing/selftests/kvm/Makefile          |   4 +-
>  .../selftests/kvm/demand_paging_test.c        | 610 ++++++++++++++++++
>  tools/testing/selftests/kvm/dirty_log_test.c  |   2 +-
>  .../testing/selftests/kvm/include/kvm_util.h  |   3 +-
>  tools/testing/selftests/kvm/lib/kvm_util.c    |   7 +-
>  6 files changed, 621 insertions(+), 6 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/demand_paging_test.c
>
> --
> 2.23.0.444.g18eeb5a265-goog
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/8] KVM: selftests: Create a demand paging test
  2019-12-16 21:38 ` [PATCH v3 1/8] KVM: selftests: Create a " Ben Gardon
@ 2020-01-07 14:33   ` Peter Xu
  2020-01-07 14:56     ` Andrew Jones
  0 siblings, 1 reply; 26+ messages in thread
From: Peter Xu @ 2020-01-07 14:33 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Andrew Jones

On Mon, Dec 16, 2019 at 01:38:54PM -0800, Ben Gardon wrote:
> While userfaultfd, KVM's demand paging implementation, is not specific
> to KVM, having a benchmark for its performance will be useful for
> guiding performance improvements to KVM. As a first step towards creating
> a userfaultfd demand paging test, create a simple memory access test,
> based on dirty_log_test.
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>

It's fine to start with x86-only for this test, but imho it would be
better to mention that in the cover letter, or to reply to the reviewer
comments noting that you removed aarch64 support from the previous post.

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/8] KVM: selftests: Create a demand paging test
  2020-01-07 14:33   ` Peter Xu
@ 2020-01-07 14:56     ` Andrew Jones
  2020-01-07 18:41       ` Ben Gardon
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2020-01-07 14:56 UTC (permalink / raw)
  To: Peter Xu
  Cc: Ben Gardon, linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews

On Tue, Jan 07, 2020 at 09:33:34AM -0500, Peter Xu wrote:
> On Mon, Dec 16, 2019 at 01:38:54PM -0800, Ben Gardon wrote:
> > While userfaultfd, KVM's demand paging implementation, is not specific
> > to KVM, having a benchmark for its performance will be useful for
> > guiding performance improvements to KVM. As a first step towards creating
> > a userfaultfd demand paging test, create a simple memory access test,
> > based on dirty_log_test.
> > 
> > Signed-off-by: Ben Gardon <bgardon@google.com>
> 
> It's fine to start with x86-only for this test, but imho it would be
> better to mention that in cover letter, or reply to reviewer comments
> on that you removed aarch64 from previous post.

I'd also prefer that, if it's x86-only, it be put in the x86_64
subdirectory and the arch #ifdefs be dropped. The question is why it's
x86-only for now, though. Will it take a lot of work to port it to
other architectures? Or does it just need testing by someone with
the hardware?

Thanks,
drew


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 4/8] KVM: selftests: Add memory size parameter to the demand paging test
  2019-12-16 21:38 ` [PATCH v3 4/8] KVM: selftests: Add memory size parameter to the demand paging test Ben Gardon
@ 2020-01-07 15:02   ` Andrew Jones
  2020-01-07 21:18     ` Ben Gardon
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2020-01-07 15:02 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Peter Xu

On Mon, Dec 16, 2019 at 01:38:57PM -0800, Ben Gardon wrote:
> Add an argument to allow the demand paging test to work on larger and
> smaller guest sizes.
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  .../selftests/kvm/demand_paging_test.c        | 56 ++++++++++++-------
>  1 file changed, 35 insertions(+), 21 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index 11de5b58995fb..4aa90a3fce99c 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -32,6 +32,8 @@
>  /* Default guest test virtual memory offset */
>  #define DEFAULT_GUEST_TEST_MEM		0xc0000000
>  
> +#define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
> +
>  /*
>   * Guest/Host shared variables. Ensure addr_gva2hva() and/or
>   * sync_global_to/from_guest() are used when accessing from
> @@ -264,11 +266,10 @@ static int setup_demand_paging(struct kvm_vm *vm,
>  	return 0;
>  }
>  
> -#define GUEST_MEM_SHIFT 30 /* 1G */
>  #define PAGE_SHIFT_4K  12
>  
>  static void run_test(enum vm_guest_mode mode, bool use_uffd,
> -		     useconds_t uffd_delay)
> +		     useconds_t uffd_delay, uint64_t guest_memory_bytes)
>  {
>  	pthread_t vcpu_thread;
>  	pthread_t uffd_handler_thread;
> @@ -276,33 +277,40 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
>  	int r;
>  
>  	/*
> -	 * We reserve page table for 2 times of extra dirty mem which
> -	 * will definitely cover the original (1G+) test range.  Here
> -	 * we do the calculation with 4K page size which is the
> -	 * smallest so the page number will be enough for all archs
> -	 * (e.g., 64K page size guest will need even less memory for
> -	 * page tables).
> +	 * We reserve page table for twice the amount of memory we intend
> +	 * to use in the test region for demand paging. Here we do the
> +	 * calculation with 4K page size which is the smallest so the page
> +	 * number will be enough for all archs. (e.g., 64K page size guest
> +	 * will need even less memory for page tables).
>  	 */
>  	vm = create_vm(mode, VCPU_ID,
> -		       2ul << (GUEST_MEM_SHIFT - PAGE_SHIFT_4K),
> +		       (2 * guest_memory_bytes) >> PAGE_SHIFT_4K,
>  		       guest_code);
>  
>  	guest_page_size = vm_get_page_size(vm);
> -	/*
> -	 * A little more than 1G of guest page sized pages.  Cover the
> -	 * case where the size is not aligned to 64 pages.
> -	 */
> -	guest_num_pages = (1ul << (GUEST_MEM_SHIFT -
> -				   vm_get_page_shift(vm))) + 16;
> +
> +	TEST_ASSERT(guest_memory_bytes % guest_page_size == 0,
> +		    "Guest memory size is not guest page size aligned.");
> +
> +	guest_num_pages = guest_memory_bytes / guest_page_size;
> +
>  #ifdef __s390x__
>  	/* Round up to multiple of 1M (segment size) */
>  	guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL;
>  #endif
> +	/*
> +	 * If there should be more memory in the guest test region than there
> +	 * can be pages in the guest, it will definitely cause problems.
> +	 */
> +	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
> +		    "Requested more guest memory than address space allows.\n"
> +		    "    guest pages: %lx max gfn: %lx\n",
> +		    guest_num_pages, vm_get_max_gfn(vm));
>  
>  	host_page_size = getpagesize();
> -	host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
> -			 !!((guest_num_pages * guest_page_size) %
> -			    host_page_size);
> +	TEST_ASSERT(guest_memory_bytes % host_page_size == 0,
> +		    "Guest memory size is not host page size aligned.");
> +	host_num_pages = guest_memory_bytes / host_page_size;
>  
>  	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
>  			      guest_page_size;
> @@ -381,7 +389,8 @@ static void help(char *name)
>  	int i;
>  
>  	puts("");
> -	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n", name);
> +	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
> +	       "          [-b bytes test memory]\n", name);
>  	printf(" -m: specify the guest mode ID to test\n"
>  	       "     (default: test all supported modes)\n"
>  	       "     This option may be used multiple times.\n"
> @@ -395,6 +404,8 @@ static void help(char *name)
>  	printf(" -d: add a delay in usec to the User Fault\n"
>  	       "     FD handler to simulate demand paging\n"
>  	       "     overheads. Ignored without -u.\n");
> +	printf(" -b: specify the number of bytes of memory which should be\n"
> +	       "     allocated to the guest.\n");

Can we take the input in megabytes instead? It might also be nice to
print the default size here.
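
Something along these lines would do (untested, just to illustrate the
suggestion; it assumes -b takes MB):

	/* in help() */
	printf(" -b: specify the amount of memory, in MB, to allocate\n"
	       "     to the guest (default: %d MB).\n",
	       DEFAULT_GUEST_TEST_MEM_SIZE >> 20);

	/* in main(), with the missing break added while at it */
	case 'b':
		guest_memory_bytes = strtoull(optarg, NULL, 0) << 20;
		break;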

>  	puts("");
>  	exit(0);
>  }
> @@ -402,6 +413,7 @@ static void help(char *name)
>  int main(int argc, char *argv[])
>  {
>  	bool mode_selected = false;
> +	uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE;
>  	unsigned int mode;
>  	int opt, i;
>  	bool use_uffd = false;
> @@ -414,7 +426,7 @@ int main(int argc, char *argv[])
>  	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
>  #endif
>  
> -	while ((opt = getopt(argc, argv, "hm:ud:")) != -1) {
> +	while ((opt = getopt(argc, argv, "hm:ud:b:")) != -1) {
>  		switch (opt) {
>  		case 'm':
>  			if (!mode_selected) {
> @@ -435,6 +447,8 @@ int main(int argc, char *argv[])
>  			TEST_ASSERT(uffd_delay >= 0,
>  				    "A negative UFFD delay is not supported.");
>  			break;
> +		case 'b':
> +			guest_memory_bytes = strtoull(optarg, NULL, 0);

Missing break. So it doesn't look like this was tested.

>  		case 'h':
>  		default:
>  			help(argv[0]);
> @@ -448,7 +462,7 @@ int main(int argc, char *argv[])
>  		TEST_ASSERT(vm_guest_mode_params[i].supported,
>  			    "Guest mode ID %d (%s) not supported.",
>  			    i, vm_guest_mode_string(i));
> -		run_test(i, use_uffd, uffd_delay);
> +		run_test(i, use_uffd, uffd_delay, guest_memory_bytes);
>  	}
>  
>  	return 0;
> -- 
> 2.24.1.735.g03f4e72817-goog
>

Thanks,
drew


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using globals
  2019-12-16 21:38 ` [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using globals Ben Gardon
@ 2020-01-07 15:23   ` Andrew Jones
  2020-01-07 18:26     ` Ben Gardon
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2020-01-07 15:23 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Peter Xu

On Mon, Dec 16, 2019 at 01:38:58PM -0800, Ben Gardon wrote:
> In preparation for supporting multiple vCPUs in the demand paging test,
> pass arguments to the vCPU instead of syncing globals to it.

This will only work if we don't spill parameters onto the stack and all
data we want to pass fit in registers. I've used multiple VCPUs in tests
before and stuck with the global syncing. I simply used arrays like this

 static my_type_t my_data[NR_VCPUS];

 static void guest_code(void)
 {
     int cpu = arch_get_cpu_id();
     
     // do something with my_data[cpu]
 }

 int main(void)
 {
     for (i = 0; i < NR_VCPUS; ++i) {
         // prepare my_data[i]
         sync_global_to_guest(vm, my_data[i]);
     }

     // run vcpus

    for (i = 0; i < NR_VCPUS; ++i) {
         sync_global_from_guest(vm, my_data[i]);
         // do something with my_data[i]
    }
 }

> 
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  .../selftests/kvm/demand_paging_test.c        | 61 +++++++++++--------
>  1 file changed, 37 insertions(+), 24 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index 4aa90a3fce99c..8ede26e088ab6 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -42,7 +42,6 @@
>   */
>  static uint64_t host_page_size;
>  static uint64_t guest_page_size;
> -static uint64_t guest_num_pages;
>  
>  static char *guest_data_prototype;
>  
> @@ -63,14 +62,13 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
>   * Continuously write to the first 8 bytes of each page in the demand paging
>   * memory region.
>   */
> -static void guest_code(void)
> +static void guest_code(uint64_t gva, uint64_t pages)
>  {
>  	int i;
>  
> -	for (i = 0; i < guest_num_pages; i++) {
> -		uint64_t addr = guest_test_virt_mem;
> +	for (i = 0; i < pages; i++) {
> +		uint64_t addr = gva + (i * guest_page_size);
>  
> -		addr += i * guest_page_size;
>  		addr &= ~(host_page_size - 1);
>  		*(uint64_t *)addr = 0x0123456789ABCDEF;
>  	}
> @@ -82,18 +80,31 @@ static void guest_code(void)
>  static void *host_test_mem;
>  static uint64_t host_num_pages;
>  
> +struct vcpu_thread_args {
> +	uint64_t gva;
> +	uint64_t pages;
> +	struct kvm_vm *vm;
> +	int vcpu_id;
> +};
> +
>  static void *vcpu_worker(void *data)
>  {
>  	int ret;
> -	struct kvm_vm *vm = data;
> +	struct vcpu_thread_args *args = (struct vcpu_thread_args *)data;
> +	struct kvm_vm *vm = args->vm;
> +	int vcpu_id = args->vcpu_id;
> +	uint64_t gva = args->gva;
> +	uint64_t pages = args->pages;
>  	struct kvm_run *run;
>  
> -	run = vcpu_state(vm, VCPU_ID);
> +	vcpu_args_set(vm, vcpu_id, 2, gva, pages);

vcpu_args_set() is currently only implemented by x86, so that's a good
reason for this to be an x86-only test for now. Well, unless this is
switched back to using global sync.

> +
> +	run = vcpu_state(vm, vcpu_id);
>  
>  	/* Let the guest access its memory */
> -	ret = _vcpu_run(vm, VCPU_ID);
> +	ret = _vcpu_run(vm, vcpu_id);
>  	TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
> -	if (get_ucall(vm, VCPU_ID, NULL) != UCALL_SYNC) {
> +	if (get_ucall(vm, vcpu_id, NULL) != UCALL_SYNC) {
>  		TEST_ASSERT(false,
>  			    "Invalid guest sync status: exit_reason=%s\n",
>  			    exit_reason_str(run->exit_reason));
> @@ -269,11 +280,13 @@ static int setup_demand_paging(struct kvm_vm *vm,
>  #define PAGE_SHIFT_4K  12
>  
>  static void run_test(enum vm_guest_mode mode, bool use_uffd,
> -		     useconds_t uffd_delay, uint64_t guest_memory_bytes)
> +		     useconds_t uffd_delay, uint64_t vcpu_wss)

Not sure why guest_memory_bytes was renamed to vcpu_wss. What is wss?
Working set size?

>  {
>  	pthread_t vcpu_thread;
>  	pthread_t uffd_handler_thread;
>  	struct kvm_vm *vm;
> +	struct vcpu_thread_args vcpu_args;
> +	uint64_t guest_num_pages;
>  	int r;
>  
>  	/*
> @@ -283,16 +296,15 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
>  	 * number will be enough for all archs. (e.g., 64K page size guest
>  	 * will need even less memory for page tables).
>  	 */
> -	vm = create_vm(mode, VCPU_ID,
> -		       (2 * guest_memory_bytes) >> PAGE_SHIFT_4K,
> +	vm = create_vm(mode, VCPU_ID, (2 * vcpu_wss) >> PAGE_SHIFT_4K,
>  		       guest_code);
>  
>  	guest_page_size = vm_get_page_size(vm);
>  
> -	TEST_ASSERT(guest_memory_bytes % guest_page_size == 0,
> +	TEST_ASSERT(vcpu_wss % guest_page_size == 0,
>  		    "Guest memory size is not guest page size aligned.");
>  
> -	guest_num_pages = guest_memory_bytes / guest_page_size;
> +	guest_num_pages = vcpu_wss / guest_page_size;
>  
>  #ifdef __s390x__
>  	/* Round up to multiple of 1M (segment size) */
> @@ -308,9 +320,9 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
>  		    guest_num_pages, vm_get_max_gfn(vm));
>  
>  	host_page_size = getpagesize();
> -	TEST_ASSERT(guest_memory_bytes % host_page_size == 0,
> +	TEST_ASSERT(vcpu_wss % host_page_size == 0,
>  		    "Guest memory size is not host page size aligned.");
> -	host_num_pages = guest_memory_bytes / host_page_size;
> +	host_num_pages = vcpu_wss / host_page_size;
>  
>  	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
>  			      guest_page_size;
> @@ -354,10 +366,12 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
>  	/* Export the shared variables to the guest */
>  	sync_global_to_guest(vm, host_page_size);
>  	sync_global_to_guest(vm, guest_page_size);
> -	sync_global_to_guest(vm, guest_test_virt_mem);
> -	sync_global_to_guest(vm, guest_num_pages);
>  
> -	pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
> +	vcpu_args.vm = vm;
> +	vcpu_args.vcpu_id = VCPU_ID;
> +	vcpu_args.gva = guest_test_virt_mem;
> +	vcpu_args.pages = guest_num_pages;
> +	pthread_create(&vcpu_thread, NULL, vcpu_worker, &vcpu_args);
>  
>  	/* Wait for the vcpu thread to quit */
>  	pthread_join(vcpu_thread, NULL);
> @@ -404,8 +418,7 @@ static void help(char *name)
>  	printf(" -d: add a delay in usec to the User Fault\n"
>  	       "     FD handler to simulate demand paging\n"
>  	       "     overheads. Ignored without -u.\n");
> -	printf(" -b: specify the number of bytes of memory which should be\n"
> -	       "     allocated to the guest.\n");
> +	printf(" -b: specify the working set size, in bytes for each vCPU.\n");
>  	puts("");
>  	exit(0);
>  }
> @@ -413,7 +426,7 @@ static void help(char *name)
>  int main(int argc, char *argv[])
>  {
>  	bool mode_selected = false;
> -	uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE;
> +	uint64_t vcpu_wss = DEFAULT_GUEST_TEST_MEM_SIZE;
>  	unsigned int mode;
>  	int opt, i;
>  	bool use_uffd = false;
> @@ -448,7 +461,7 @@ int main(int argc, char *argv[])
>  				    "A negative UFFD delay is not supported.");
>  			break;
>  		case 'b':
> -			guest_memory_bytes = strtoull(optarg, NULL, 0);
> +			vcpu_wss = strtoull(optarg, NULL, 0);
>  		case 'h':
>  		default:
>  			help(argv[0]);
> @@ -462,7 +475,7 @@ int main(int argc, char *argv[])
>  		TEST_ASSERT(vm_guest_mode_params[i].supported,
>  			    "Guest mode ID %d (%s) not supported.",
>  			    i, vm_guest_mode_string(i));
> -		run_test(i, use_uffd, uffd_delay, guest_memory_bytes);
> +		run_test(i, use_uffd, uffd_delay, vcpu_wss);
>  	}
>  
>  	return 0;
> -- 
> 2.24.1.735.g03f4e72817-goog
>

Thanks,
drew


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 6/8] KVM: selftests: Support multiple vCPUs in demand paging test
  2019-12-16 21:38 ` [PATCH v3 6/8] KVM: selftests: Support multiple vCPUs in demand paging test Ben Gardon
@ 2020-01-07 15:27   ` Andrew Jones
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2020-01-07 15:27 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Peter Xu

On Mon, Dec 16, 2019 at 01:38:59PM -0800, Ben Gardon wrote:
> Most VMs have multiple vCPUs, the concurrent execution of which has a
> substantial impact on demand paging performance. Add an option to create
> multiple vCPUs to each access disjoint regions of memory.
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  .../selftests/kvm/demand_paging_test.c        | 199 ++++++++++++------
>  1 file changed, 136 insertions(+), 63 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index 8ede26e088ab6..2b80f614dd537 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -24,8 +24,6 @@
>  #include "kvm_util.h"
>  #include "processor.h"
>  
> -#define VCPU_ID				1
> -
>  /* The memory slot index demand page */
>  #define TEST_MEM_SLOT_INDEX		1
>  
> @@ -34,6 +32,12 @@
>  
>  #define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
>  
> +#ifdef PRINT_PER_VCPU_UPDATES
> +#define PER_VCPU_DEBUG(...) DEBUG(__VA_ARGS__)
> +#else
> +#define PER_VCPU_DEBUG(...)
> +#endif
> +
>  /*
>   * Guest/Host shared variables. Ensure addr_gva2hva() and/or
>   * sync_global_to/from_guest() are used when accessing from
> @@ -76,10 +80,6 @@ static void guest_code(uint64_t gva, uint64_t pages)
>  	GUEST_SYNC(1);
>  }
>  
> -/* Points to the test VM memory region on which we are doing demand paging */
> -static void *host_test_mem;
> -static uint64_t host_num_pages;
> -
>  struct vcpu_thread_args {
>  	uint64_t gva;
>  	uint64_t pages;
> @@ -113,18 +113,32 @@ static void *vcpu_worker(void *data)
>  	return NULL;
>  }
>  
> -static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
> -				uint64_t extra_mem_pages, void *guest_code)
> +#define PAGE_SHIFT_4K  12
> +#define PTES_PER_PT 512
> +
> +static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus,
> +				uint64_t vcpu_wss)
>  {
>  	struct kvm_vm *vm;
> -	uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
> +	uint64_t pages = DEFAULT_GUEST_PHY_PAGES;
>  
> -	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
> +	/* Account for a few pages per-vCPU for stacks */
> +	pages += DEFAULT_STACK_PGS * vcpus;
> +
> +	/*
> +	 * Reserve twice the amount of memory needed to map the test region and
> +	 * the page table / stacks region, at 4k, for page tables. Do the
> +	 * calculation with 4K page size: the smallest of all archs. (e.g., 64K
> +	 * page size guest will need even less memory for page tables).
> +	 */
> +	pages += (2 * pages) / PTES_PER_PT;
> +	pages += ((2 * vcpus * vcpu_wss) >> PAGE_SHIFT_4K) / PTES_PER_PT;
> +
> +	vm = _vm_create(mode, pages, O_RDWR);
>  	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
>  #ifdef __x86_64__
>  	vm_create_irqchip(vm);
>  #endif
> -	vm_vcpu_add_default(vm, vcpuid, guest_code);
>  	return vm;
>  }
>  
> @@ -232,15 +246,13 @@ static void *uffd_handler_thread_fn(void *arg)
>  
>  static int setup_demand_paging(struct kvm_vm *vm,
>  			       pthread_t *uffd_handler_thread,
> -			       useconds_t uffd_delay)
> +			       useconds_t uffd_delay,
> +			       struct uffd_handler_args *uffd_args,
> +			       void *hva, uint64_t len)
>  {
>  	int uffd;
>  	struct uffdio_api uffdio_api;
>  	struct uffdio_register uffdio_register;
> -	struct uffd_handler_args uffd_args;
> -
> -	guest_data_prototype = malloc(host_page_size);
> -	memset(guest_data_prototype, 0xAB, host_page_size);
>  
>  	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
>  	if (uffd == -1) {
> @@ -255,8 +267,8 @@ static int setup_demand_paging(struct kvm_vm *vm,
>  		return -1;
>  	}
>  
> -	uffdio_register.range.start = (uint64_t)host_test_mem;
> -	uffdio_register.range.len = host_num_pages * host_page_size;
> +	uffdio_register.range.start = (uint64_t)hva;
> +	uffdio_register.range.len = len;
>  	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
>  	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) {
>  		DEBUG("ioctl uffdio_register failed\n");
> @@ -269,42 +281,37 @@ static int setup_demand_paging(struct kvm_vm *vm,
>  		return -1;
>  	}
>  
> -	uffd_args.uffd = uffd;
> -	uffd_args.delay = uffd_delay;
> +	uffd_args->uffd = uffd;
> +	uffd_args->delay = uffd_delay;
>  	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
> -		       &uffd_args);
> +		       uffd_args);
> +
> +	PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n",
> +		       hva, hva + len);
>  
>  	return 0;
>  }
>  
> -#define PAGE_SHIFT_4K  12
> -
>  static void run_test(enum vm_guest_mode mode, bool use_uffd,
> -		     useconds_t uffd_delay, uint64_t vcpu_wss)
> +		     useconds_t uffd_delay, int vcpus, uint64_t vcpu_wss)
>  {
> -	pthread_t vcpu_thread;
> -	pthread_t uffd_handler_thread;
> +	pthread_t *vcpu_threads;
> +	pthread_t *uffd_handler_threads = NULL;
> +	struct uffd_handler_args *uffd_args = NULL;
>  	struct kvm_vm *vm;
> -	struct vcpu_thread_args vcpu_args;
> +	struct vcpu_thread_args *vcpu_args;
>  	uint64_t guest_num_pages;
> +	int vcpu_id;
>  	int r;
>  
> -	/*
> -	 * We reserve page table for twice the amount of memory we intend
> -	 * to use in the test region for demand paging. Here we do the
> -	 * calculation with 4K page size which is the smallest so the page
> -	 * number will be enough for all archs. (e.g., 64K page size guest
> -	 * will need even less memory for page tables).
> -	 */
> -	vm = create_vm(mode, VCPU_ID, (2 * vcpu_wss) >> PAGE_SHIFT_4K,
> -		       guest_code);
> +	vm = create_vm(mode, vcpus, vcpu_wss);
>  
>  	guest_page_size = vm_get_page_size(vm);
>  
>  	TEST_ASSERT(vcpu_wss % guest_page_size == 0,
>  		    "Guest memory size is not guest page size aligned.");
>  
> -	guest_num_pages = vcpu_wss / guest_page_size;
> +	guest_num_pages = (vcpus * vcpu_wss) / guest_page_size;
>  
>  #ifdef __s390x__
>  	/* Round up to multiple of 1M (segment size) */
> @@ -316,13 +323,12 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
>  	 */
>  	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
>  		    "Requested more guest memory than address space allows.\n"
> -		    "    guest pages: %lx max gfn: %lx\n",
> -		    guest_num_pages, vm_get_max_gfn(vm));
> +		    "    guest pages: %lx max gfn: %lx vcpus: %d wss: %lx]\n",
> +		    guest_num_pages, vm_get_max_gfn(vm), vcpus, vcpu_wss);
>  
>  	host_page_size = getpagesize();
>  	TEST_ASSERT(vcpu_wss % host_page_size == 0,
>  		    "Guest memory size is not host page size aligned.");
> -	host_num_pages = vcpu_wss / host_page_size;
>  
>  	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
>  			      guest_page_size;
> @@ -347,43 +353,102 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
>  	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem,
>  		 guest_num_pages * guest_page_size, 0);
>  
> -	/* Cache the HVA pointer of the region */
> -	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
> +	/* Export the shared variables to the guest */
> +	sync_global_to_guest(vm, host_page_size);
> +	sync_global_to_guest(vm, guest_page_size);
> +
> +	guest_data_prototype = malloc(host_page_size);
> +	TEST_ASSERT(guest_data_prototype, "Memory allocation failed");
> +	memset(guest_data_prototype, 0xAB, host_page_size);
> +
> +	vcpu_threads = malloc(vcpus * sizeof(*vcpu_threads));
> +	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
>  
>  	if (use_uffd) {
> -		/* Set up user fault fd to handle demand paging requests. */
>  		quit_uffd_thread = false;
> -		r = setup_demand_paging(vm, &uffd_handler_thread,
> -					uffd_delay);
> -		if (r < 0)
> -			exit(-r);
> +
> +		uffd_handler_threads =
> +			malloc(vcpus * sizeof(*uffd_handler_threads));
> +		TEST_ASSERT(uffd_handler_threads, "Memory allocation failed");
> +
> +		uffd_args = malloc(vcpus * sizeof(*uffd_args));
> +		TEST_ASSERT(uffd_args, "Memory allocation failed");
>  	}
>  
> +	vcpu_args = malloc(vcpus * sizeof(*vcpu_args));
> +	TEST_ASSERT(vcpu_args, "Memory allocation failed");
> +
> +	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> +		vm_paddr_t vcpu_gpa;
> +		void *vcpu_hva;
> +
> +		vm_vcpu_add_default(vm, vcpu_id, guest_code);
> +
> +		vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_wss);
> +		PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
> +			       vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_wss);
> +
> +		/* Cache the HVA pointer of the region */
> +		vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
> +
> +		if (use_uffd) {
> +			/*
> +			 * Set up user fault fd to handle demand paging
> +			 * requests.
> +			 */
> +			r = setup_demand_paging(vm,
> +						&uffd_handler_threads[vcpu_id],
> +						uffd_delay, &uffd_args[vcpu_id],
> +						vcpu_hva, vcpu_wss);
> +			if (r < 0)
> +				exit(-r);
> +		}
> +
>  #ifdef __x86_64__
> -	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
> +		vcpu_set_cpuid(vm, vcpu_id, kvm_get_supported_cpuid());
>  #endif
>  
> -	/* Export the shared variables to the guest */
> -	sync_global_to_guest(vm, host_page_size);
> -	sync_global_to_guest(vm, guest_page_size);
> +		vcpu_args[vcpu_id].vm = vm;
> +		vcpu_args[vcpu_id].vcpu_id = vcpu_id;
> +		vcpu_args[vcpu_id].gva = guest_test_virt_mem +
> +					 (vcpu_id * vcpu_wss);
> +		vcpu_args[vcpu_id].pages = vcpu_wss / guest_page_size;
> +	}
> +
> +	DEBUG("Finished creating vCPUs and starting uffd threads\n");
> +
> +	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> +		pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker,
> +			       &vcpu_args[vcpu_id]);
> +	}
> +
> +	DEBUG("Started all vCPUs\n");
>  
> -	vcpu_args.vm = vm;
> -	vcpu_args.vcpu_id = VCPU_ID;
> -	vcpu_args.gva = guest_test_virt_mem;
> -	vcpu_args.pages = guest_num_pages;
> -	pthread_create(&vcpu_thread, NULL, vcpu_worker, &vcpu_args);
> +	/* Wait for the vcpu threads to quit */
> +	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> +		pthread_join(vcpu_threads[vcpu_id], NULL);
> +		PER_VCPU_DEBUG("Joined thread for vCPU %d\n", vcpu_id);
> +	}
>  
> -	/* Wait for the vcpu thread to quit */
> -	pthread_join(vcpu_thread, NULL);
> +	DEBUG("All vCPU threads joined\n");
>  
>  	if (use_uffd) {
> -		/* Tell the user fault fd handler thread to quit */
> +		/* Tell the user fault fd handler threads to quit */
>  		quit_uffd_thread = true;
> -		pthread_join(uffd_handler_thread, NULL);
> +		for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++)
> +			pthread_join(uffd_handler_threads[vcpu_id], NULL);
>  	}
>  
>  	ucall_uninit(vm);
>  	kvm_vm_free(vm);
> +
> +	free(guest_data_prototype);
> +	free(vcpu_threads);
> +	if (use_uffd) {
> +		free(uffd_handler_threads);
> +		free(uffd_args);
> +	}
> +	free(vcpu_args);
>  }
>  
>  struct vm_guest_mode_params {
> @@ -404,7 +469,7 @@ static void help(char *name)
>  
>  	puts("");
>  	printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
> -	       "          [-b bytes test memory]\n", name);
> +	       "          [-b bytes test memory] [-v vcpus]\n", name);
>  	printf(" -m: specify the guest mode ID to test\n"
>  	       "     (default: test all supported modes)\n"
>  	       "     This option may be used multiple times.\n"
> @@ -419,6 +484,7 @@ static void help(char *name)
>  	       "     FD handler to simulate demand paging\n"
>  	       "     overheads. Ignored without -u.\n");
>  	printf(" -b: specify the working set size, in bytes for each vCPU.\n");
> +	printf(" -v: specify the number of vCPUs to run.\n");
>  	puts("");
>  	exit(0);
>  }
> @@ -427,6 +493,7 @@ int main(int argc, char *argv[])
>  {
>  	bool mode_selected = false;
>  	uint64_t vcpu_wss = DEFAULT_GUEST_TEST_MEM_SIZE;
> +	int vcpus = 1;
>  	unsigned int mode;
>  	int opt, i;
>  	bool use_uffd = false;
> @@ -439,7 +506,7 @@ int main(int argc, char *argv[])
>  	vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
>  #endif
>  
> -	while ((opt = getopt(argc, argv, "hm:ud:b:")) != -1) {
> +	while ((opt = getopt(argc, argv, "hm:ud:b:v:")) != -1) {
>  		switch (opt) {
>  		case 'm':
>  			if (!mode_selected) {
> @@ -462,6 +529,12 @@ int main(int argc, char *argv[])
>  			break;
>  		case 'b':
>  			vcpu_wss = strtoull(optarg, NULL, 0);
> +			break;

There's that missing break. It's good to test each patch to ensure
bisectability.

> +		case 'v':
> +			vcpus = atoi(optarg);
> +			TEST_ASSERT(vcpus > 0,
> +				    "Must have a positive number of vCPUs");
> +			break;
>  		case 'h':
>  		default:
>  			help(argv[0]);
> @@ -475,7 +548,7 @@ int main(int argc, char *argv[])
>  		TEST_ASSERT(vm_guest_mode_params[i].supported,
>  			    "Guest mode ID %d (%s) not supported.",
>  			    i, vm_guest_mode_string(i));
> -		run_test(i, use_uffd, uffd_delay, vcpu_wss);
> +		run_test(i, use_uffd, uffd_delay, vcpus, vcpu_wss);
>  	}
>  
>  	return 0;
> -- 
> 2.24.1.735.g03f4e72817-goog
>

Thanks,
drew 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create
  2019-12-16 21:39 ` [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create Ben Gardon
@ 2020-01-07 15:42   ` Andrew Jones
  2020-01-07 21:20     ` Ben Gardon
  0 siblings, 1 reply; 26+ messages in thread
From: Andrew Jones @ 2020-01-07 15:42 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Peter Xu

On Mon, Dec 16, 2019 at 01:39:01PM -0800, Ben Gardon wrote:
> KVM creates internal memslots between 3 and 4 GiB paddrs on the first
> vCPU creation. If memslot 0 is large enough, it collides with these
> memslots and causes vCPU creation to fail. When requesting more than 3G,
> start memslot 0 at 4G in _vm_create.
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 41cf45416060f..886d58e6cac39 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -113,6 +113,8 @@ const char * const vm_guest_mode_string[] = {
>  _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
>  	       "Missing new mode strings?");
>  
> +#define KVM_INTERNAL_MEMSLOTS_START_PADDR (3UL << 30)
> +#define KVM_INTERNAL_MEMSLOTS_END_PADDR (4UL << 30)
>  /*
>   * VM Create
>   *
> @@ -128,13 +130,16 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
>   *
>   * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K).
>   * When phy_pages is non-zero, a memory region of phy_pages physical pages
> - * is created and mapped starting at guest physical address 0.  The file
> - * descriptor to control the created VM is created with the permissions
> - * given by perm (e.g. O_RDWR).
> + * is created. If phy_pages is less than 3G, it is mapped starting at guest
> + * physical address 0. If phy_pages is greater than 3G, it is mapped starting
> + * 4G into the guest physical address space to avoid KVM internal memslots
> + * which map the region between 3G and 4G. The file descriptor to control the
> + * created VM is created with the permissions given by perm (e.g. O_RDWR).
>   */
>  struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
>  {
>  	struct kvm_vm *vm;
> +	uint64_t guest_paddr = 0;
>  
>  	DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>  
> @@ -227,9 +232,11 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
>  
>  	/* Allocate and setup memory for guest. */
>  	vm->vpages_mapped = sparsebit_alloc();
> +	if (guest_paddr + phy_pages * vm->page_size > KVM_INTERNAL_MEMSLOTS_START_PADDR)
> +		guest_paddr = KVM_INTERNAL_MEMSLOTS_END_PADDR;
>  	if (phy_pages != 0)
>  		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
> -					    0, 0, phy_pages, 0);
> +					    guest_paddr, 0, phy_pages, 0);
>  
>  	return vm;
>  }
> -- 
> 2.24.1.735.g03f4e72817-goog
>

I feel like this function is becoming too magic, and it'll be harder
for tests that need to add additional memory regions to know which
physical addresses are available. Maybe we should instead assert when
more than 3G is requested at offset zero, and provide another
interface that allocates at an offset chosen by the caller, as long
as that offset is 4G or above (asserting when it isn't)?
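
Just to illustrate what I mean (a rough, untested sketch; the helper
name is made up and it reuses the KVM_INTERNAL_MEMSLOTS_* defines from
this patch):

	/* In _vm_create(): refuse to grow memslot 0 into the hole. */
	if (phy_pages != 0) {
		TEST_ASSERT(phy_pages * vm->page_size <=
			    KVM_INTERNAL_MEMSLOTS_START_PADDR,
			    "memslot 0 at paddr 0 must not extend past 3G");
		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
					    0, 0, phy_pages, 0);
	}

	/* Separate helper for caller-chosen placement above the hole. */
	void vm_mem_region_add_above_4g(struct kvm_vm *vm, uint32_t slot,
					uint64_t guest_paddr, uint64_t npages)
	{
		TEST_ASSERT(guest_paddr >= KVM_INTERNAL_MEMSLOTS_END_PADDR,
			    "guest_paddr must be 4G or above to avoid KVM's "
			    "internal memslots");
		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
					    guest_paddr, slot, npages, 0);
	}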

Thanks,
drew


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/8] KVM: selftests: Add demand paging content to the demand paging test
  2019-12-16 21:38 ` [PATCH v3 2/8] KVM: selftests: Add demand paging content to the " Ben Gardon
@ 2020-01-07 16:00   ` Peter Xu
  2020-01-07 21:13     ` Ben Gardon
  0 siblings, 1 reply; 26+ messages in thread
From: Peter Xu @ 2020-01-07 16:00 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Andrew Jones

On Mon, Dec 16, 2019 at 01:38:55PM -0800, Ben Gardon wrote:

[...]

> +static void *uffd_handler_thread_fn(void *arg)
> +{
> +	struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
> +	int uffd = uffd_args->uffd;
> +	int64_t pages = 0;
> +
> +	while (!quit_uffd_thread) {
> +		struct uffd_msg msg;
> +		struct pollfd pollfd[1];
> +		int r;
> +		uint64_t addr;
> +
> +		pollfd[0].fd = uffd;
> +		pollfd[0].events = POLLIN;
> +
> +		/*
> +		 * TODO this introduces a 0.5sec delay at the end of the test.
> +		 * Reduce the timeout or eliminate it following the example in
> +		 * tools/testing/selftests/vm/userfaultfd.c
> +		 */
> +		r = poll(pollfd, 1, 500);

Would you mind implementing it instead of adding a TODO?  IIUC it's as
simple as a few more lines than the comment itself.
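
For instance, something along these lines should do it (untested
sketch; it assumes uffd_handler_args also carries the read end of a
pipe, pipefd, that the main thread writes a byte to when it wants the
handler thread to exit):

	struct pollfd pollfd[2];
	int r;

	pollfd[0].fd = uffd;
	pollfd[0].events = POLLIN;
	pollfd[1].fd = uffd_args->pipefd;
	pollfd[1].events = POLLIN;

	/* Block until a fault arrives or the main thread signals exit. */
	r = poll(pollfd, 2, -1);
	TEST_ASSERT(r != -1, "poll failed");
	if (pollfd[1].revents & POLLIN)
		break;	/* no 0.5 sec timeout needed at the end of the test */

Thanks,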

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 3/8] KVM: selftests: Add configurable demand paging delay
  2019-12-16 21:38 ` [PATCH v3 3/8] KVM: selftests: Add configurable demand paging delay Ben Gardon
@ 2020-01-07 16:04   ` Peter Xu
  0 siblings, 0 replies; 26+ messages in thread
From: Peter Xu @ 2020-01-07 16:04 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Andrew Jones

On Mon, Dec 16, 2019 at 01:38:56PM -0800, Ben Gardon wrote:
> When running the demand paging test with the -u option, the User Fault
> FD handler essentially adds an arbitrary delay to page fault resolution.
> To enable better simulation of a real demand paging scenario, add a
> configurable delay to the UFFD handler.
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using globals
  2020-01-07 15:23   ` Andrew Jones
@ 2020-01-07 18:26     ` Ben Gardon
  2020-01-08 13:59       ` Andrew Jones
  0 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2020-01-07 18:26 UTC (permalink / raw)
  To: Andrew Jones
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Peter Xu

On Tue, Jan 7, 2020 at 7:23 AM Andrew Jones <drjones@redhat.com> wrote:
>
> On Mon, Dec 16, 2019 at 01:38:58PM -0800, Ben Gardon wrote:
> > In preparation for supporting multiple vCPUs in the demand paging test,
> > pass arguments to the vCPU instead of syncing globals to it.
>
> This will only work if we don't spill parameters onto the stack and all
> data we want to pass fit in registers.

That's a great point. I'll see about using globals and deriving the
vCPU ID to look up the per-vCPU args. In your pseudocode below, I see
you use arch_get_cpu_id(), but I don't believe that function exists in
the selftests, and I don't know off the top of my head how to implement
it for s390 and aarch64. Do you have any pointers for implementing such
a function?

> I've used multiple VCPUs in tests
> before and stuck with the global syncing. I simply used arrays like this
>
>  static my_type_t my_data[NR_VCPUS];
>
>  static void guest_code(void)
>  {
>      int cpu = arch_get_cpu_id();
>
>      // do something with my_data[cpu]
>  }
>
>  int main(void)
>  {
>      for (i = 0; i < NR_VCPUS; ++i) {
>          // prepare my_data[i]
>          sync_global_to_guest(vm, my_data[i]);
>      }
>
>      // run vcpus
>
>     for (i = 0; i < NR_VCPUS; ++i) {
>          sync_global_from_guest(vm, my_data[i]);
>          // do something with my_data[i]
>     }
>  }
>
> >
> > Signed-off-by: Ben Gardon <bgardon@google.com>
> > ---
> >  .../selftests/kvm/demand_paging_test.c        | 61 +++++++++++--------
> >  1 file changed, 37 insertions(+), 24 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> > index 4aa90a3fce99c..8ede26e088ab6 100644
> > --- a/tools/testing/selftests/kvm/demand_paging_test.c
> > +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> > @@ -42,7 +42,6 @@
> >   */
> >  static uint64_t host_page_size;
> >  static uint64_t guest_page_size;
> > -static uint64_t guest_num_pages;
> >
> >  static char *guest_data_prototype;
> >
> > @@ -63,14 +62,13 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
> >   * Continuously write to the first 8 bytes of each page in the demand paging
> >   * memory region.
> >   */
> > -static void guest_code(void)
> > +static void guest_code(uint64_t gva, uint64_t pages)
> >  {
> >       int i;
> >
> > -     for (i = 0; i < guest_num_pages; i++) {
> > -             uint64_t addr = guest_test_virt_mem;
> > +     for (i = 0; i < pages; i++) {
> > +             uint64_t addr = gva + (i * guest_page_size);
> >
> > -             addr += i * guest_page_size;
> >               addr &= ~(host_page_size - 1);
> >               *(uint64_t *)addr = 0x0123456789ABCDEF;
> >       }
> > @@ -82,18 +80,31 @@ static void guest_code(void)
> >  static void *host_test_mem;
> >  static uint64_t host_num_pages;
> >
> > +struct vcpu_thread_args {
> > +     uint64_t gva;
> > +     uint64_t pages;
> > +     struct kvm_vm *vm;
> > +     int vcpu_id;
> > +};
> > +
> >  static void *vcpu_worker(void *data)
> >  {
> >       int ret;
> > -     struct kvm_vm *vm = data;
> > +     struct vcpu_thread_args *args = (struct vcpu_thread_args *)data;
> > +     struct kvm_vm *vm = args->vm;
> > +     int vcpu_id = args->vcpu_id;
> > +     uint64_t gva = args->gva;
> > +     uint64_t pages = args->pages;
> >       struct kvm_run *run;
> >
> > -     run = vcpu_state(vm, VCPU_ID);
> > +     vcpu_args_set(vm, vcpu_id, 2, gva, pages);
>
> vcpu_args_set() is currently only implemented by x86, so that's a good
> reason for this to be an x86-only test for now. Well, unless this is
> switched back to using global sync.
>
> > +
> > +     run = vcpu_state(vm, vcpu_id);
> >
> >       /* Let the guest access its memory */
> > -     ret = _vcpu_run(vm, VCPU_ID);
> > +     ret = _vcpu_run(vm, vcpu_id);
> >       TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
> > -     if (get_ucall(vm, VCPU_ID, NULL) != UCALL_SYNC) {
> > +     if (get_ucall(vm, vcpu_id, NULL) != UCALL_SYNC) {
> >               TEST_ASSERT(false,
> >                           "Invalid guest sync status: exit_reason=%s\n",
> >                           exit_reason_str(run->exit_reason));
> > @@ -269,11 +280,13 @@ static int setup_demand_paging(struct kvm_vm *vm,
> >  #define PAGE_SHIFT_4K  12
> >
> >  static void run_test(enum vm_guest_mode mode, bool use_uffd,
> > -                  useconds_t uffd_delay, uint64_t guest_memory_bytes)
> > +                  useconds_t uffd_delay, uint64_t vcpu_wss)
>
> Not sure why guest_memory_bytes was renamed to vcpu_wss. What is wss?
> Working set size?

wss indeed stands for working set size, but I agree there's no reason
to use the abbreviation. I'll rename it to vcpu_memory_bytes
instead.

>
> >  {
> >       pthread_t vcpu_thread;
> >       pthread_t uffd_handler_thread;
> >       struct kvm_vm *vm;
> > +     struct vcpu_thread_args vcpu_args;
> > +     uint64_t guest_num_pages;
> >       int r;
> >
> >       /*
> > @@ -283,16 +296,15 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
> >        * number will be enough for all archs. (e.g., 64K page size guest
> >        * will need even less memory for page tables).
> >        */
> > -     vm = create_vm(mode, VCPU_ID,
> > -                    (2 * guest_memory_bytes) >> PAGE_SHIFT_4K,
> > +     vm = create_vm(mode, VCPU_ID, (2 * vcpu_wss) >> PAGE_SHIFT_4K,
> >                      guest_code);
> >
> >       guest_page_size = vm_get_page_size(vm);
> >
> > -     TEST_ASSERT(guest_memory_bytes % guest_page_size == 0,
> > +     TEST_ASSERT(vcpu_wss % guest_page_size == 0,
> >                   "Guest memory size is not guest page size aligned.");
> >
> > -     guest_num_pages = guest_memory_bytes / guest_page_size;
> > +     guest_num_pages = vcpu_wss / guest_page_size;
> >
> >  #ifdef __s390x__
> >       /* Round up to multiple of 1M (segment size) */
> > @@ -308,9 +320,9 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
> >                   guest_num_pages, vm_get_max_gfn(vm));
> >
> >       host_page_size = getpagesize();
> > -     TEST_ASSERT(guest_memory_bytes % host_page_size == 0,
> > +     TEST_ASSERT(vcpu_wss % host_page_size == 0,
> >                   "Guest memory size is not host page size aligned.");
> > -     host_num_pages = guest_memory_bytes / host_page_size;
> > +     host_num_pages = vcpu_wss / host_page_size;
> >
> >       guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
> >                             guest_page_size;
> > @@ -354,10 +366,12 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
> >       /* Export the shared variables to the guest */
> >       sync_global_to_guest(vm, host_page_size);
> >       sync_global_to_guest(vm, guest_page_size);
> > -     sync_global_to_guest(vm, guest_test_virt_mem);
> > -     sync_global_to_guest(vm, guest_num_pages);
> >
> > -     pthread_create(&vcpu_thread, NULL, vcpu_worker, vm);
> > +     vcpu_args.vm = vm;
> > +     vcpu_args.vcpu_id = VCPU_ID;
> > +     vcpu_args.gva = guest_test_virt_mem;
> > +     vcpu_args.pages = guest_num_pages;
> > +     pthread_create(&vcpu_thread, NULL, vcpu_worker, &vcpu_args);
> >
> >       /* Wait for the vcpu thread to quit */
> >       pthread_join(vcpu_thread, NULL);
> > @@ -404,8 +418,7 @@ static void help(char *name)
> >       printf(" -d: add a delay in usec to the User Fault\n"
> >              "     FD handler to simulate demand paging\n"
> >              "     overheads. Ignored without -u.\n");
> > -     printf(" -b: specify the number of bytes of memory which should be\n"
> > -            "     allocated to the guest.\n");
> > +     printf(" -b: specify the working set size, in bytes for each vCPU.\n");
> >       puts("");
> >       exit(0);
> >  }
> > @@ -413,7 +426,7 @@ static void help(char *name)
> >  int main(int argc, char *argv[])
> >  {
> >       bool mode_selected = false;
> > -     uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE;
> > +     uint64_t vcpu_wss = DEFAULT_GUEST_TEST_MEM_SIZE;
> >       unsigned int mode;
> >       int opt, i;
> >       bool use_uffd = false;
> > @@ -448,7 +461,7 @@ int main(int argc, char *argv[])
> >                                   "A negative UFFD delay is not supported.");
> >                       break;
> >               case 'b':
> > -                     guest_memory_bytes = strtoull(optarg, NULL, 0);
> > +                     vcpu_wss = strtoull(optarg, NULL, 0);
> >               case 'h':
> >               default:
> >                       help(argv[0]);
> > @@ -462,7 +475,7 @@ int main(int argc, char *argv[])
> >               TEST_ASSERT(vm_guest_mode_params[i].supported,
> >                           "Guest mode ID %d (%s) not supported.",
> >                           i, vm_guest_mode_string(i));
> > -             run_test(i, use_uffd, uffd_delay, guest_memory_bytes);
> > +             run_test(i, use_uffd, uffd_delay, vcpu_wss);
> >       }
> >
> >       return 0;
> > --
> > 2.24.1.735.g03f4e72817-goog
> >
>
> Thanks,
> drew
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/8] KVM: selftests: Create a demand paging test
  2020-01-07 14:56     ` Andrew Jones
@ 2020-01-07 18:41       ` Ben Gardon
  2020-01-08 13:45         ` Andrew Jones
  0 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2020-01-07 18:41 UTC (permalink / raw)
  To: Andrew Jones
  Cc: Peter Xu, linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews

I'll try to implement Drew's suggestion re: syncing global variables
and then looking up the vCPU ID. If I can do that, I'll upload another
patch set covering s390, aarch64, and x86. If I can't, I'll move this
test to the x86_64 subdirectory.

I apologize for not responding to the comments on the previous version
of this patch set. I'm still learning the mailing list etiquette. In
the future, is it preferable that I reply to those comments when I
upload a new patch set addressing them, or should I add a note in the
new patch emails about the comments I addressed in that update?

I don't have any aarch64 or s390 hardware handy to test on so I'll try
to move support for those architectures to separate commits at the end
of the series, and mark them untested.

Thank you for your quick responses!

On Tue, Jan 7, 2020 at 6:56 AM Andrew Jones <drjones@redhat.com> wrote:
>
> On Tue, Jan 07, 2020 at 09:33:34AM -0500, Peter Xu wrote:
> > On Mon, Dec 16, 2019 at 01:38:54PM -0800, Ben Gardon wrote:
> > > While userfaultfd, KVM's demand paging implementation, is not specific
> > > to KVM, having a benchmark for its performance will be useful for
> > > guiding performance improvements to KVM. As a first step towards creating
> > > a userfaultfd demand paging test, create a simple memory access test,
> > > based on dirty_log_test.
> > >
> > > Signed-off-by: Ben Gardon <bgardon@google.com>
> >
> > It's fine to start with x86-only for this test, but imho it would be
> > better to mention that in the cover letter, or to reply to the reviewer
> > comments noting that you removed aarch64 support from the previous post.
>
> I'd also prefer that, if it's x86-only, it be put in the x86_64
> subdirectory and the arch #ifdefs be dropped. The question is why it's
> x86-only for now, though. Will it take a lot of work to port it to
> other architectures? Or does it just need testing by someone with
> the hardware?
>
> Thanks,
> drew
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/8] KVM: selftests: Add demand paging content to the demand paging test
  2020-01-07 16:00   ` Peter Xu
@ 2020-01-07 21:13     ` Ben Gardon
  0 siblings, 0 replies; 26+ messages in thread
From: Ben Gardon @ 2020-01-07 21:13 UTC (permalink / raw)
  To: Peter Xu
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Andrew Jones

Done. I'll have it in the next version of the patch set I send out.

On Tue, Jan 7, 2020 at 8:00 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Mon, Dec 16, 2019 at 01:38:55PM -0800, Ben Gardon wrote:
>
> [...]
>
> > +static void *uffd_handler_thread_fn(void *arg)
> > +{
> > +     struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
> > +     int uffd = uffd_args->uffd;
> > +     int64_t pages = 0;
> > +
> > +     while (!quit_uffd_thread) {
> > +             struct uffd_msg msg;
> > +             struct pollfd pollfd[1];
> > +             int r;
> > +             uint64_t addr;
> > +
> > +             pollfd[0].fd = uffd;
> > +             pollfd[0].events = POLLIN;
> > +
> > +             /*
> > +              * TODO this introduces a 0.5sec delay at the end of the test.
> > +              * Reduce the timeout or eliminate it following the example in
> > +              * tools/testing/selftests/vm/userfaultfd.c
> > +              */
> > +             r = poll(pollfd, 1, 500);
>
> Would you mind implement it instead of adding a todo?  IIUC it's as
> simple as a few more lines than the comment itself.  Thanks,
>
> --
> Peter Xu
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 4/8] KVM: selftests: Add memory size parameter to the demand paging test
  2020-01-07 15:02   ` Andrew Jones
@ 2020-01-07 21:18     ` Ben Gardon
  0 siblings, 0 replies; 26+ messages in thread
From: Ben Gardon @ 2020-01-07 21:18 UTC (permalink / raw)
  To: Andrew Jones
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Peter Xu

On Tue, Jan 7, 2020 at 7:02 AM Andrew Jones <drjones@redhat.com> wrote:
>
> On Mon, Dec 16, 2019 at 01:38:57PM -0800, Ben Gardon wrote:
> > Add an argument to allow the demand paging test to work on larger and
> > smaller guest sizes.
> >
> > Signed-off-by: Ben Gardon <bgardon@google.com>
> > ---
> >  .../selftests/kvm/demand_paging_test.c        | 56 ++++++++++++-------
> >  1 file changed, 35 insertions(+), 21 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> > index 11de5b58995fb..4aa90a3fce99c 100644
> > --- a/tools/testing/selftests/kvm/demand_paging_test.c
> > +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> > @@ -32,6 +32,8 @@
> >  /* Default guest test virtual memory offset */
> >  #define DEFAULT_GUEST_TEST_MEM               0xc0000000
> >
> > +#define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */
> > +
> >  /*
> >   * Guest/Host shared variables. Ensure addr_gva2hva() and/or
> >   * sync_global_to/from_guest() are used when accessing from
> > @@ -264,11 +266,10 @@ static int setup_demand_paging(struct kvm_vm *vm,
> >       return 0;
> >  }
> >
> > -#define GUEST_MEM_SHIFT 30 /* 1G */
> >  #define PAGE_SHIFT_4K  12
> >
> >  static void run_test(enum vm_guest_mode mode, bool use_uffd,
> > -                  useconds_t uffd_delay)
> > +                  useconds_t uffd_delay, uint64_t guest_memory_bytes)
> >  {
> >       pthread_t vcpu_thread;
> >       pthread_t uffd_handler_thread;
> > @@ -276,33 +277,40 @@ static void run_test(enum vm_guest_mode mode, bool use_uffd,
> >       int r;
> >
> >       /*
> > -      * We reserve page table for 2 times of extra dirty mem which
> > -      * will definitely cover the original (1G+) test range.  Here
> > -      * we do the calculation with 4K page size which is the
> > -      * smallest so the page number will be enough for all archs
> > -      * (e.g., 64K page size guest will need even less memory for
> > -      * page tables).
> > +      * We reserve page table for twice the ammount of memory we intend
> > +      * to use in the test region for demand paging. Here we do the
> > +      * calculation with 4K page size which is the smallest so the page
> > +      * number will be enough for all archs. (e.g., 64K page size guest
> > +      * will need even less memory for page tables).
> >        */
> >       vm = create_vm(mode, VCPU_ID,
> > -                    2ul << (GUEST_MEM_SHIFT - PAGE_SHIFT_4K),
> > +                    (2 * guest_memory_bytes) >> PAGE_SHIFT_4K,
> >                      guest_code);
> >
> >       guest_page_size = vm_get_page_size(vm);
> > -     /*
> > -      * A little more than 1G of guest page sized pages.  Cover the
> > -      * case where the size is not aligned to 64 pages.
> > -      */
> > -     guest_num_pages = (1ul << (GUEST_MEM_SHIFT -
> > -                                vm_get_page_shift(vm))) + 16;
> > +
> > +     TEST_ASSERT(guest_memory_bytes % guest_page_size == 0,
> > +                 "Guest memory size is not guest page size aligned.");
> > +
> > +     guest_num_pages = guest_memory_bytes / guest_page_size;
> > +
> >  #ifdef __s390x__
> >       /* Round up to multiple of 1M (segment size) */
> >       guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL;
> >  #endif
> > +     /*
> > +      * If there should be more memory in the guest test region than there
> > +      * can be pages in the guest, it will definitely cause problems.
> > +      */
> > +     TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
> > +                 "Requested more guest memory than address space allows.\n"
> > +                 "    guest pages: %lx max gfn: %lx\n",
> > +                 guest_num_pages, vm_get_max_gfn(vm));
> >
> >       host_page_size = getpagesize();
> > -     host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
> > -                      !!((guest_num_pages * guest_page_size) %
> > -                         host_page_size);
> > +     TEST_ASSERT(guest_memory_bytes % host_page_size == 0,
> > +                 "Guest memory size is not host page size aligned.");
> > +     host_num_pages = guest_memory_bytes / host_page_size;
> >
> >       guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
> >                             guest_page_size;
> > @@ -381,7 +389,8 @@ static void help(char *name)
> >       int i;
> >
> >       puts("");
> > -     printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n", name);
> > +     printf("usage: %s [-h] [-m mode] [-u] [-d uffd_delay_usec]\n"
> > +            "          [-b bytes test memory]\n", name);
> >       printf(" -m: specify the guest mode ID to test\n"
> >              "     (default: test all supported modes)\n"
> >              "     This option may be used multiple times.\n"
> > @@ -395,6 +404,8 @@ static void help(char *name)
> >       printf(" -d: add a delay in usec to the User Fault\n"
> >              "     FD handler to simulate demand paging\n"
> >              "     overheads. Ignored without -u.\n");
> > +     printf(" -b: specify the number of bytes of memory which should be\n"
> > +            "     allocated to the guest.\n");
>
> Can we input in megabytes instead? And also it might be nice to output the
> default size here.

I added a little function to parse size arguments so users can specify
2M or 7G. I also changed this to print the default value.
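
Roughly along these lines (a sketch with a hypothetical name; the helper
that actually lands may differ):

 #include <stdint.h>
 #include <stdlib.h>

 /* Parse "1073741824", "2M", "7G", ... into a byte count. */
 static uint64_t parse_size(const char *arg)
 {
     char *end;
     uint64_t size = strtoull(arg, &end, 0);

     switch (*end) {
     case 'G':
     case 'g':
         size <<= 30;
         break;
     case 'M':
     case 'm':
         size <<= 20;
         break;
     case 'K':
     case 'k':
         size <<= 10;
         break;
     }
     return size;
 }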

>
> >       puts("");
> >       exit(0);
> >  }
> > @@ -402,6 +413,7 @@ static void help(char *name)
> >  int main(int argc, char *argv[])
> >  {
> >       bool mode_selected = false;
> > +     uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE;
> >       unsigned int mode;
> >       int opt, i;
> >       bool use_uffd = false;
> > @@ -414,7 +426,7 @@ int main(int argc, char *argv[])
> >       vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true);
> >  #endif
> >
> > -     while ((opt = getopt(argc, argv, "hm:ud:")) != -1) {
> > +     while ((opt = getopt(argc, argv, "hm:ud:b:")) != -1) {
> >               switch (opt) {
> >               case 'm':
> >                       if (!mode_selected) {
> > @@ -435,6 +447,8 @@ int main(int argc, char *argv[])
> >                       TEST_ASSERT(uffd_delay >= 0,
> >                                   "A negative UFFD delay is not supported.");
> >                       break;
> > +             case 'b':
> > +                     guest_memory_bytes = strtoull(optarg, NULL, 0);
>
> Missing break. So it doesn't look like this was tested.

Woops, you're right, I must not have tested this commit. The break is
added in commit 6/8 and I only tested at the end of the series.
I'll have the break in this commit in the next version of this series.
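
For reference, the case as it should read in this commit once the missing
break is restored (the size-suffix parsing mentioned above would later
replace the bare strtoull()):

             case 'b':
                     guest_memory_bytes = strtoull(optarg, NULL, 0);
                     break;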

>
> >               case 'h':
> >               default:
> >                       help(argv[0]);
> > @@ -448,7 +462,7 @@ int main(int argc, char *argv[])
> >               TEST_ASSERT(vm_guest_mode_params[i].supported,
> >                           "Guest mode ID %d (%s) not supported.",
> >                           i, vm_guest_mode_string(i));
> > -             run_test(i, use_uffd, uffd_delay);
> > +             run_test(i, use_uffd, uffd_delay, guest_memory_bytes);
> >       }
> >
> >       return 0;
> > --
> > 2.24.1.735.g03f4e72817-goog
> >
>
> Thanks,
> drew
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create
  2020-01-07 15:42   ` Andrew Jones
@ 2020-01-07 21:20     ` Ben Gardon
  2020-01-08 14:07       ` Andrew Jones
  0 siblings, 1 reply; 26+ messages in thread
From: Ben Gardon @ 2020-01-07 21:20 UTC (permalink / raw)
  To: Andrew Jones
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Peter Xu

Would it be viable to allocate at 4G by default and then add another
interface for allocations at low memory addresses? For most tests, I
don't think there's any value to having the backing paddrs below 3G.
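
For concreteness, a purely illustrative sketch of the kind of extra
interface Drew suggests below (hypothetical name; the 3G/4G constants come
from patch 8/8), not something that exists today:

 #define KVM_INTERNAL_MEMSLOTS_START_PADDR (3UL << 30)
 #define KVM_INTERNAL_MEMSLOTS_END_PADDR   (4UL << 30)

 /* Create a VM with memslot 0 placed at a caller-chosen paddr at/above 4G. */
 struct kvm_vm *_vm_create_at(enum vm_guest_mode mode, uint64_t guest_paddr,
                              uint64_t phy_pages, int perm)
 {
     /* Pass phy_pages == 0 so _vm_create() does not add a memslot itself. */
     struct kvm_vm *vm = _vm_create(mode, 0, perm);

     TEST_ASSERT(guest_paddr >= KVM_INTERNAL_MEMSLOTS_END_PADDR,
                 "Caller-placed memslot 0 must start at or above 4G to avoid "
                 "KVM's internal memslots between 3G and 4G");

     if (phy_pages)
         vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                                     guest_paddr, 0, phy_pages, 0);
     return vm;
 }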

On Tue, Jan 7, 2020 at 7:42 AM Andrew Jones <drjones@redhat.com> wrote:
>
> On Mon, Dec 16, 2019 at 01:39:01PM -0800, Ben Gardon wrote:
> > KVM creates internal memslots between 3 and 4 GiB paddrs on the first
> > vCPU creation. If memslot 0 is large enough it collides with these
> > memslots and causes vCPU creation to fail. When requesting more than 3G,
> > start memslot 0 at 4G in _vm_create.
> >
> > Signed-off-by: Ben Gardon <bgardon@google.com>
> > ---
> >  tools/testing/selftests/kvm/lib/kvm_util.c | 15 +++++++++++----
> >  1 file changed, 11 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > index 41cf45416060f..886d58e6cac39 100644
> > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > @@ -113,6 +113,8 @@ const char * const vm_guest_mode_string[] = {
> >  _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
> >              "Missing new mode strings?");
> >
> > +#define KVM_INTERNAL_MEMSLOTS_START_PADDR (3UL << 30)
> > +#define KVM_INTERNAL_MEMSLOTS_END_PADDR (4UL << 30)
> >  /*
> >   * VM Create
> >   *
> > @@ -128,13 +130,16 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
> >   *
> >   * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K).
> >   * When phy_pages is non-zero, a memory region of phy_pages physical pages
> > - * is created and mapped starting at guest physical address 0.  The file
> > - * descriptor to control the created VM is created with the permissions
> > - * given by perm (e.g. O_RDWR).
> > + * is created. If phy_pages is less than 3G, it is mapped starting at guest
> > + * physical address 0. If phy_pages is greater than 3G it is mapped starting
> > + * 4G into the guest physical address space to avoid KVM internal memslots
> > + * which map the region between 3G and 4G. The file descriptor to control the
> > + * created VM is created with the permissions given by perm (e.g. O_RDWR).
> >   */
> >  struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
> >  {
> >       struct kvm_vm *vm;
> > +     uint64_t guest_paddr = 0;
> >
> >       DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
> >
> > @@ -227,9 +232,11 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
> >
> >       /* Allocate and setup memory for guest. */
> >       vm->vpages_mapped = sparsebit_alloc();
> > +     if (guest_paddr + phy_pages > KVM_INTERNAL_MEMSLOTS_START_PADDR)
> > +             guest_paddr = KVM_INTERNAL_MEMSLOTS_END_PADDR;
> >       if (phy_pages != 0)
> >               vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
> > -                                         0, 0, phy_pages, 0);
> > +                                         guest_paddr, 0, phy_pages, 0);
> >
> >       return vm;
> >  }
> > --
> > 2.24.1.735.g03f4e72817-goog
> >
>
> I feel like this function is becoming too magic and it'll be more
> complicated for tests that need to add additional memory regions
> to know what physical addresses are available. Maybe we should assert
> if we can't allocate more than 3G at offset zero and also provide
> another interface for allocating at an offset input by the user,
> as long as the offset is 4G or above (asserting when it isn't)?
>
> Thanks,
> drew
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/8] KVM: selftests: Create a demand paging test
  2020-01-07 18:41       ` Ben Gardon
@ 2020-01-08 13:45         ` Andrew Jones
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2020-01-08 13:45 UTC (permalink / raw)
  To: Ben Gardon
  Cc: Peter Xu, linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews

On Tue, Jan 07, 2020 at 10:41:55AM -0800, Ben Gardon wrote:
> I'll try to implement Drew's suggestion re: syncing global variables
> and then looking up CPU ID. If I can do that I'll upload another patch
> set for s390, aarch64, and x86. If I can't I'll move this test to the
> x86 subdirectory.
> 
> I apologize for not responding to the comments on the previous version
> of this patch set. I'm still learning the mailing list etiquette. In
> the future is it preferable that I reply to those comments when I
> upload a new patch set addressing them, or should I add a note in the
> new patch emails about the comments I addressed in that update?

It's typically enough to just create a changelog in the cover letter.
E.g.

v3:
 - Added ...
 - Dropped ...
 - Fixed ...
 - Picked up r-b's

v2:
 - Added ...
 - Dropped ...
 - Fixed ...
 - Picked up r-b's

> 
> I don't have any aarch64 or s390 hardware handy to test on so I'll try
> to move support for those architectures to separate commits at the end
> of the series, and mark them untested.

I'll test on aarch64, and I can also provide fixes if necessary.

Thanks,
drew

> 
> Thank you for your quick responses!
> 
> On Tue, Jan 7, 2020 at 6:56 AM Andrew Jones <drjones@redhat.com> wrote:
> >
> > On Tue, Jan 07, 2020 at 09:33:34AM -0500, Peter Xu wrote:
> > > On Mon, Dec 16, 2019 at 01:38:54PM -0800, Ben Gardon wrote:
> > > > While userfaultfd, KVM's demand paging implementation, is not specific
> > > > to KVM, having a benchmark for its performance will be useful for
> > > > guiding performance improvements to KVM. As a first step towards creating
> > > > a userfaultfd demand paging test, create a simple memory access test,
> > > > based on dirty_log_test.
> > > >
> > > > Signed-off-by: Ben Gardon <bgardon@google.com>
> > >
> > > It's fine to start with x86-only for this test, but imho it would be
> > > better to mention that in cover letter, or reply to reviewer comments
> > > on that you removed aarch64 from previous post.
> >
> > I'd also prefer that if it's x86-only that it be put in the x86_64
> > subdirectory and drop the arch #ifdefs. The question is why is it
> > x86-only for now though? Will it take a lot of work to port it to
> > other architectures? Or does it just need testing by someone with
> > the hardware?
> >
> > Thanks,
> > drew
> >
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using globals
  2020-01-07 18:26     ` Ben Gardon
@ 2020-01-08 13:59       ` Andrew Jones
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2020-01-08 13:59 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Peter Xu

On Tue, Jan 07, 2020 at 10:26:10AM -0800, Ben Gardon wrote:
> On Tue, Jan 7, 2020 at 7:23 AM Andrew Jones <drjones@redhat.com> wrote:
> >
> > On Mon, Dec 16, 2019 at 01:38:58PM -0800, Ben Gardon wrote:
> > > In preparation for supporting multiple vCPUs in the demand paging test,
> > > pass arguments to the vCPU instead of syncing globals to it.
> >
> > This will only work if we don't spill parameters onto the stack and all
> > data we want to pass fit in registers.
> 
> That's a great point. I'll see about using globals and deriving the
> cpu ID to look up args. In your pseudocode below I see you use
> arch_get_cpu_id, but I don't believe this function exists in selftests
> and I don't have the knowledge off the top of my head to implement it
> for s390 and aarch64. Do you have any pointers for implementing such a
> function?

Yeah, I never posted the patches that I used this approach on. For aarch64
my "arch_get_cpu_id", which was actually just open-coded in guest_code,
was something similar to this

 /* We only look at the first two affinity levels for now. */
 int arch_get_cpu_id(void)
 {
     uint64_t mpidr_el1, aff1, aff0;
     asm volatile("mrs %0, mpidr_el1" : "=r" (mpidr_el1));
     aff0 = mpidr_el1 & 0xf;
     aff1 = (mpidr_el1 >> 8) & 0xff;
     return aff1 * 16 + aff0;
 }
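
A hypothetical x86 counterpart (not from this thread, so treat it as an
untested sketch) could read the initial APIC ID from CPUID leaf 1, which
KVM initializes from the vCPU id:

 static int arch_get_cpu_id(void)
 {
     uint32_t eax = 1, ebx;

     asm volatile("cpuid"
                  : "+a" (eax), "=b" (ebx)
                  : : "ecx", "edx");
     return ebx >> 24;   /* CPUID.01H:EBX[31:24], the initial APIC ID */
 }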

Thanks,
drew


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create
  2020-01-07 21:20     ` Ben Gardon
@ 2020-01-08 14:07       ` Andrew Jones
  0 siblings, 0 replies; 26+ messages in thread
From: Andrew Jones @ 2020-01-08 14:07 UTC (permalink / raw)
  To: Ben Gardon
  Cc: linux-kernel, kvm, linux-kselftest, Paolo Bonzini,
	Cannon Matthews, Peter Xu

On Tue, Jan 07, 2020 at 01:20:53PM -0800, Ben Gardon wrote:
> Would it be viable to allocate at 4G by default and then add another
> interface for allocations at low memory addresses? For most tests, I
> don't think there's any value to having the backing paddrs below 3G.

Please don't top post. Replies should go under the comments and questions
in order to more easily read the thread.

Anyway, this sounds reasonable to me, but we'll need to test all tests
on all architectures, as there could be some assumptions broken with
a change like that.

Thanks,
drew

> 
> On Tue, Jan 7, 2020 at 7:42 AM Andrew Jones <drjones@redhat.com> wrote:
> >
> > On Mon, Dec 16, 2019 at 01:39:01PM -0800, Ben Gardon wrote:
> > > KVM creates internal memslots between 3 and 4 GiB paddrs on the first
> > > vCPU creation. If memslot 0 is large enough it collides with these
> > > memslots and causes vCPU creation to fail. When requesting more than 3G,
> > > start memslot 0 at 4G in _vm_create.
> > >
> > > Signed-off-by: Ben Gardon <bgardon@google.com>
> > > ---
> > >  tools/testing/selftests/kvm/lib/kvm_util.c | 15 +++++++++++----
> > >  1 file changed, 11 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > > index 41cf45416060f..886d58e6cac39 100644
> > > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > > @@ -113,6 +113,8 @@ const char * const vm_guest_mode_string[] = {
> > >  _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
> > >              "Missing new mode strings?");
> > >
> > > +#define KVM_INTERNAL_MEMSLOTS_START_PADDR (3UL << 30)
> > > +#define KVM_INTERNAL_MEMSLOTS_END_PADDR (4UL << 30)
> > >  /*
> > >   * VM Create
> > >   *
> > > @@ -128,13 +130,16 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
> > >   *
> > >   * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K).
> > >   * When phy_pages is non-zero, a memory region of phy_pages physical pages
> > > - * is created and mapped starting at guest physical address 0.  The file
> > > - * descriptor to control the created VM is created with the permissions
> > > - * given by perm (e.g. O_RDWR).
> > > + * is created. If phy_pages is less than 3G, it is mapped starting at guest
> > > + * physical address 0. If phy_pages is greater than 3G it is mapped starting
> > > + * 4G into the guest physical address space to avoid KVM internal memslots
> > > + * which map the region between 3G and 4G. The file descriptor to control the
> > > + * created VM is created with the permissions given by perm (e.g. O_RDWR).
> > >   */
> > >  struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
> > >  {
> > >       struct kvm_vm *vm;
> > > +     uint64_t guest_paddr = 0;
> > >
> > >       DEBUG("Testing guest mode: %s\n", vm_guest_mode_string(mode));
> > >
> > > @@ -227,9 +232,11 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
> > >
> > >       /* Allocate and setup memory for guest. */
> > >       vm->vpages_mapped = sparsebit_alloc();
> > > +     if (guest_paddr + phy_pages > KVM_INTERNAL_MEMSLOTS_START_PADDR)
> > > +             guest_paddr = KVM_INTERNAL_MEMSLOTS_END_PADDR;
> > >       if (phy_pages != 0)
> > >               vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
> > > -                                         0, 0, phy_pages, 0);
> > > +                                         guest_paddr, 0, phy_pages, 0);
> > >
> > >       return vm;
> > >  }
> > > --
> > > 2.24.1.735.g03f4e72817-goog
> > >
> >
> > I feel like this function is becoming too magic and it'll be more
> > complicated for tests that need to add additional memory regions
> > to know what physical addresses are available. Maybe we should assert
> > if we can't allocate more than 3G at offset zero and also provide
> > another interface for allocating at an offset input by the user,
> > as long as the offset is 4G or above (asserting when it isn't)?
> >
> > Thanks,
> > drew
> >
> 


^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2020-01-08 14:07 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-16 21:38 [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
2019-12-16 21:38 ` [PATCH v3 1/8] KVM: selftests: Create a " Ben Gardon
2020-01-07 14:33   ` Peter Xu
2020-01-07 14:56     ` Andrew Jones
2020-01-07 18:41       ` Ben Gardon
2020-01-08 13:45         ` Andrew Jones
2019-12-16 21:38 ` [PATCH v3 2/8] KVM: selftests: Add demand paging content to the " Ben Gardon
2020-01-07 16:00   ` Peter Xu
2020-01-07 21:13     ` Ben Gardon
2019-12-16 21:38 ` [PATCH v3 3/8] KVM: selftests: Add configurable demand paging delay Ben Gardon
2020-01-07 16:04   ` Peter Xu
2019-12-16 21:38 ` [PATCH v3 4/8] KVM: selftests: Add memory size parameter to the demand paging test Ben Gardon
2020-01-07 15:02   ` Andrew Jones
2020-01-07 21:18     ` Ben Gardon
2019-12-16 21:38 ` [PATCH v3 5/8] KVM: selftests: Pass args to vCPU instead of using globals Ben Gardon
2020-01-07 15:23   ` Andrew Jones
2020-01-07 18:26     ` Ben Gardon
2020-01-08 13:59       ` Andrew Jones
2019-12-16 21:38 ` [PATCH v3 6/8] KVM: selftests: Support multiple vCPUs in demand paging test Ben Gardon
2020-01-07 15:27   ` Andrew Jones
2019-12-16 21:39 ` [PATCH v3 7/8] KVM: selftests: Time guest demand paging Ben Gardon
2019-12-16 21:39 ` [PATCH v3 8/8] KVM: selftests: Move large memslots above KVM internal memslots in _vm_create Ben Gardon
2020-01-07 15:42   ` Andrew Jones
2020-01-07 21:20     ` Ben Gardon
2020-01-08 14:07       ` Andrew Jones
2020-01-06 22:46 ` [PATCH v3 0/8] Create a userfaultfd demand paging test Ben Gardon
