linux-kernel.vger.kernel.org archive mirror
* [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory
@ 2022-05-11  0:08 Vishal Annapurve
  2022-05-11  0:08 ` [RFC V2 PATCH 1/8] selftests: kvm: Fix inline assembly for hypercall Vishal Annapurve
                   ` (10 more replies)
  0 siblings, 11 replies; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-11  0:08 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Vishal Annapurve

This v2 series implements selftests targeting the feature floated by Chao
via:
https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/

The changes below aim to test the fd-based approach for guest private
memory in the context of normal (non-confidential) VMs executing on
non-confidential platforms.

priv_memfd_test.c adds a suite of selftests that access private memory
from the guest via private/shared accesses and check whether the contents
can be leaked to, or accessed by, the VMM via the shared memory view.

Updates in V2:
1) Tests are added to exercise implicit/explicit memory conversion paths.
2) Test is added to exercise UPM feature without double memory allocation.

This series depends on the following patches:
1) V5 series patches from Chao mentioned above.
2) https://github.com/vishals4gh/linux/commit/b9adedf777ad84af39042e9c19899600a4add68a
  - Fixes host kernel crash with current implementation
3) https://github.com/vishals4gh/linux/commit/0577e351ee36d52c1f6cdcb1b8de7aa6b5f760fe
  - Confidential platforms, along with a confidentiality-aware software
    stack, support a notion of private/shared accesses from confidential
    VMs. Generally, a bit in the GPA conveys whether an access is shared
    or private.

    Non-confidential platforms don't have a notion of private or shared accesses
    from the guest VMs. To support this notion, KVM_HC_MAP_GPA_RANGE is modified
    to allow marking an access from a VM within a GPA range as always shared or
    private. There is an ongoing discussion about adding support for
    software-only confidential VMs, which should replace this patch.
4) https://github.com/vishals4gh/linux/commit/8d46aea9a7d72e4b1b998066ce0dde085fb963a7
  - Temporary placeholder to allow testing the memory conversion paths
    until the memory conversion exit error code is finalized.
5) https://github.com/vishals4gh/linux/commit/4c36706477c62d9416d635fa6ac4ef6484014dfc
  - Fixes GFN calculation during memory conversion path.

Github link for the patches posted as part of this series:
https://github.com/vishals4gh/linux/commits/priv_memfd_selftests_rfc_v2

Austin Diviness (1):
  selftests: kvm: Add hugepage support to priv_memfd_test suite.

Vishal Annapurve (7):
  selftests: kvm: Fix inline assembly for hypercall
  selftests: kvm: Add a basic selftest to test private memory
  selftests: kvm: priv_memfd_test: Add support for memory conversion
  selftests: kvm: priv_memfd_test: Add shared access test
  selftests: kvm: Add implicit memory conversion tests
  selftests: kvm: Add KVM_HC_MAP_GPA_RANGE hypercall test
  selftests: kvm: priv_memfd: Add test avoiding double allocation

 tools/testing/selftests/kvm/Makefile          |    1 +
 .../selftests/kvm/lib/x86_64/processor.c      |    2 +-
 tools/testing/selftests/kvm/priv_memfd_test.c | 1359 +++++++++++++++++
 3 files changed, 1361 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/kvm/priv_memfd_test.c

-- 
2.36.0.512.ge40c2bad7a-goog


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [RFC V2 PATCH 1/8] selftests: kvm: Fix inline assembly for hypercall
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
@ 2022-05-11  0:08 ` Vishal Annapurve
  2022-05-12 17:31   ` Shuah Khan
  2022-05-11  0:08 ` [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory Vishal Annapurve
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-11  0:08 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Vishal Annapurve

Fix the hypercall inline assembly to explicitly load the
hypercall number into eax, so that the implementation keeps
working even when the compiler inlines the function.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..4d88e1a553bf 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1461,7 +1461,7 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
 
 	asm volatile("vmcall"
 		     : "=a"(r)
-		     : "b"(a0), "c"(a1), "d"(a2), "S"(a3));
+		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
 	return r;
 }
 
-- 
2.36.0.550.gb090851708-goog



* [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
  2022-05-11  0:08 ` [RFC V2 PATCH 1/8] selftests: kvm: Fix inline assembly for hypercall Vishal Annapurve
@ 2022-05-11  0:08 ` Vishal Annapurve
  2022-05-12 17:29   ` Shuah Khan
  2022-07-20 23:03   ` Sean Christopherson
  2022-05-11  0:08 ` [RFC V2 PATCH 3/8] selftests: kvm: priv_memfd_test: Add support for memory conversion Vishal Annapurve
                   ` (8 subsequent siblings)
  10 siblings, 2 replies; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-11  0:08 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Vishal Annapurve

Add a KVM selftest that accesses private memory privately
from the guest, to verify that memory updates from the guest
and the userspace VMM don't affect each other.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 tools/testing/selftests/kvm/priv_memfd_test.c | 283 ++++++++++++++++++
 2 files changed, 284 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/priv_memfd_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 21c2dbd21a81..f2f9a8546c66 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -97,6 +97,7 @@ TEST_GEN_PROGS_x86_64 += max_guest_memory_test
 TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test
 TEST_GEN_PROGS_x86_64 += memslot_perf_test
 TEST_GEN_PROGS_x86_64 += rseq_test
+TEST_GEN_PROGS_x86_64 += priv_memfd_test
 TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
 TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
new file mode 100644
index 000000000000..bbb58c62e186
--- /dev/null
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -0,0 +1,283 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <sched.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include <linux/compiler.h>
+#include <linux/kernel.h>
+#include <linux/kvm_para.h>
+#include <linux/memfd.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <processor.h>
+
+#define TEST_MEM_GPA		0xb0000000
+#define TEST_MEM_SIZE		0x2000
+#define TEST_MEM_END		(TEST_MEM_GPA + TEST_MEM_SIZE)
+#define TEST_MEM_DATA_PAT1	0x6666666666666666
+#define TEST_MEM_DATA_PAT2	0x9999999999999999
+#define TEST_MEM_DATA_PAT3	0x3333333333333333
+#define TEST_MEM_DATA_PAT4	0xaaaaaaaaaaaaaaaa
+
+enum mem_op {
+	SET_PAT,
+	VERIFY_PAT
+};
+
+#define TEST_MEM_SLOT		10
+
+#define VCPU_ID			0
+
+#define VM_STAGE_PROCESSED(x)	pr_info("Processed stage %s\n", #x)
+
+typedef bool (*vm_stage_handler_fn)(struct kvm_vm *,
+				void *, uint64_t);
+typedef void (*guest_code_fn)(void);
+struct test_run_helper {
+	char *test_desc;
+	vm_stage_handler_fn vmst_handler;
+	guest_code_fn guest_fn;
+	void *shared_mem;
+	int priv_memfd;
+};
+
+/* Guest code in selftests is loaded to guest memory using kvm_vm_elf_load
+ * which doesn't handle global offset table updates. Calling standard libc
+ * functions would normally result in referring to the global offset table.
+ * Adding O1 here seems to prevent the compiler from replacing the memory
+ * operations with standard libc functions such as memset.
+ */
+static bool __attribute__((optimize("O1"))) do_mem_op(enum mem_op op,
+		void *mem, uint64_t pat, uint32_t size)
+{
+	uint64_t *buf = (uint64_t *)mem;
+	uint32_t chunk_size = sizeof(pat);
+	uint64_t mem_addr = (uint64_t)mem;
+
+	if (((mem_addr % chunk_size) != 0) || ((size % chunk_size) != 0))
+		return false;
+
+	for (uint32_t i = 0; i < (size / chunk_size); i++) {
+		if (op == SET_PAT)
+			buf[i] = pat;
+		if (op == VERIFY_PAT) {
+			if (buf[i] != pat)
+				return false;
+		}
+	}
+
+	return true;
+}
+
+/* Test to verify guest private accesses on private memory with following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with known pattern and continues guest
+ *    execution.
+ * 3) Guest writes a different pattern on the private memory and signals VMM
+ *    that it has updated private memory.
+ * 4) VMM verifies its shared memory contents to be same as the data populated
+ *    in step 2 and continues guest execution.
+ * 5) Guest verifies its private memory contents to be same as the data
+ *    populated in step 3 and marks the end of the guest execution.
+ */
+#define PMPAT_ID				0
+#define PMPAT_DESC				"PrivateMemoryPrivateAccessTest"
+
+/* Guest code execution stages for private mem access test */
+#define PMPAT_GUEST_STARTED			0ULL
+#define PMPAT_GUEST_PRIV_MEM_UPDATED		1ULL
+
+static bool pmpat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PMPAT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failure");
+		VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED);
+		break;
+	}
+	case PMPAT_GUEST_PRIV_MEM_UPDATED: {
+		/* verify host updated data is still intact */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void pmpat_guest_code(void)
+{
+	void *priv_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(PMPAT_GUEST_STARTED);
+
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2,
+			TEST_MEM_SIZE));
+	GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED);
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_DONE();
+}
+
+static struct test_run_helper priv_memfd_testsuite[] = {
+	[PMPAT_ID] = {
+		.test_desc = PMPAT_DESC,
+		.vmst_handler = pmpat_handle_vm_stage,
+		.guest_fn = pmpat_guest_code,
+	},
+};
+
+static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
+{
+	struct kvm_run *run;
+	struct ucall uc;
+	uint64_t cmd;
+
+	/*
+	 * Loop until the guest is done.
+	 */
+	run = vcpu_state(vm, VCPU_ID);
+
+	while (true) {
+		vcpu_run(vm, VCPU_ID);
+
+		if (run->exit_reason == KVM_EXIT_IO) {
+			cmd = get_ucall(vm, VCPU_ID, &uc);
+			if (cmd != UCALL_SYNC)
+				break;
+
+			if (!priv_memfd_testsuite[test_id].vmst_handler(
+				vm, &priv_memfd_testsuite[test_id], uc.args[1]))
+				break;
+
+			continue;
+		}
+
+		TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason);
+		break;
+	}
+
+	if (run->exit_reason == KVM_EXIT_IO && cmd == UCALL_ABORT)
+		TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc.args[0],
+			  __FILE__, uc.args[1], uc.args[2]);
+}
+
+static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot,
+				uint32_t size, uint64_t guest_addr,
+				uint32_t priv_fd, uint64_t priv_offset)
+{
+	struct kvm_userspace_memory_region_ext region_ext;
+	int ret;
+
+	region_ext.region.slot = slot;
+	region_ext.region.flags = KVM_MEM_PRIVATE;
+	region_ext.region.guest_phys_addr = guest_addr;
+	region_ext.region.memory_size = size;
+	region_ext.region.userspace_addr = (uintptr_t) mem;
+	region_ext.private_fd = priv_fd;
+	region_ext.private_offset = priv_offset;
+	ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION, &region_ext);
+	TEST_ASSERT(ret == 0, "Failed to register user region for gpa 0x%lx\n",
+		guest_addr);
+}
+
+/* Do private access to the guest's private memory */
+static void setup_and_execute_test(uint32_t test_id)
+{
+	struct kvm_vm *vm;
+	int priv_memfd;
+	int ret;
+	void *shared_mem;
+	struct kvm_enable_cap cap;
+
+	vm = vm_create_default(VCPU_ID, 0,
+				priv_memfd_testsuite[test_id].guest_fn);
+
+	/* Allocate shared memory */
+	shared_mem = mmap(NULL, TEST_MEM_SIZE,
+			PROT_READ | PROT_WRITE,
+			MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
+	TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
+
+	/* Allocate private memory */
+	priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
+	TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd");
+	ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE);
+	TEST_ASSERT(ret != -1, "fallocate failed");
+
+	priv_memory_region_add(vm, shared_mem,
+				TEST_MEM_SLOT, TEST_MEM_SIZE,
+				TEST_MEM_GPA, priv_memfd, 0);
+
+	pr_info("Mapping test memory pages 0x%x page_size 0x%x\n",
+					TEST_MEM_SIZE/vm_get_page_size(vm),
+					vm_get_page_size(vm));
+	virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
+					(TEST_MEM_SIZE/vm_get_page_size(vm)));
+
+	/* Enable exit on KVM_HC_MAP_GPA_RANGE */
+	pr_info("Enabling exit on map_gpa_range hypercall\n");
+	ret = ioctl(vm_get_fd(vm), KVM_CHECK_EXTENSION, KVM_CAP_EXIT_HYPERCALL);
+	TEST_ASSERT(ret & (1 << KVM_HC_MAP_GPA_RANGE),
+				"VM exit on MAP_GPA_RANGE HC not supported");
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	ret = ioctl(vm_get_fd(vm), KVM_ENABLE_CAP, &cap);
+	TEST_ASSERT(ret == 0,
+		"Failed to enable exit on MAP_GPA_RANGE hypercall\n");
+
+	priv_memfd_testsuite[test_id].shared_mem = shared_mem;
+	priv_memfd_testsuite[test_id].priv_memfd = priv_memfd;
+	vcpu_work(vm, test_id);
+
+	munmap(shared_mem, TEST_MEM_SIZE);
+	priv_memfd_testsuite[test_id].shared_mem = NULL;
+	close(priv_memfd);
+	priv_memfd_testsuite[test_id].priv_memfd = -1;
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) {
+		pr_info("=== Starting test %s... ===\n",
+				priv_memfd_testsuite[i].test_desc);
+		setup_and_execute_test(i);
+		pr_info("--- completed test %s ---\n\n",
+				priv_memfd_testsuite[i].test_desc);
+	}
+
+	return 0;
+}
-- 
2.36.0.550.gb090851708-goog



* [RFC V2 PATCH 3/8] selftests: kvm: priv_memfd_test: Add support for memory conversion
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
  2022-05-11  0:08 ` [RFC V2 PATCH 1/8] selftests: kvm: Fix inline assembly for hypercall Vishal Annapurve
  2022-05-11  0:08 ` [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory Vishal Annapurve
@ 2022-05-11  0:08 ` Vishal Annapurve
  2022-05-12 17:40   ` Shuah Khan
  2022-05-11  0:08 ` [RFC V2 PATCH 4/8] selftests: kvm: priv_memfd_test: Add shared access test Vishal Annapurve
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-11  0:08 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Vishal Annapurve

Add handling of explicit private/shared memory conversion via
KVM_HC_MAP_GPA_RANGE and of implicit memory conversion via
KVM_EXIT_MEMORY_ERROR.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index bbb58c62e186..55e24c893b07 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -155,6 +155,83 @@ static struct test_run_helper priv_memfd_testsuite[] = {
 	},
 };
 
+static void handle_vm_exit_hypercall(struct kvm_run *run,
+	uint32_t test_id)
+{
+	uint64_t gpa, npages, attrs;
+	int priv_memfd =
+		priv_memfd_testsuite[test_id].priv_memfd;
+	int ret;
+	int fallocate_mode;
+
+	if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE) {
+		TEST_FAIL("Unhandled Hypercall %lld\n",
+					run->hypercall.nr);
+	}
+
+	gpa = run->hypercall.args[0];
+	npages = run->hypercall.args[1];
+	attrs = run->hypercall.args[2];
+
+	if ((gpa < TEST_MEM_GPA) || ((gpa +
+		(npages << MIN_PAGE_SHIFT)) > TEST_MEM_END)) {
+		TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n",
+			gpa, npages);
+	}
+
+	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
+		fallocate_mode = 0;
+	else {
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
+			FALLOC_FL_KEEP_SIZE);
+	}
+	pr_info("Converting off 0x%lx pages 0x%lx to %s\n",
+		(gpa - TEST_MEM_GPA), npages,
+		fallocate_mode ?
+			"shared" : "private");
+	ret = fallocate(priv_memfd, fallocate_mode,
+		(gpa - TEST_MEM_GPA),
+		npages << MIN_PAGE_SHIFT);
+	TEST_ASSERT(ret != -1,
+		"fallocate failed in hc handling");
+	run->hypercall.ret = 0;
+}
+
+static void handle_vm_exit_memory_error(struct kvm_run *run,
+	uint32_t test_id)
+{
+	uint64_t gpa, size, flags;
+	int ret;
+	int priv_memfd =
+		priv_memfd_testsuite[test_id].priv_memfd;
+	int fallocate_mode;
+
+	gpa = run->memory.gpa;
+	size = run->memory.size;
+	flags = run->memory.flags;
+
+	if ((gpa < TEST_MEM_GPA) || ((gpa + size)
+					> TEST_MEM_END)) {
+		TEST_FAIL("Unhandled gpa 0x%lx size 0x%lx\n",
+			gpa, size);
+	}
+
+	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE)
+		fallocate_mode = 0;
+	else {
+		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
+				FALLOC_FL_KEEP_SIZE);
+	}
+	pr_info("Converting off 0x%lx size 0x%lx to %s\n",
+		(gpa - TEST_MEM_GPA), size,
+		fallocate_mode ?
+			"shared" : "private");
+	ret = fallocate(priv_memfd, fallocate_mode,
+		(gpa - TEST_MEM_GPA), size);
+	TEST_ASSERT(ret != -1,
+		"fallocate failed in memory error handling");
+}
+
 static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
 {
 	struct kvm_run *run;
@@ -181,6 +258,16 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
 			continue;
 		}
 
+		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
+			handle_vm_exit_hypercall(run, test_id);
+			continue;
+		}
+
+		if (run->exit_reason == KVM_EXIT_MEMORY_ERROR) {
+			handle_vm_exit_memory_error(run, test_id);
+			continue;
+		}
+
 		TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason);
 		break;
 	}
-- 
2.36.0.550.gb090851708-goog



* [RFC V2 PATCH 4/8] selftests: kvm: priv_memfd_test: Add shared access test
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
                   ` (2 preceding siblings ...)
  2022-05-11  0:08 ` [RFC V2 PATCH 3/8] selftests: kvm: priv_memfd_test: Add support for memory conversion Vishal Annapurve
@ 2022-05-11  0:08 ` Vishal Annapurve
  2022-05-12 17:45   ` Shuah Khan
  2022-05-11  0:08 ` [RFC V2 PATCH 5/8] selftests: kvm: Add implicit memory conversion tests Vishal Annapurve
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-11  0:08 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Vishal Annapurve

Add a test that accesses private memory in a shared fashion,
which should exercise the implicit memory conversion path
via KVM_EXIT_MEMORY_ERROR.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 69 +++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index 55e24c893b07..48bc4343e7b5 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -147,12 +147,81 @@ static void pmpat_guest_code(void)
 	GUEST_DONE();
 }
 
+/* Test to verify guest shared accesses on private memory with following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with known pattern and continues guest
+ *    execution.
+ * 3) Guest reads private gpa range in a shared fashion and verifies that it
+ *    reads what VMM has written in step 2.
+ * 4) Guest writes a different pattern on the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies shared memory contents to be same as the data populated
+ *    in step 4 and continues guest execution.
+ */
+#define PMSAT_ID				1
+#define PMSAT_DESC				"PrivateMemorySharedAccessTest"
+
+/* Guest code execution stages for private mem access test */
+#define PMSAT_GUEST_STARTED			0ULL
+#define PMSAT_GUEST_TEST_MEM_UPDATED		1ULL
+
+static bool pmsat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PMSAT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PMSAT_GUEST_STARTED);
+		break;
+	}
+	case PMSAT_GUEST_TEST_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PMSAT_GUEST_TEST_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void pmsat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+
+	GUEST_SYNC(PMSAT_GUEST_STARTED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(PMSAT_GUEST_TEST_MEM_UPDATED);
+
+	GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
 	[PMPAT_ID] = {
 		.test_desc = PMPAT_DESC,
 		.vmst_handler = pmpat_handle_vm_stage,
 		.guest_fn = pmpat_guest_code,
 	},
+	[PMSAT_ID] = {
+		.test_desc = PMSAT_DESC,
+		.vmst_handler = pmsat_handle_vm_stage,
+		.guest_fn = pmsat_guest_code,
+	},
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
-- 
2.36.0.550.gb090851708-goog



* [RFC V2 PATCH 5/8] selftests: kvm: Add implicit memory conversion tests
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
                   ` (3 preceding siblings ...)
  2022-05-11  0:08 ` [RFC V2 PATCH 4/8] selftests: kvm: priv_memfd_test: Add shared access test Vishal Annapurve
@ 2022-05-11  0:08 ` Vishal Annapurve
  2022-05-12 17:48   ` Shuah Khan
  2022-05-11  0:08 ` [RFC V2 PATCH 6/8] selftests: kvm: Add KVM_HC_MAP_GPA_RANGE hypercall test Vishal Annapurve
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-11  0:08 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Vishal Annapurve

Add tests to exercise the implicit memory conversion path.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 384 +++++++++++++++++-
 1 file changed, 383 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index 48bc4343e7b5..f6f6b064a101 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -211,6 +211,369 @@ static void pmsat_guest_code(void)
 	GUEST_DONE();
 }
 
+/* Test to verify guest shared accesses on shared memory with following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM deallocates the backing private memory and populates the shared memory
+ *    with known pattern and continues guest execution.
+ * 3) Guest reads shared gpa range in a shared fashion and verifies that it
+ *    reads what VMM has written in step 2.
+ * 4) Guest writes a different pattern on the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies shared memory contents to be same as the data populated
+ *    in step 4 and continues guest execution.
+ */
+#define SMSAT_ID				2
+#define SMSAT_DESC				"SharedMemorySharedAccessTest"
+
+#define SMSAT_GUEST_STARTED			0ULL
+#define SMSAT_GUEST_TEST_MEM_UPDATED		1ULL
+
+static bool smsat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+	int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+	switch (stage) {
+	case SMSAT_GUEST_STARTED: {
+		/* Remove the backing private memory storage */
+		int ret = fallocate(priv_memfd,
+				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+				0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in smsat handling");
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed"),
+		VM_STAGE_PROCESSED(SMSAT_GUEST_STARTED);
+		break;
+	}
+	case SMSAT_GUEST_TEST_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(SMSAT_GUEST_TEST_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void smsat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+
+	GUEST_SYNC(SMSAT_GUEST_STARTED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(SMSAT_GUEST_TEST_MEM_UPDATED);
+
+	GUEST_DONE();
+}
+
+/* Test to verify guest private accesses on shared memory with following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM deallocates the backing private memory and populates the shared memory
+ *    with known pattern and continues guest execution.
+ * 3) Guest writes gpa range via private access and signals VMM.
+ * 4) VMM verifies shared memory contents to be same as the data populated
+ *    in step 2 and continues guest execution.
+ * 5) Guest reads gpa range via private access and verifies that the contents
+ *    are same as written in step 3.
+ */
+#define SMPAT_ID				3
+#define SMPAT_DESC				"SharedMemoryPrivateAccessTest"
+
+#define SMPAT_GUEST_STARTED			0ULL
+#define SMPAT_GUEST_TEST_MEM_UPDATED		1ULL
+
+static bool smpat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+	int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+	switch (stage) {
+	case SMPAT_GUEST_STARTED: {
+		/* Remove the backing private memory storage */
+		int ret = fallocate(priv_memfd,
+				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+				0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in smpat handling");
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed"),
+		VM_STAGE_PROCESSED(SMPAT_GUEST_STARTED);
+		break;
+	}
+	case SMPAT_GUEST_TEST_MEM_UPDATED: {
+		/* verify data to be same as what vmm wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(SMPAT_GUEST_TEST_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void smpat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(SMPAT_GUEST_STARTED);
+
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(SMPAT_GUEST_TEST_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_DONE();
+}
+
+/* Test to verify guest shared and private accesses on memory with following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM populates the shared memory with a known pattern and continues guest
+ *    execution.
+ * 3) Guest writes the shared gpa range in a private fashion and signals VMM.
+ * 4) VMM verifies that shared memory still contains the pattern written in
+ *    step 2 and continues guest execution.
+ * 5) Guest verifies private memory contents to be same as the data written
+ *    in step 3 and signals VMM.
+ * 6) VMM removes the private memory backing, which should also clear out the
+ *    second stage mappings for the VM.
+ * 7) Guest does a shared write access on shared memory and signals VMM.
+ * 8) VMM reads the shared memory and verifies that the data is same as what
+ *    guest wrote in step 7 and continues guest execution.
+ * 9) Guest reads the shared memory and verifies that the data is same as
+ *    written in step 7.
+ */
+#define PSAT_ID			4
+#define PSAT_DESC		"PrivateSharedAccessTest"
+
+#define PSAT_GUEST_STARTED			0ULL
+#define PSAT_GUEST_PRIVATE_MEM_UPDATED		1ULL
+#define PSAT_GUEST_PRIVATE_MEM_VERIFIED		2ULL
+#define PSAT_GUEST_SHARED_MEM_UPDATED		3ULL
+
+static bool psat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+	int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+	switch (stage) {
+	case PSAT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PSAT_GUEST_STARTED);
+		break;
+	}
+	case PSAT_GUEST_PRIVATE_MEM_UPDATED: {
+		/* verify data to be same as what vmm wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_UPDATED);
+		break;
+	}
+	case PSAT_GUEST_PRIVATE_MEM_VERIFIED: {
+		/* Remove the backing private memory storage so that
+		 * subsequent accesses from guest cause a second stage
+		 * page fault
+		 */
+		int ret = fallocate(priv_memfd,
+				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+				0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in psat handling");
+		VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
+		break;
+	}
+	case PSAT_GUEST_SHARED_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSAT_GUEST_SHARED_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %lu\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void psat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(PSAT_GUEST_STARTED);
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
+
+	/* Mark no GPA range to be treated as accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0,
+		0, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(PSAT_GUEST_SHARED_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_DONE();
+}
+
+/* Test to verify guest shared and private accesses on memory with following
+ * steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM removes the private memory backing and populates the shared memory
+ *    with a known pattern and continues guest execution.
+ * 3) Guest reads the shared gpa range in a shared fashion and verifies that
+ *    it reads what VMM has written in step 2.
+ * 4) Guest writes a different pattern on the shared memory and signals VMM
+ *    that it has updated the shared memory.
+ * 5) VMM verifies shared memory contents to be same as the data populated
+ *    in step 4 and installs private memory backing again to allow guest
+ *    to do private access and invalidate second stage mappings.
+ * 6) Guest does a private write access on shared memory and signals VMM.
+ * 7) VMM reads the shared memory and verifies that the data is still same
+ *    as in step 4 and continues guest execution.
+ * 8) Guest reads the private memory and verifies that the data is same as
+ *    written in step 6.
+ */
+#define SPAT_ID					5
+#define SPAT_DESC				"SharedPrivateAccessTest"
+
+#define SPAT_GUEST_STARTED			0ULL
+#define SPAT_GUEST_SHARED_MEM_UPDATED		1ULL
+#define SPAT_GUEST_PRIVATE_MEM_UPDATED		2ULL
+
+static bool spat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+	int priv_memfd = ((struct test_run_helper *)test_info)->priv_memfd;
+
+	switch (stage) {
+	case SPAT_GUEST_STARTED: {
+		/* Remove the backing private memory storage so that
+		 * subsequent accesses from guest cause a second stage
+		 * page fault
+		 */
+		int ret = fallocate(priv_memfd,
+				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+				0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in spat handling");
+
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(SPAT_GUEST_STARTED);
+		break;
+	}
+	case SPAT_GUEST_SHARED_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		/* Allocate memory for private backing store */
+		int ret = fallocate(priv_memfd,
+				0, 0, TEST_MEM_SIZE);
+		TEST_ASSERT(ret != -1,
+			"fallocate failed in spat handling");
+		VM_STAGE_PROCESSED(SPAT_GUEST_SHARED_MEM_UPDATED);
+		break;
+	}
+	case SPAT_GUEST_PRIVATE_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(SPAT_GUEST_PRIVATE_MEM_UPDATED);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %lu\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void spat_guest_code(void)
+{
+	void *shared_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(SPAT_GUEST_STARTED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(SPAT_GUEST_SHARED_MEM_UPDATED);
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_SYNC(SPAT_GUEST_PRIVATE_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
 	[PMPAT_ID] = {
 		.test_desc = PMPAT_DESC,
@@ -222,6 +585,26 @@ static struct test_run_helper priv_memfd_testsuite[] = {
 		.vmst_handler = pmsat_handle_vm_stage,
 		.guest_fn = pmsat_guest_code,
 	},
+	[SMSAT_ID] = {
+		.test_desc = SMSAT_DESC,
+		.vmst_handler = smsat_handle_vm_stage,
+		.guest_fn = smsat_guest_code,
+	},
+	[SMPAT_ID] = {
+		.test_desc = SMPAT_DESC,
+		.vmst_handler = smpat_handle_vm_stage,
+		.guest_fn = smpat_guest_code,
+	},
+	[PSAT_ID] = {
+		.test_desc = PSAT_DESC,
+		.vmst_handler = psat_handle_vm_stage,
+		.guest_fn = psat_guest_code,
+	},
+	[SPAT_ID] = {
+		.test_desc = SPAT_DESC,
+		.vmst_handler = spat_handle_vm_stage,
+		.guest_fn = spat_guest_code,
+	},
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
@@ -365,7 +748,6 @@ static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot,
 		guest_addr);
 }
 
-/* Do private access to the guest's private memory */
 static void setup_and_execute_test(uint32_t test_id)
 {
 	struct kvm_vm *vm;
-- 
2.36.0.550.gb090851708-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [RFC V2 PATCH 6/8] selftests: kvm: Add KVM_HC_MAP_GPA_RANGE hypercall test
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
                   ` (4 preceding siblings ...)
  2022-05-11  0:08 ` [RFC V2 PATCH 5/8] selftests: kvm: Add implicit memory conversion tests Vishal Annapurve
@ 2022-05-11  0:08 ` Vishal Annapurve
  2022-05-12 17:50   ` Shuah Khan
  2022-05-11  0:08 ` [RFC V2 PATCH 7/8] selftests: kvm: Add hugepage support to priv_memfd_test suite Vishal Annapurve
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-11  0:08 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Vishal Annapurve

Add a test to exercise the explicit memory conversion path using the
KVM_HC_MAP_GPA_RANGE hypercall.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 148 ++++++++++++++++++
 1 file changed, 148 insertions(+)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index f6f6b064a101..c2ea8f67337c 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -574,6 +574,149 @@ static void spat_guest_code(void)
 	GUEST_DONE();
 }
 
+/* Test to verify guest private, shared, private accesses on memory with
+ * following steps:
+ * 1) Upon entry, guest signals VMM that it has started.
+ * 2) VMM initializes the shared memory with a known pattern and continues
+ *    guest execution.
+ * 3) Guest writes the private memory privately via a known pattern and
+ *    signals VMM.
+ * 4) VMM reads the shared memory and verifies that it's same as what was
+ *    written in step 2 and continues guest execution.
+ * 5) Guest reads the private memory privately and verifies that the contents
+ *    are same as written in step 3.
+ * 6) Guest invokes KVM_HC_MAP_GPA_RANGE to map the gpa range as shared
+ *    and marks the range to be accessed via shared access.
+ * 7) Guest does a shared access to shared memory and verifies that the
+ *    contents are same as written in step 2.
+ * 8) Guest writes a known pattern to test memory and signals VMM.
+ * 9) VMM verifies the memory contents to be same as written by guest in
+ *    step 8.
+ * 10) Guest invokes KVM_HC_MAP_GPA_RANGE to map the gpa range as private
+ *     and marks the range to be accessed via private access.
+ * 11) Guest writes a known pattern to the test memory and signals VMM.
+ * 12) VMM verifies the memory contents to be same as written by guest in
+ *     step 8 and continues guest execution.
+ * 13) Guest verifies the memory pattern to be same as written in step 11.
+ */
+#define PSPAHCT_ID		6
+#define PSPAHCT_DESC		"PrivateSharedPrivateAccessHyperCallTest"
+
+#define PSPAHCT_GUEST_STARTED				0ULL
+#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED		1ULL
+#define PSPAHCT_GUEST_SHARED_MEM_UPDATED		2ULL
+#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2		3ULL
+
+static bool pspahct_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PSPAHCT_GUEST_STARTED: {
+		/* Initialize the contents of shared memory */
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_STARTED);
+		break;
+	}
+	case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED);
+		break;
+	}
+	case PSPAHCT_GUEST_SHARED_MEM_UPDATED: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_SHARED_MEM_UPDATED);
+		break;
+	}
+	case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			"Shared memory view mismatch");
+		VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %lu\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void pspahct_guest_code(void)
+{
+	void *test_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	GUEST_SYNC(PSPAHCT_GUEST_STARTED);
+
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+		TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	/* Map the GPA range to be treated as shared */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	/* Mark no GPA range to be treated as accessed privately so that
+	 * the test memory is accessed via shared access
+	 */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+	GUEST_SYNC(PSPAHCT_GUEST_SHARED_MEM_UPDATED);
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+
+	/* Map the GPA range to be treated as private */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	/* Mark the GPA range to be treated as always accessed via private
+	 * access
+	 */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2);
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+	GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
 	[PMPAT_ID] = {
 		.test_desc = PMPAT_DESC,
@@ -605,6 +748,11 @@ static struct test_run_helper priv_memfd_testsuite[] = {
 		.vmst_handler = spat_handle_vm_stage,
 		.guest_fn = spat_guest_code,
 	},
+	[PSPAHCT_ID] = {
+		.test_desc = PSPAHCT_DESC,
+		.vmst_handler = pspahct_handle_vm_stage,
+		.guest_fn = pspahct_guest_code,
+	},
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
-- 
2.36.0.550.gb090851708-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [RFC V2 PATCH 7/8] selftests: kvm: Add hugepage support to priv_memfd_test suite.
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
                   ` (5 preceding siblings ...)
  2022-05-11  0:08 ` [RFC V2 PATCH 6/8] selftests: kvm: Add KVM_HC_MAP_GPA_RANGE hypercall test Vishal Annapurve
@ 2022-05-11  0:08 ` Vishal Annapurve
  2022-05-12 17:51   ` Shuah Khan
  2022-05-11  0:08 ` [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test avoiding double allocation Vishal Annapurve
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-11  0:08 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Vishal Annapurve

From: Austin Diviness <diviness@google.com>

Add the ability to run the priv_memfd_test suite across various page
sizes for shared/private memory. Shared and private memory can be
allocated with different sized pages. To verify that behavior does not
change with the page size, run the tests across the currently supported
permutations.

Add command line flags to adjust whether the tests should run with
hugepages backing the test memory.

Signed-off-by: Austin Diviness <diviness@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 369 ++++++++++++++----
 1 file changed, 294 insertions(+), 75 deletions(-)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index c2ea8f67337c..dbe6ead92ba7 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #define _GNU_SOURCE /* for program_invocation_short_name */
 #include <fcntl.h>
+#include <limits.h>
 #include <sched.h>
 #include <signal.h>
 #include <stdio.h>
@@ -17,9 +18,18 @@
 #include <kvm_util.h>
 #include <processor.h>
 
+#define BYTE_MASK 0xFF
+
+/* flags for mmap */
+#define MAP_HUGE_2MB    (21 << MAP_HUGE_SHIFT)
+#define MAP_HUGE_1GB    (30 << MAP_HUGE_SHIFT)
+
+/* page sizes */
+#define PAGE_SIZE_4KB ((size_t)0x1000)
+#define PAGE_SIZE_2MB (PAGE_SIZE_4KB * (size_t)512)
+#define PAGE_SIZE_1GB (PAGE_SIZE_2MB * (size_t)512)
+
 #define TEST_MEM_GPA		0xb0000000
-#define TEST_MEM_SIZE		0x2000
-#define TEST_MEM_END		(TEST_MEM_GPA + TEST_MEM_SIZE)
 #define TEST_MEM_DATA_PAT1	0x6666666666666666
 #define TEST_MEM_DATA_PAT2	0x9999999999999999
 #define TEST_MEM_DATA_PAT3	0x3333333333333333
@@ -34,8 +44,16 @@ enum mem_op {
 
 #define VCPU_ID			0
 
+/* address where guests can receive the mem size of the data
+ * allocated to them by the VMM */
+#define MEM_SIZE_MMIO_ADDRESS 0xa0000000
+
 #define VM_STAGE_PROCESSED(x)	pr_info("Processed stage %s\n", #x)
 
+/* global used for storing the current mem allocation size
+ * for the running test */
+static size_t test_mem_size;
+
 typedef bool (*vm_stage_handler_fn)(struct kvm_vm *,
 				void *, uint64_t);
 typedef void (*guest_code_fn)(void);
@@ -47,6 +65,36 @@ struct test_run_helper {
 	int priv_memfd;
 };
 
+enum page_size {
+	PAGE_4KB,
+	PAGE_2MB,
+	PAGE_1GB
+};
+
+struct page_combo {
+	enum page_size shared;
+	enum page_size private;
+};
+
+static char *page_size_to_str(enum page_size x)
+{
+	switch (x) {
+	case PAGE_4KB:
+		return "PAGE_4KB";
+	case PAGE_2MB:
+		return "PAGE_2MB";
+	case PAGE_1GB:
+		return "PAGE_1GB";
+	default:
+		return "UNKNOWN";
+	}
+}
+
+static uint64_t test_mem_end(const uint64_t start, const uint64_t size)
+{
+	return start + size;
+}
+
 /* Guest code in selftests is loaded to guest memory using kvm_vm_elf_load
  * which doesn't handle global offset table updates. Calling standard libc
  * functions would normally result in referring to the global offset table.
@@ -103,7 +151,7 @@ static bool pmpat_handle_vm_stage(struct kvm_vm *vm,
 	case PMPAT_GUEST_STARTED: {
 		/* Initialize the contents of shared memory */
 		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory update failure");
 		VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED);
 		break;
@@ -111,7 +159,7 @@ static bool pmpat_handle_vm_stage(struct kvm_vm *vm,
 	case PMPAT_GUEST_PRIV_MEM_UPDATED: {
 		/* verify host updated data is still intact */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED);
 		break;
@@ -131,18 +179,20 @@ static void pmpat_guest_code(void)
 
 	GUEST_SYNC(PMPAT_GUEST_STARTED);
 
+	const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS);
+
 	/* Mark the GPA range to be treated as always accessed privately */
 	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
-		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		mem_size >> MIN_PAGE_SHIFT,
 		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
 	GUEST_ASSERT_1(ret == 0, ret);
 
 	GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2,
-			TEST_MEM_SIZE));
+			mem_size));
 	GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED);
 
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 
 	GUEST_DONE();
 }
@@ -175,7 +225,7 @@ static bool pmsat_handle_vm_stage(struct kvm_vm *vm,
 	case PMSAT_GUEST_STARTED: {
 		/* Initialize the contents of shared memory */
 		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory update failed");
 		VM_STAGE_PROCESSED(PMSAT_GUEST_STARTED);
 		break;
@@ -183,7 +233,7 @@ static bool pmsat_handle_vm_stage(struct kvm_vm *vm,
 	case PMSAT_GUEST_TEST_MEM_UPDATED: {
 		/* verify data to be same as what guest wrote */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT2, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(PMSAT_GUEST_TEST_MEM_UPDATED);
 		break;
@@ -199,13 +249,14 @@ static bool pmsat_handle_vm_stage(struct kvm_vm *vm,
 static void pmsat_guest_code(void)
 {
 	void *shared_mem = (void *)TEST_MEM_GPA;
+	const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS);
 
 	GUEST_SYNC(PMSAT_GUEST_STARTED);
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT1, mem_size));
 
 	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 	GUEST_SYNC(PMSAT_GUEST_TEST_MEM_UPDATED);
 
 	GUEST_DONE();
@@ -240,12 +291,12 @@ static bool smsat_handle_vm_stage(struct kvm_vm *vm,
 		/* Remove the backing private memory storage */
 		int ret = fallocate(priv_memfd,
 				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
-				0, TEST_MEM_SIZE);
+				0, test_mem_size);
 		TEST_ASSERT(ret != -1,
 			"fallocate failed in smsat handling");
 		/* Initialize the contents of shared memory */
 		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory update failed");
 		VM_STAGE_PROCESSED(SMSAT_GUEST_STARTED);
 		break;
@@ -253,7 +304,7 @@ static bool smsat_handle_vm_stage(struct kvm_vm *vm,
 	case SMSAT_GUEST_TEST_MEM_UPDATED: {
 		/* verify data to be same as what guest wrote */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT2, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(SMSAT_GUEST_TEST_MEM_UPDATED);
 		break;
@@ -269,13 +320,14 @@ static bool smsat_handle_vm_stage(struct kvm_vm *vm,
 static void smsat_guest_code(void)
 {
 	void *shared_mem = (void *)TEST_MEM_GPA;
+	const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS);
 
 	GUEST_SYNC(SMSAT_GUEST_STARTED);
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT1, mem_size));
 
 	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 	GUEST_SYNC(SMSAT_GUEST_TEST_MEM_UPDATED);
 
 	GUEST_DONE();
@@ -309,12 +361,12 @@ static bool smpat_handle_vm_stage(struct kvm_vm *vm,
 		/* Remove the backing private memory storage */
 		int ret = fallocate(priv_memfd,
 				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
-				0, TEST_MEM_SIZE);
+				0, test_mem_size);
 		TEST_ASSERT(ret != -1,
 			"fallocate failed in smpat handling");
 		/* Initialize the contents of shared memory */
 		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory update failed");
 		VM_STAGE_PROCESSED(SMPAT_GUEST_STARTED);
 		break;
@@ -322,7 +374,7 @@ static bool smpat_handle_vm_stage(struct kvm_vm *vm,
 	case SMPAT_GUEST_TEST_MEM_UPDATED: {
 		/* verify data to be same as what vmm wrote earlier */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(SMPAT_GUEST_TEST_MEM_UPDATED);
 		break;
@@ -342,17 +394,19 @@ static void smpat_guest_code(void)
 
 	GUEST_SYNC(SMPAT_GUEST_STARTED);
 
+	const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS);
+
 	/* Mark the GPA range to be treated as always accessed privately */
 	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
-		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		mem_size >> MIN_PAGE_SHIFT,
 		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
 	GUEST_ASSERT_1(ret == 0, ret);
 
 	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 	GUEST_SYNC(SMPAT_GUEST_TEST_MEM_UPDATED);
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 
 	GUEST_DONE();
 }
@@ -394,7 +448,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm,
 	case PSAT_GUEST_STARTED: {
 		/* Initialize the contents of shared memory */
 		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory update failed");
 		VM_STAGE_PROCESSED(PSAT_GUEST_STARTED);
 		break;
@@ -402,7 +456,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm,
 	case PSAT_GUEST_PRIVATE_MEM_UPDATED: {
 		/* verify data to be same as what vmm wrote earlier */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_UPDATED);
 		break;
@@ -414,7 +468,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm,
 		 */
 		int ret = fallocate(priv_memfd,
 				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
-				0, TEST_MEM_SIZE);
+				0, test_mem_size);
 		TEST_ASSERT(ret != -1,
 			"fallocate failed in psat handling");
 		VM_STAGE_PROCESSED(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
@@ -423,7 +477,7 @@ static bool psat_handle_vm_stage(struct kvm_vm *vm,
 	case PSAT_GUEST_SHARED_MEM_UPDATED: {
 		/* verify data to be same as what guest wrote */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT2, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(PSAT_GUEST_SHARED_MEM_UPDATED);
 		break;
@@ -442,17 +496,20 @@ static void psat_guest_code(void)
 	int ret;
 
 	GUEST_SYNC(PSAT_GUEST_STARTED);
+
+	const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS);
+
 	/* Mark the GPA range to be treated as always accessed privately */
 	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
-		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		mem_size >> MIN_PAGE_SHIFT,
 		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
 	GUEST_ASSERT_1(ret == 0, ret);
 
 	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 	GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_UPDATED);
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 
 	GUEST_SYNC(PSAT_GUEST_PRIVATE_MEM_VERIFIED);
 
@@ -461,10 +518,10 @@ static void psat_guest_code(void)
 		0, KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
 	GUEST_ASSERT_1(ret == 0, ret);
 	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 	GUEST_SYNC(PSAT_GUEST_SHARED_MEM_UPDATED);
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 
 	GUEST_DONE();
 }
@@ -509,13 +566,13 @@ static bool spat_handle_vm_stage(struct kvm_vm *vm,
 		 */
 		int ret = fallocate(priv_memfd,
 				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
-				0, TEST_MEM_SIZE);
+				0, test_mem_size);
 		TEST_ASSERT(ret != -1,
 			"fallocate failed in spat handling");
 
 		/* Initialize the contents of shared memory */
 		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory update failed");
 		VM_STAGE_PROCESSED(SPAT_GUEST_STARTED);
 		break;
@@ -523,11 +580,11 @@ static bool spat_handle_vm_stage(struct kvm_vm *vm,
 	case SPAT_GUEST_SHARED_MEM_UPDATED: {
 		/* verify data to be same as what guest wrote earlier */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT2, test_mem_size),
 			"Shared memory view mismatch");
 		/* Allocate memory for private backing store */
 		int ret = fallocate(priv_memfd,
-				0, 0, TEST_MEM_SIZE);
+				0, 0, test_mem_size);
 		TEST_ASSERT(ret != -1,
 			"fallocate failed in spat handling");
 		VM_STAGE_PROCESSED(SPAT_GUEST_SHARED_MEM_UPDATED);
@@ -536,7 +593,7 @@ static bool spat_handle_vm_stage(struct kvm_vm *vm,
 	case SPAT_GUEST_PRIVATE_MEM_UPDATED: {
 		/* verify data to be same as what guest wrote earlier */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT2, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(SPAT_GUEST_PRIVATE_MEM_UPDATED);
 		break;
@@ -554,23 +611,26 @@ static void spat_guest_code(void)
 	void *shared_mem = (void *)TEST_MEM_GPA;
 	int ret;
 
+	const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS);
+
 	GUEST_SYNC(SPAT_GUEST_STARTED);
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT1, mem_size));
 	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
+
 	GUEST_SYNC(SPAT_GUEST_SHARED_MEM_UPDATED);
 	/* Mark the GPA range to be treated as always accessed privately */
 	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
-		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		mem_size >> MIN_PAGE_SHIFT,
 		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
 	GUEST_ASSERT_1(ret == 0, ret);
 
 	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT1, mem_size));
 	GUEST_SYNC(SPAT_GUEST_PRIVATE_MEM_UPDATED);
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT1, mem_size));
 	GUEST_DONE();
 }
 
@@ -617,7 +677,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm,
 	case PSPAHCT_GUEST_STARTED: {
 		/* Initialize the contents of shared memory */
 		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory update failed");
 		VM_STAGE_PROCESSED(PSPAHCT_GUEST_STARTED);
 		break;
@@ -625,7 +685,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm,
 	case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED: {
 		/* verify data to be same as what guest wrote earlier */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT1, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED);
 		break;
@@ -633,7 +693,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm,
 	case PSPAHCT_GUEST_SHARED_MEM_UPDATED: {
 		/* verify data to be same as what guest wrote earlier */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT2, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(PSPAHCT_GUEST_SHARED_MEM_UPDATED);
 		break;
@@ -641,7 +701,7 @@ static bool pspahct_handle_vm_stage(struct kvm_vm *vm,
 	case PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2: {
 		/* verify data to be same as what guest wrote earlier */
 		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
+			TEST_MEM_DATA_PAT2, test_mem_size),
 			"Shared memory view mismatch");
 		VM_STAGE_PROCESSED(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2);
 		break;
@@ -661,21 +721,23 @@ static void pspahct_guest_code(void)
 
 	GUEST_SYNC(PSPAHCT_GUEST_STARTED);
 
+	const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS);
+
 	/* Mark the GPA range to be treated as always accessed privately */
 	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
-		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		mem_size >> MIN_PAGE_SHIFT,
 		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
 	GUEST_ASSERT_1(ret == 0, ret);
 	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
-		TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+		TEST_MEM_DATA_PAT2, mem_size));
 
 	GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED);
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 
 	/* Map the GPA range to be treated as shared */
 	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
-		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		mem_size >> MIN_PAGE_SHIFT,
 		KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
 	GUEST_ASSERT_1(ret == 0, ret);
 
@@ -687,17 +749,17 @@ static void pspahct_guest_code(void)
 	GUEST_ASSERT_1(ret == 0, ret);
 
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT1, mem_size));
 	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 	GUEST_SYNC(PSPAHCT_GUEST_SHARED_MEM_UPDATED);
 
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
-			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT2, mem_size));
 
 	/* Map the GPA range to be treated as private */
 	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
-		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		mem_size >> MIN_PAGE_SHIFT,
 		KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
 	GUEST_ASSERT_1(ret == 0, ret);
 
@@ -705,15 +767,15 @@ static void pspahct_guest_code(void)
 	 * access
 	 */
 	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
-		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
+		mem_size >> MIN_PAGE_SHIFT,
 		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
 	GUEST_ASSERT_1(ret == 0, ret);
 
 	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT1, mem_size));
 	GUEST_SYNC(PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2);
 	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
-			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
+			TEST_MEM_DATA_PAT1, mem_size));
 	GUEST_DONE();
 }
 
@@ -758,7 +820,7 @@ static struct test_run_helper priv_memfd_testsuite[] = {
 static void handle_vm_exit_hypercall(struct kvm_run *run,
 	uint32_t test_id)
 {
-	uint64_t gpa, npages, attrs;
+	uint64_t gpa, npages, attrs, mem_end;
 	int priv_memfd =
 		priv_memfd_testsuite[test_id].priv_memfd;
 	int ret;
@@ -772,9 +834,10 @@ static void handle_vm_exit_hypercall(struct kvm_run *run,
 	gpa = run->hypercall.args[0];
 	npages = run->hypercall.args[1];
 	attrs = run->hypercall.args[2];
+	mem_end = test_mem_end(gpa, test_mem_size);
 
 	if ((gpa < TEST_MEM_GPA) || ((gpa +
-		(npages << MIN_PAGE_SHIFT)) > TEST_MEM_END)) {
+		(npages << MIN_PAGE_SHIFT)) > mem_end)) {
 		TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n",
 			gpa, npages);
 	}
@@ -800,7 +863,7 @@ static void handle_vm_exit_hypercall(struct kvm_run *run,
 static void handle_vm_exit_memory_error(struct kvm_run *run,
 	uint32_t test_id)
 {
-	uint64_t gpa, size, flags;
+	uint64_t gpa, size, flags, mem_end;
 	int ret;
 	int priv_memfd =
 		priv_memfd_testsuite[test_id].priv_memfd;
@@ -809,9 +872,10 @@ static void handle_vm_exit_memory_error(struct kvm_run *run,
 	gpa = run->memory.gpa;
 	size = run->memory.size;
 	flags = run->memory.flags;
+	mem_end = test_mem_end(gpa, test_mem_size);
 
 	if ((gpa < TEST_MEM_GPA) || ((gpa + size)
-					> TEST_MEM_END)) {
+					> mem_end)) {
 		TEST_FAIL("Unhandled gpa 0x%lx size 0x%lx\n",
 			gpa, size);
 	}
@@ -858,6 +922,22 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
 			continue;
 		}
 
+		if (run->exit_reason == KVM_EXIT_MMIO) {
+			if (run->mmio.phys_addr == MEM_SIZE_MMIO_ADDRESS) {
+				// tell the guest the size of the memory
+				// it's been allocated
+				int shift_amount = 0;
+
+				for (int i = 0; i < sizeof(uint64_t); ++i) {
+					run->mmio.data[i] =
+						(test_mem_size >>
+							shift_amount) & BYTE_MASK;
+					shift_amount += CHAR_BIT;
+				}
+			}
+			continue;
+		}
+
 		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
 			handle_vm_exit_hypercall(run, test_id);
 			continue;
@@ -896,7 +976,9 @@ static void priv_memory_region_add(struct kvm_vm *vm, void *mem, uint32_t slot,
 		guest_addr);
 }
 
-static void setup_and_execute_test(uint32_t test_id)
+static void setup_and_execute_test(uint32_t test_id,
+	const enum page_size shared,
+	const enum page_size private)
 {
 	struct kvm_vm *vm;
 	int priv_memfd;
@@ -907,27 +989,82 @@ static void setup_and_execute_test(uint32_t test_id)
 	vm = vm_create_default(VCPU_ID, 0,
 				priv_memfd_testsuite[test_id].guest_fn);
 
+	// use 2 pages by default
+	size_t mem_size = PAGE_SIZE_4KB * 2;
+	bool using_hugepages = false;
+
+	int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE;
+
+	switch (shared) {
+	case PAGE_4KB:
+		// no additional flags are needed
+		break;
+	case PAGE_2MB:
+		mmap_flags |= MAP_HUGETLB | MAP_HUGE_2MB | MAP_POPULATE;
+		mem_size = max(mem_size, PAGE_SIZE_2MB);
+		using_hugepages = true;
+		break;
+	case PAGE_1GB:
+		mmap_flags |= MAP_HUGETLB | MAP_HUGE_1GB | MAP_POPULATE;
+		mem_size = max(mem_size, PAGE_SIZE_1GB);
+		using_hugepages = true;
+		break;
+	default:
+		TEST_FAIL("unknown page size for shared memory\n");
+	}
+
+	unsigned int memfd_flags = MFD_INACCESSIBLE;
+
+	switch (private) {
+	case PAGE_4KB:
+		// no additional flags are needed
+		break;
+	case PAGE_2MB:
+		memfd_flags |= MFD_HUGETLB | MFD_HUGE_2MB;
+		mem_size = PAGE_SIZE_2MB;
+		using_hugepages = true;
+		break;
+	case PAGE_1GB:
+		memfd_flags |= MFD_HUGETLB | MFD_HUGE_1GB;
+		mem_size = PAGE_SIZE_1GB;
+		using_hugepages = true;
+		break;
+	default:
+		TEST_FAIL("unknown page size for private memory\n");
+	}
+
+	// set global for mem size to use later
+	test_mem_size = mem_size;
+
 	/* Allocate shared memory */
-	shared_mem = mmap(NULL, TEST_MEM_SIZE,
+	shared_mem = mmap(NULL, mem_size,
 			PROT_READ | PROT_WRITE,
-			MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
+			mmap_flags, -1, 0);
 	TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
 
+	if (using_hugepages) {
+		ret = madvise(shared_mem, mem_size, MADV_WILLNEED);
+		TEST_ASSERT(ret == 0, "madvise failed");
+	}
+
 	/* Allocate private memory */
-	priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
+	priv_memfd = memfd_create("vm_private_mem", memfd_flags);
 	TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd");
-	ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE);
+	ret = fallocate(priv_memfd, 0, 0, mem_size);
 	TEST_ASSERT(ret != -1, "fallocate failed");
 
 	priv_memory_region_add(vm, shared_mem,
-				TEST_MEM_SLOT, TEST_MEM_SIZE,
+				TEST_MEM_SLOT, mem_size,
 				TEST_MEM_GPA, priv_memfd, 0);
 
-	pr_info("Mapping test memory pages 0x%x page_size 0x%x\n",
-					TEST_MEM_SIZE/vm_get_page_size(vm),
+	pr_info("Mapping test memory pages 0x%zx page_size 0x%x\n",
+					mem_size/vm_get_page_size(vm),
 					vm_get_page_size(vm));
 	virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
-					(TEST_MEM_SIZE/vm_get_page_size(vm)));
+					(mem_size/vm_get_page_size(vm)));
+
+	// add mmio communication page
+	virt_map(vm, MEM_SIZE_MMIO_ADDRESS, MEM_SIZE_MMIO_ADDRESS, 1);
 
 	/* Enable exit on KVM_HC_MAP_GPA_RANGE */
 	pr_info("Enabling exit on map_gpa_range hypercall\n");
@@ -945,24 +1082,106 @@ static void setup_and_execute_test(uint32_t test_id)
 	priv_memfd_testsuite[test_id].priv_memfd = priv_memfd;
 	vcpu_work(vm, test_id);
 
-	munmap(shared_mem, TEST_MEM_SIZE);
+	munmap(shared_mem, mem_size);
 	priv_memfd_testsuite[test_id].shared_mem = NULL;
 	close(priv_memfd);
 	priv_memfd_testsuite[test_id].priv_memfd = -1;
 	kvm_vm_free(vm);
 }
 
+static void hugepage_requirements_text(const struct page_combo matrix)
+{
+	int pages_needed_2mb = 0;
+	int pages_needed_1gb = 0;
+	enum page_size sizes[] = { matrix.shared, matrix.private };
+
+	for (int i = 0; i < ARRAY_SIZE(sizes); ++i) {
+		if (sizes[i] == PAGE_2MB)
+			++pages_needed_2mb;
+		if (sizes[i] == PAGE_1GB)
+			++pages_needed_1gb;
+	}
+	if (pages_needed_2mb != 0 && pages_needed_1gb != 0) {
+		pr_info("This test requires %d 2MB page(s) and %d 1GB page(s)\n",
+				pages_needed_2mb, pages_needed_1gb);
+	} else if (pages_needed_2mb != 0) {
+		pr_info("This test requires %d 2MB page(s)\n", pages_needed_2mb);
+	} else if (pages_needed_1gb != 0) {
+		pr_info("This test requires %d 1GB page(s)\n", pages_needed_1gb);
+	}
+}
+
+static bool should_skip_test(const struct page_combo matrix,
+	const bool use_2mb_pages,
+	const bool use_1gb_pages)
+{
+	if ((matrix.shared == PAGE_2MB || matrix.private == PAGE_2MB)
+		&& !use_2mb_pages)
+		return true;
+	if ((matrix.shared == PAGE_1GB || matrix.private == PAGE_1GB)
+		&& !use_1gb_pages)
+		return true;
+	return false;
+}
+
+static void print_help(const char *const name)
+{
+	puts("");
+	printf("usage: %s [-h] [-m] [-g]\n", name);
+	puts("");
+	printf(" -h: Display this help message\n");
+	printf(" -m: include test runs using 2MB page permutations\n");
+	printf(" -g: include test runs using 1GB page permutations\n");
+	exit(0);
+}
+
 int main(int argc, char *argv[])
 {
 	/* Tell stdout not to buffer its content */
 	setbuf(stdout, NULL);
 
+	// arg parsing
+	int opt;
+	bool use_2mb_pages = false;
+	bool use_1gb_pages = false;
+
+	while ((opt = getopt(argc, argv, "mgh")) != -1) {
+		switch (opt) {
+		case 'm':
+			use_2mb_pages = true;
+			break;
+		case 'g':
+			use_1gb_pages = true;
+			break;
+		case 'h':
+		default:
+			print_help(argv[0]);
+		}
+	}
+
+	struct page_combo page_size_matrix[] = {
+		{ .shared = PAGE_4KB, .private = PAGE_4KB },
+		{ .shared = PAGE_2MB, .private = PAGE_4KB },
+	};
+
 	for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) {
-		pr_info("=== Starting test %s... ===\n",
-				priv_memfd_testsuite[i].test_desc);
-		setup_and_execute_test(i);
-		pr_info("--- completed test %s ---\n\n",
-				priv_memfd_testsuite[i].test_desc);
+		for (uint32_t j = 0; j < ARRAY_SIZE(page_size_matrix); j++) {
+			const struct page_combo current_page_matrix = page_size_matrix[j];
+
+			if (should_skip_test(current_page_matrix,
+				use_2mb_pages, use_1gb_pages))
+				break;
+			pr_info("=== Starting test %s... ===\n",
+					priv_memfd_testsuite[i].test_desc);
+			pr_info("using page sizes shared: %s private: %s\n",
+					page_size_to_str(current_page_matrix.shared),
+					page_size_to_str(current_page_matrix.private));
+			hugepage_requirements_text(current_page_matrix);
+			setup_and_execute_test(i, current_page_matrix.shared,
+				current_page_matrix.private);
+			pr_info("--- completed test %s ---\n\n",
+					priv_memfd_testsuite[i].test_desc);
+		}
 	}
 
 	return 0;
-- 
2.36.0.550.gb090851708-goog



* [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test avoiding double allocation
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
                   ` (6 preceding siblings ...)
  2022-05-11  0:08 ` [RFC V2 PATCH 7/8] selftests: kvm: Add hugepage support to priv_memfd_test suite Vishal Annapurve
@ 2022-05-11  0:08 ` Vishal Annapurve
  2022-05-12 17:52   ` Shuah Khan
  2022-05-11  0:08 ` [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test without " Vishal Annapurve
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-11  0:08 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Vishal Annapurve

Add a memory conversion test that avoids double allocation of the
memory backing guest gpa ranges.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/priv_memfd_test.c | 225 ++++++++++++++++--
 1 file changed, 211 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
index dbe6ead92ba7..3b6e84cf6a44 100644
--- a/tools/testing/selftests/kvm/priv_memfd_test.c
+++ b/tools/testing/selftests/kvm/priv_memfd_test.c
@@ -63,6 +63,8 @@ struct test_run_helper {
 	guest_code_fn guest_fn;
 	void *shared_mem;
 	int priv_memfd;
+	bool disallow_boot_shared_access;
+	bool toggle_shared_mem_state;
 };
 
 enum page_size {
@@ -779,6 +781,151 @@ static void pspahct_guest_code(void)
 	GUEST_DONE();
 }
 
+/* Test to verify guest accesses without double allocation:
+ * Guest starts with shared memory access disallowed by default.
+ * 1) Guest writes the private memory privately via a known pattern
+ * 2) Guest reads the private memory privately and verifies that the contents
+ *    are the same as written.
+ * 3) Guest invokes KVM_HC_MAP_GPA_RANGE to map the gpa range as shared
+ *    and marks the range to be accessed via shared access.
+ * 4) Guest writes shared memory with another pattern and signals VMM
+ * 5) VMM verifies the memory contents to be the same as written by the guest
+ *    in step 4 and updates the memory with a different pattern
+ * 6) Guest verifies the memory contents to be the same as written in step 5.
+ * 7) Guest invokes KVM_HC_MAP_GPA_RANGE to map the gpa range as private
+ *    and marks the range to be accessed via private access.
+ * 8) Guest writes a known pattern to the test memory and verifies the
+ *    contents to be the same as written.
+ * 9) Guest invokes KVM_HC_MAP_GPA_RANGE to map the gpa range as shared
+ *    and marks the range to be accessed via shared access.
+ * 10) Guest writes shared memory with another pattern and signals VMM
+ * 11) VMM verifies the memory contents to be the same as written by the guest
+ *     in step 10 and updates the memory with a different pattern
+ * 12) Guest verifies the memory contents to be the same as written in step 11.
+ */
+#define PSAWDAT_ID	7
+#define PSAWDAT_DESC	"PrivateSharedAccessWithoutDoubleAllocationTest"
+
+#define PSAWDAT_GUEST_SHARED_MEM_UPDATED1		1ULL
+#define PSAWDAT_GUEST_SHARED_MEM_UPDATED2		2ULL
+
+static bool psawdat_handle_vm_stage(struct kvm_vm *vm,
+			void *test_info,
+			uint64_t stage)
+{
+	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
+
+	switch (stage) {
+	case PSAWDAT_GUEST_SHARED_MEM_UPDATED1: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT2, test_mem_size),
+			"Shared memory view mismatch");
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT1, test_mem_size),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PSAWDAT_GUEST_SHARED_MEM_UPDATED1);
+		break;
+	}
+	case PSAWDAT_GUEST_SHARED_MEM_UPDATED2: {
+		/* verify data to be same as what guest wrote earlier */
+		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
+			TEST_MEM_DATA_PAT3, test_mem_size),
+			"Shared memory view mismatch");
+		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
+			TEST_MEM_DATA_PAT4, test_mem_size),
+			"Shared memory update failed");
+		VM_STAGE_PROCESSED(PSAWDAT_GUEST_SHARED_MEM_UPDATED2);
+		break;
+	}
+	default:
+		printf("Unhandled VM stage %ld\n", stage);
+		return false;
+	}
+
+	return true;
+}
+
+static void psawdat_guest_code(void)
+{
+	void *test_mem = (void *)TEST_MEM_GPA;
+	int ret;
+
+	const size_t mem_size = *((size_t *)MEM_SIZE_MMIO_ADDRESS);
+
+	/* Mark the GPA range to be treated as always accessed privately */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+			TEST_MEM_DATA_PAT1, mem_size));
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT1, mem_size));
+
+	/* Map the GPA range to be treated as shared */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	/* Mark the GPA range to be treated as always accessed via shared
+	 * access
+	 */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, mem_size));
+	GUEST_SYNC(PSAWDAT_GUEST_SHARED_MEM_UPDATED1);
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT1, mem_size));
+
+	/* Map the GPA range to be treated as private */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	/* Mark the GPA range to be treated as always accessed via private
+	 * access
+	 */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, mem_size));
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT2, mem_size));
+
+	/* Map the GPA range to be treated as shared */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
+		mem_size >> MIN_PAGE_SHIFT,
+		KVM_MAP_GPA_RANGE_DECRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	/* Mark the GPA range to be treated as always accessed via shared
+	 * access
+	 */
+	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, 0, 0,
+		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
+	GUEST_ASSERT_1(ret == 0, ret);
+
+	GUEST_ASSERT(do_mem_op(SET_PAT, test_mem,
+			TEST_MEM_DATA_PAT3, mem_size));
+	GUEST_SYNC(PSAWDAT_GUEST_SHARED_MEM_UPDATED2);
+
+	GUEST_ASSERT(do_mem_op(VERIFY_PAT, test_mem,
+			TEST_MEM_DATA_PAT4, mem_size));
+
+	GUEST_DONE();
+}
+
 static struct test_run_helper priv_memfd_testsuite[] = {
 	[PMPAT_ID] = {
 		.test_desc = PMPAT_DESC,
@@ -815,6 +962,13 @@ static struct test_run_helper priv_memfd_testsuite[] = {
 		.vmst_handler = pspahct_handle_vm_stage,
 		.guest_fn = pspahct_guest_code,
 	},
+	[PSAWDAT_ID] = {
+		.test_desc = PSAWDAT_DESC,
+		.vmst_handler = psawdat_handle_vm_stage,
+		.guest_fn = psawdat_guest_code,
+		.toggle_shared_mem_state = true,
+		.disallow_boot_shared_access = true,
+	},
 };
 
 static void handle_vm_exit_hypercall(struct kvm_run *run,
@@ -825,6 +979,10 @@ static void handle_vm_exit_hypercall(struct kvm_run *run,
 		priv_memfd_testsuite[test_id].priv_memfd;
 	int ret;
 	int fallocate_mode;
+	void *shared_mem = priv_memfd_testsuite[test_id].shared_mem;
+	bool toggle_shared_mem_state =
+		priv_memfd_testsuite[test_id].toggle_shared_mem_state;
+	int mprotect_mode;
 
 	if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE) {
 		TEST_FAIL("Unhandled Hypercall %lld\n",
@@ -842,11 +1000,13 @@ static void handle_vm_exit_hypercall(struct kvm_run *run,
 			gpa, npages);
 	}
 
-	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
+	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) {
 		fallocate_mode = 0;
-	else {
+		mprotect_mode = PROT_NONE;
+	} else {
 		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
 			FALLOC_FL_KEEP_SIZE);
+		mprotect_mode = PROT_READ | PROT_WRITE;
 	}
 	pr_info("Converting off 0x%lx pages 0x%lx to %s\n",
 		(gpa - TEST_MEM_GPA), npages,
@@ -857,6 +1017,17 @@ static void handle_vm_exit_hypercall(struct kvm_run *run,
 		npages << MIN_PAGE_SHIFT);
 	TEST_ASSERT(ret != -1,
 		"fallocate failed in hc handling");
+	if (toggle_shared_mem_state) {
+		if (fallocate_mode) {
+			ret = madvise(shared_mem, test_mem_size,
+				MADV_DONTNEED);
+			TEST_ASSERT(ret != -1,
+				"madvise failed in hc handling");
+		}
+		ret = mprotect(shared_mem, test_mem_size, mprotect_mode);
+		TEST_ASSERT(ret != -1,
+			"mprotect failed in hc handling");
+	}
 	run->hypercall.ret = 0;
 }
 
@@ -867,7 +1038,11 @@ static void handle_vm_exit_memory_error(struct kvm_run *run,
 	int ret;
 	int priv_memfd =
 		priv_memfd_testsuite[test_id].priv_memfd;
+	void *shared_mem = priv_memfd_testsuite[test_id].shared_mem;
+	bool toggle_shared_mem_state =
+		priv_memfd_testsuite[test_id].toggle_shared_mem_state;
 	int fallocate_mode;
+	int mprotect_mode;
 
 	gpa = run->memory.gpa;
 	size = run->memory.size;
@@ -880,11 +1055,13 @@ static void handle_vm_exit_memory_error(struct kvm_run *run,
 			gpa, size);
 	}
 
-	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE)
+	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE) {
 		fallocate_mode = 0;
-	else {
+		mprotect_mode = PROT_NONE;
+	} else {
 		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
 				FALLOC_FL_KEEP_SIZE);
+		mprotect_mode = PROT_READ | PROT_WRITE;
 	}
 	pr_info("Converting off 0x%lx size 0x%lx to %s\n",
 		(gpa - TEST_MEM_GPA), size,
@@ -894,6 +1071,18 @@ static void handle_vm_exit_memory_error(struct kvm_run *run,
 		(gpa - TEST_MEM_GPA), size);
 	TEST_ASSERT(ret != -1,
 		"fallocate failed in memory error handling");
+
+	if (toggle_shared_mem_state) {
+		if (fallocate_mode) {
+			ret = madvise(shared_mem, test_mem_size,
+				MADV_DONTNEED);
+			TEST_ASSERT(ret != -1,
+				"madvise failed in memory error handling");
+		}
+		ret = mprotect(shared_mem, test_mem_size, mprotect_mode);
+		TEST_ASSERT(ret != -1,
+			"mprotect failed in memory error handling");
+	}
 }
 
 static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
@@ -924,14 +1113,14 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
 
 		if (run->exit_reason == KVM_EXIT_MMIO) {
 			if (run->mmio.phys_addr == MEM_SIZE_MMIO_ADDRESS) {
-				// tell the guest the size of the memory
-				// it's been allocated
+				/* tell the guest the size of the memory it's
+				 * been allocated
+				 */
 				int shift_amount = 0;
 
 				for (int i = 0; i < sizeof(uint64_t); ++i) {
-					run->mmio.data[i] =
-						(test_mem_size >>
-							shift_amount) & BYTE_MASK;
+					run->mmio.data[i] = (test_mem_size >>
+						shift_amount) & BYTE_MASK;
 					shift_amount += CHAR_BIT;
 				}
 			}
@@ -985,6 +1174,9 @@ static void setup_and_execute_test(uint32_t test_id,
 	int ret;
 	void *shared_mem;
 	struct kvm_enable_cap cap;
+	bool disallow_boot_shared_access =
+		priv_memfd_testsuite[test_id].disallow_boot_shared_access;
+	int prot_flags = PROT_READ | PROT_WRITE;
 
 	vm = vm_create_default(VCPU_ID, 0,
 				priv_memfd_testsuite[test_id].guest_fn);
@@ -1036,10 +1228,12 @@ static void setup_and_execute_test(uint32_t test_id,
 	// set global for mem size to use later
 	test_mem_size = mem_size;
 
+	if (disallow_boot_shared_access)
+		prot_flags = PROT_NONE;
+
 	/* Allocate shared memory */
 	shared_mem = mmap(NULL, mem_size,
-			PROT_READ | PROT_WRITE,
-			mmap_flags, -1, 0);
+			prot_flags, mmap_flags, -1, 0);
 	TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
 
 	if (using_hugepages) {
@@ -1166,7 +1360,8 @@ int main(int argc, char *argv[])
 
 	for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) {
 		for (uint32_t j = 0; j < ARRAY_SIZE(page_size_matrix); j++) {
-			const struct page_combo current_page_matrix = page_size_matrix[j];
+			const struct page_combo current_page_matrix =
+							page_size_matrix[j];
 
 			if (should_skip_test(current_page_matrix,
 				use_2mb_pages, use_1gb_pages))
@@ -1174,8 +1369,10 @@ int main(int argc, char *argv[])
 			pr_info("=== Starting test %s... ===\n",
 					priv_memfd_testsuite[i].test_desc);
 			pr_info("using page sizes shared: %s private: %s\n",
-					page_size_to_str(current_page_matrix.shared),
-					page_size_to_str(current_page_matrix.private));
+					page_size_to_str(
+						current_page_matrix.shared),
+					page_size_to_str(
+						current_page_matrix.private));
 			hugepage_requirements_text(current_page_matrix);
 			setup_and_execute_test(i, current_page_matrix.shared,
 				current_page_matrix.private);
-- 
2.36.0.550.gb090851708-goog


 	int ret;
 	int fallocate_mode;
+	void *shared_mem = priv_memfd_testsuite[test_id].shared_mem;
+	bool toggle_shared_mem_state =
+		priv_memfd_testsuite[test_id].toggle_shared_mem_state;
+	int mprotect_mode;
 
 	if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE) {
 		TEST_FAIL("Unhandled Hypercall %lld\n",
@@ -842,11 +1000,13 @@ static void handle_vm_exit_hypercall(struct kvm_run *run,
 			gpa, npages);
 	}
 
-	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
+	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) {
 		fallocate_mode = 0;
-	else {
+		mprotect_mode = PROT_NONE;
+	} else {
 		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
 			FALLOC_FL_KEEP_SIZE);
+		mprotect_mode = PROT_READ | PROT_WRITE;
 	}
 	pr_info("Converting off 0x%lx pages 0x%lx to %s\n",
 		(gpa - TEST_MEM_GPA), npages,
@@ -857,6 +1017,17 @@ static void handle_vm_exit_hypercall(struct kvm_run *run,
 		npages << MIN_PAGE_SHIFT);
 	TEST_ASSERT(ret != -1,
 		"fallocate failed in hc handling");
+	if (toggle_shared_mem_state) {
+		if (fallocate_mode) {
+			ret = madvise(shared_mem, test_mem_size,
+				MADV_DONTNEED);
+			TEST_ASSERT(ret != -1,
+				"madvise failed in hc handling");
+		}
+		ret = mprotect(shared_mem, test_mem_size, mprotect_mode);
+		TEST_ASSERT(ret != -1,
+			"mprotect failed in hc handling");
+	}
 	run->hypercall.ret = 0;
 }
 
@@ -867,7 +1038,11 @@ static void handle_vm_exit_memory_error(struct kvm_run *run,
 	int ret;
 	int priv_memfd =
 		priv_memfd_testsuite[test_id].priv_memfd;
+	void *shared_mem = priv_memfd_testsuite[test_id].shared_mem;
+	bool toggle_shared_mem_state =
+		priv_memfd_testsuite[test_id].toggle_shared_mem_state;
 	int fallocate_mode;
+	int mprotect_mode;
 
 	gpa = run->memory.gpa;
 	size = run->memory.size;
@@ -880,11 +1055,13 @@ static void handle_vm_exit_memory_error(struct kvm_run *run,
 			gpa, size);
 	}
 
-	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE)
+	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE) {
 		fallocate_mode = 0;
-	else {
+		mprotect_mode = PROT_NONE;
+	} else {
 		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
 				FALLOC_FL_KEEP_SIZE);
+		mprotect_mode = PROT_READ | PROT_WRITE;
 	}
 	pr_info("Converting off 0x%lx size 0x%lx to %s\n",
 		(gpa - TEST_MEM_GPA), size,
@@ -894,6 +1071,18 @@ static void handle_vm_exit_memory_error(struct kvm_run *run,
 		(gpa - TEST_MEM_GPA), size);
 	TEST_ASSERT(ret != -1,
 		"fallocate failed in memory error handling");
+
+	if (toggle_shared_mem_state) {
+		if (fallocate_mode) {
+			ret = madvise(shared_mem, test_mem_size,
+				MADV_DONTNEED);
+			TEST_ASSERT(ret != -1,
+				"madvise failed in memory error handling");
+		}
+		ret = mprotect(shared_mem, test_mem_size, mprotect_mode);
+		TEST_ASSERT(ret != -1,
+			"mprotect failed in memory error handling");
+	}
 }
 
 static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
@@ -924,14 +1113,14 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
 
 		if (run->exit_reason == KVM_EXIT_MMIO) {
 			if (run->mmio.phys_addr == MEM_SIZE_MMIO_ADDRESS) {
-				// tell the guest the size of the memory
-				// it's been allocated
+				/* tell the guest the size of the memory it's
+				 * been allocated
+				 */
 				int shift_amount = 0;
 
 				for (int i = 0; i < sizeof(uint64_t); ++i) {
-					run->mmio.data[i] =
-						(test_mem_size >>
-							shift_amount) & BYTE_MASK;
+					run->mmio.data[i] = (test_mem_size >>
+						shift_amount) & BYTE_MASK;
 					shift_amount += CHAR_BIT;
 				}
 			}
@@ -985,6 +1174,9 @@ static void setup_and_execute_test(uint32_t test_id,
 	int ret;
 	void *shared_mem;
 	struct kvm_enable_cap cap;
+	bool disallow_boot_shared_access =
+		priv_memfd_testsuite[test_id].disallow_boot_shared_access;
+	int prot_flags = PROT_READ | PROT_WRITE;
 
 	vm = vm_create_default(VCPU_ID, 0,
 				priv_memfd_testsuite[test_id].guest_fn);
@@ -1036,10 +1228,12 @@ static void setup_and_execute_test(uint32_t test_id,
 	// set global for mem size to use later
 	test_mem_size = mem_size;
 
+	if (disallow_boot_shared_access)
+		prot_flags = PROT_NONE;
+
 	/* Allocate shared memory */
 	shared_mem = mmap(NULL, mem_size,
-			PROT_READ | PROT_WRITE,
-			mmap_flags, -1, 0);
+			prot_flags, mmap_flags, -1, 0);
 	TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
 
 	if (using_hugepages) {
@@ -1166,7 +1360,8 @@ int main(int argc, char *argv[])
 
 	for (uint32_t i = 0; i < ARRAY_SIZE(priv_memfd_testsuite); i++) {
 		for (uint32_t j = 0; j < ARRAY_SIZE(page_size_matrix); j++) {
-			const struct page_combo current_page_matrix = page_size_matrix[j];
+			const struct page_combo current_page_matrix =
+							page_size_matrix[j];
 
 			if (should_skip_test(current_page_matrix,
 				use_2mb_pages, use_1gb_pages))
@@ -1174,8 +1369,10 @@ int main(int argc, char *argv[])
 			pr_info("=== Starting test %s... ===\n",
 					priv_memfd_testsuite[i].test_desc);
 			pr_info("using page sizes shared: %s private: %s\n",
-					page_size_to_str(current_page_matrix.shared),
-					page_size_to_str(current_page_matrix.private));
+					page_size_to_str(
+						current_page_matrix.shared),
+					page_size_to_str(
+						current_page_matrix.private));
 			hugepage_requirements_text(current_page_matrix);
 			setup_and_execute_test(i, current_page_matrix.shared,
 				current_page_matrix.private);
-- 
2.36.0.550.gb090851708-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory
  2022-05-11  0:08 ` [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory Vishal Annapurve
@ 2022-05-12 17:29   ` Shuah Khan
  2022-07-20 23:03   ` Sean Christopherson
  1 sibling, 0 replies; 24+ messages in thread
From: Shuah Khan @ 2022-05-12 17:29 UTC (permalink / raw)
  To: Vishal Annapurve, x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Shuah Khan

On 5/10/22 6:08 PM, Vishal Annapurve wrote:
> Add KVM selftest to access private memory privately
> from the guest to test that memory updates from guest
> and userspace vmm don't affect each other.
> 
> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---
>   tools/testing/selftests/kvm/Makefile          |   1 +
>   tools/testing/selftests/kvm/priv_memfd_test.c | 283 ++++++++++++++++++
>   2 files changed, 284 insertions(+)
>   create mode 100644 tools/testing/selftests/kvm/priv_memfd_test.c
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 21c2dbd21a81..f2f9a8546c66 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -97,6 +97,7 @@ TEST_GEN_PROGS_x86_64 += max_guest_memory_test
>   TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test
>   TEST_GEN_PROGS_x86_64 += memslot_perf_test
>   TEST_GEN_PROGS_x86_64 += rseq_test
> +TEST_GEN_PROGS_x86_64 += priv_memfd_test

Add this new executable to .gitignore. Also make sure that any
error paths return skip rather than fail in case of unmet
dependencies.

Looks good otherwise.

Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 1/8] selftests: kvm: Fix inline assembly for hypercall
  2022-05-11  0:08 ` [RFC V2 PATCH 1/8] selftests: kvm: Fix inline assembly for hypercall Vishal Annapurve
@ 2022-05-12 17:31   ` Shuah Khan
  0 siblings, 0 replies; 24+ messages in thread
From: Shuah Khan @ 2022-05-12 17:31 UTC (permalink / raw)
  To: Vishal Annapurve, x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Shuah Khan

On 5/10/22 6:08 PM, Vishal Annapurve wrote:
> Fix inline assembly for hypercall to explicitly set
> eax with hypercall number to allow the implementation
> to work even in cases where compiler would inline the
> function.
> 

Please explain what happens without this change as well.

> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---
>   tools/testing/selftests/kvm/lib/x86_64/processor.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 9f000dfb5594..4d88e1a553bf 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -1461,7 +1461,7 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
>   
>   	asm volatile("vmcall"
>   		     : "=a"(r)
> -		     : "b"(a0), "c"(a1), "d"(a2), "S"(a3));
> +		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
>   	return r;
>   }
>   
> 

With the above change to commit log:

Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 3/8] selftests: kvm: priv_memfd_test: Add support for memory conversion
  2022-05-11  0:08 ` [RFC V2 PATCH 3/8] selftests: kvm: priv_memfd_test: Add support for memory conversion Vishal Annapurve
@ 2022-05-12 17:40   ` Shuah Khan
  0 siblings, 0 replies; 24+ messages in thread
From: Shuah Khan @ 2022-05-12 17:40 UTC (permalink / raw)
  To: Vishal Annapurve, x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Shuah Khan

On 5/10/22 6:08 PM, Vishal Annapurve wrote:
> Add handling of explicit private/shared memory conversion using
> KVM_HC_MAP_GPA_RANGE and implicit memory conversion by handling
> KVM_EXIT_MEMORY_ERROR.
> 
> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---
>   tools/testing/selftests/kvm/priv_memfd_test.c | 87 +++++++++++++++++++
>   1 file changed, 87 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
> index bbb58c62e186..55e24c893b07 100644
> --- a/tools/testing/selftests/kvm/priv_memfd_test.c
> +++ b/tools/testing/selftests/kvm/priv_memfd_test.c
> @@ -155,6 +155,83 @@ static struct test_run_helper priv_memfd_testsuite[] = {
>   	},
>   };
>   
> +static void handle_vm_exit_hypercall(struct kvm_run *run,
> +	uint32_t test_id)
> +{
> +	uint64_t gpa, npages, attrs;
> +	int priv_memfd =
> +		priv_memfd_testsuite[test_id].priv_memfd;

Do you need this on a separate line? Doesn't look like it will exceed
the limit with the tab?

> +	int ret;
> +	int fallocate_mode;
> +
> +	if (run->hypercall.nr != KVM_HC_MAP_GPA_RANGE) {
> +		TEST_FAIL("Unhandled Hypercall %lld\n",
> +					run->hypercall.nr);

Is this considered test fail or skip because of unmet dependency?
Also, do you need run->hypercall.nr on a separate line?

> +	}
> +
> +	gpa = run->hypercall.args[0];
> +	npages = run->hypercall.args[1];
> +	attrs = run->hypercall.args[2];
> +
> +	if ((gpa < TEST_MEM_GPA) || ((gpa +
> +		(npages << MIN_PAGE_SHIFT)) > TEST_MEM_END)) {
> +		TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n",
> +			gpa, npages);

Same question here about putting gpa, npages on a separate line. Also
align them with the opening parenthesis for readability.

TEST_FAIL("Unhandled gpa 0x%lx npages %ld\n",
	  gpa, npages);
  
> +	}
> +
> +	if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
> +		fallocate_mode = 0;
> +	else {
> +		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
> +			FALLOC_FL_KEEP_SIZE);
> +	}
> +	pr_info("Converting off 0x%lx pages 0x%lx to %s\n",
> +		(gpa - TEST_MEM_GPA), npages,
> +		fallocate_mode ?
> +			"shared" : "private");
> +	ret = fallocate(priv_memfd, fallocate_mode,
> +		(gpa - TEST_MEM_GPA),
> +		npages << MIN_PAGE_SHIFT);
> +	TEST_ASSERT(ret != -1,
> +		"fallocate failed in hc handling");
> +	run->hypercall.ret = 0;
> +}
> +
> +static void handle_vm_exit_memory_error(struct kvm_run *run,
> +	uint32_t test_id)
> +{
> +	uint64_t gpa, size, flags;
> +	int ret;
> +	int priv_memfd =
> +		priv_memfd_testsuite[test_id].priv_memfd;
> +	int fallocate_mode;
> +
> +	gpa = run->memory.gpa;
> +	size = run->memory.size;
> +	flags = run->memory.flags;
> +
> +	if ((gpa < TEST_MEM_GPA) || ((gpa + size)
> +					> TEST_MEM_END)) {
> +		TEST_FAIL("Unhandled gpa 0x%lx size 0x%lx\n",
> +			gpa, size);
> +	}
> +
> +	if (flags & KVM_MEMORY_EXIT_FLAG_PRIVATE)
> +		fallocate_mode = 0;
> +	else {
> +		fallocate_mode = (FALLOC_FL_PUNCH_HOLE |
> +				FALLOC_FL_KEEP_SIZE);
> +	}
> +	pr_info("Converting off 0x%lx size 0x%lx to %s\n",
> +		(gpa - TEST_MEM_GPA), size,
> +		fallocate_mode ?
> +			"shared" : "private");
> +	ret = fallocate(priv_memfd, fallocate_mode,
> +		(gpa - TEST_MEM_GPA), size);
> +	TEST_ASSERT(ret != -1,
> +		"fallocate failed in memory error handling");
> +}
> +
>   static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
>   {
>   	struct kvm_run *run;
> @@ -181,6 +258,16 @@ static void vcpu_work(struct kvm_vm *vm, uint32_t test_id)
>   			continue;
>   		}
>   
> +		if (run->exit_reason == KVM_EXIT_HYPERCALL) {
> +			handle_vm_exit_hypercall(run, test_id);
> +			continue;
> +		}
> +
> +		if (run->exit_reason == KVM_EXIT_MEMORY_ERROR) {
> +			handle_vm_exit_memory_error(run, test_id);
> +			continue;
> +		}
> +
>   		TEST_FAIL("Unhandled VCPU exit reason %d\n", run->exit_reason);
>   		break;
>   	}
> 

Looks like you can easily combine lines without running into # chars limit
for several lines of code in this patch. If you haven't already, run
checkpatch to make sure coding guidelines are honored.

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 4/8] selftests: kvm: priv_memfd_test: Add shared access test
  2022-05-11  0:08 ` [RFC V2 PATCH 4/8] selftests: kvm: priv_memfd_test: Add shared access test Vishal Annapurve
@ 2022-05-12 17:45   ` Shuah Khan
  0 siblings, 0 replies; 24+ messages in thread
From: Shuah Khan @ 2022-05-12 17:45 UTC (permalink / raw)
  To: Vishal Annapurve, x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness

On 5/10/22 6:08 PM, Vishal Annapurve wrote:
> Add a test to access private memory in shared fashion
> which should exercise implicit memory conversion path
> using KVM_EXIT_MEMORY_ERROR.
> 

This comment applies all patches in this series. Keep commit log
line length around 76 for readability in "git log" display.


Also, same comment about combining lines of code when it isn't
necessary to split them; align continuation lines with the opening
parenthesis for readability, and run checkpatch.

> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---
>   tools/testing/selftests/kvm/priv_memfd_test.c | 69 +++++++++++++++++++
>   1 file changed, 69 insertions(+)
> 
> diff --git a/tools/testing/selftests/kvm/priv_memfd_test.c b/tools/testing/selftests/kvm/priv_memfd_test.c
> index 55e24c893b07..48bc4343e7b5 100644
> --- a/tools/testing/selftests/kvm/priv_memfd_test.c
> +++ b/tools/testing/selftests/kvm/priv_memfd_test.c
> @@ -147,12 +147,81 @@ static void pmpat_guest_code(void)
>   	GUEST_DONE();
>   }
>   
> +/* Test to verify guest shared accesses on private memory with following steps:
> + * 1) Upon entry, guest signals VMM that it has started.
> + * 2) VMM populates the shared memory with known pattern and continues guest
> + *    execution.
> + * 3) Guest reads private gpa range in a shared fashion and verifies that it
> + *    reads what VMM has written in step2.
> + * 3) Guest writes a different pattern on the shared memory and signals VMM
> + *      that it has updated the shared memory.
> + * 4) VMM verifies shared memory contents to be same as the data populated
> + *      in step 3 and continues guest execution.
> + */
> +#define PMSAT_ID				1
> +#define PMSAT_DESC				"PrivateMemorySharedAccessTest"
> +
> +/* Guest code execution stages for private mem access test */
> +#define PMSAT_GUEST_STARTED			0ULL
> +#define PMSAT_GUEST_TEST_MEM_UPDATED		1ULL
> +
> +static bool pmsat_handle_vm_stage(struct kvm_vm *vm,
> +			void *test_info,
> +			uint64_t stage)
> +{
> +	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
> +
> +	switch (stage) {
> +	case PMSAT_GUEST_STARTED: {
> +		/* Initialize the contents of shared memory */
> +		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
> +			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
> +			"Shared memory update failed");
> +		VM_STAGE_PROCESSED(PMSAT_GUEST_STARTED);
> +		break;
> +	}
> +	case PMSAT_GUEST_TEST_MEM_UPDATED: {
> +		/* verify data to be same as what guest wrote */
> +		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
> +			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE),
> +			"Shared memory view mismatch");
> +		VM_STAGE_PROCESSED(PMSAT_GUEST_TEST_MEM_UPDATED);
> +		break;
> +	}
> +	default:
> +		printf("Unhandled VM stage %ld\n", stage);

Is this a test failure? Add more information to tell the user why it isn't handled.

> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +static void pmsat_guest_code(void)
> +{
> +	void *shared_mem = (void *)TEST_MEM_GPA;
> +
> +	GUEST_SYNC(PMSAT_GUEST_STARTED);
> +	GUEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
> +			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE));
> +
> +	GUEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
> +			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
> +	GUEST_SYNC(PMSAT_GUEST_TEST_MEM_UPDATED);
> +
> +	GUEST_DONE();
> +}
> +
>   static struct test_run_helper priv_memfd_testsuite[] = {
>   	[PMPAT_ID] = {
>   		.test_desc = PMPAT_DESC,
>   		.vmst_handler = pmpat_handle_vm_stage,
>   		.guest_fn = pmpat_guest_code,
>   	},
> +	[PMSAT_ID] = {
> +		.test_desc = PMSAT_DESC,
> +		.vmst_handler = pmsat_handle_vm_stage,
> +		.guest_fn = pmsat_guest_code,
> +	},
>   };
>   
>   static void handle_vm_exit_hypercall(struct kvm_run *run,
> 

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 5/8] selftests: kvm: Add implicit memory conversion tests
  2022-05-11  0:08 ` [RFC V2 PATCH 5/8] selftests: kvm: Add implicit memory conversion tests Vishal Annapurve
@ 2022-05-12 17:48   ` Shuah Khan
  0 siblings, 0 replies; 24+ messages in thread
From: Shuah Khan @ 2022-05-12 17:48 UTC (permalink / raw)
  To: Vishal Annapurve, x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Shuah Khan

On 5/10/22 6:08 PM, Vishal Annapurve wrote:
> Add tests to exercise implicit memory conversion path.
> 

Too cryptic. Add more information on what kind of tests this runs,
and include example test output along with how-to-run instructions.
   
> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---

Please see comments about coding style related comments on other
patches in this series.

thanks,
-- Shuah


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 6/8] selftests: kvm: Add KVM_HC_MAP_GPA_RANGE hypercall test
  2022-05-11  0:08 ` [RFC V2 PATCH 6/8] selftests: kvm: Add KVM_HC_MAP_GPA_RANGE hypercall test Vishal Annapurve
@ 2022-05-12 17:50   ` Shuah Khan
  0 siblings, 0 replies; 24+ messages in thread
From: Shuah Khan @ 2022-05-12 17:50 UTC (permalink / raw)
  To: Vishal Annapurve, x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Shuah Khan

On 5/10/22 6:08 PM, Vishal Annapurve wrote:
> Add test to exercise explicit memory conversion path using
> KVM_HC_MAP_GPA_RANGE hypercall.
> 

Add details on what this test does along with sample output. Patch 7/8
in this series is a good example of what I am looking for.

> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---

Please see comments about coding style related comments on other
patches in this series.

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 7/8] selftests: kvm: Add hugepage support to priv_memfd_test suite.
  2022-05-11  0:08 ` [RFC V2 PATCH 7/8] selftests: kvm: Add hugepage support to priv_memfd_test suite Vishal Annapurve
@ 2022-05-12 17:51   ` Shuah Khan
  0 siblings, 0 replies; 24+ messages in thread
From: Shuah Khan @ 2022-05-12 17:51 UTC (permalink / raw)
  To: Vishal Annapurve, x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Shuah Khan

On 5/10/22 6:08 PM, Vishal Annapurve wrote:
> From: Austin Diviness <diviness@google.com>
> 
> Adds ability to run priv_memfd_test test suite across various sizes of pages
> for shared/private memory. Shared and private memory can be allocated
> with different sized pages. In order to verify that there isn't a
> behavior change based on different page sizes, this change runs the
> tests using the currently supported permutations.
> 
> Adds command line flags to adjust whether the tests should run with
> hugepages backing the test memory.
> 
> Signed-off-by: Austin Diviness <diviness@google.com>
> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---

Please see comments about coding style related comments on other
patches in this series.

thanks,
-- Shuah


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test avoiding double allocation
  2022-05-11  0:08 ` [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test avoiding double allocation Vishal Annapurve
@ 2022-05-12 17:52   ` Shuah Khan
  0 siblings, 0 replies; 24+ messages in thread
From: Shuah Khan @ 2022-05-12 17:52 UTC (permalink / raw)
  To: Vishal Annapurve, x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shauh, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, Shuah Khan

On 5/10/22 6:08 PM, Vishal Annapurve wrote:
> Add a memory conversion test without leading to double allocation
> of memory backing gpa ranges.
> 

Rather cryptic. Please add more details on why this test is needed
and what it does.

> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---

Please see comments about coding style related comments on other
patches in this series.

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
                   ` (8 preceding siblings ...)
  2022-05-11  0:08 ` [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test without " Vishal Annapurve
@ 2022-05-12 18:04 ` Shuah Khan
  2022-05-12 18:18   ` Vishal Annapurve
  2022-07-20 22:19 ` Sean Christopherson
  10 siblings, 1 reply; 24+ messages in thread
From: Shuah Khan @ 2022-05-12 18:04 UTC (permalink / raw)
  To: Vishal Annapurve, x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, yang.zhong, drjones, ricarkol, aaronlewis,
	wei.w.wang, kirill.shutemov, corbet, hughd, jlayton, bfields,
	akpm, chao.p.peng, yu.c.zhang, jun.nakajima, dave.hansen,
	michael.roth, qperret, steven.price, ak, david, luto, vbabka,
	marcorr, erdemaktas, pgonda, nikunj, seanjc, diviness,
	Shuah Khan, shuah

On 5/10/22 6:08 PM, Vishal Annapurve wrote:
> This v2 series implements selftests targeting the feature floated by Chao
> via:
> https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/
> 
> Below changes aim to test the fd based approach for guest private memory
> in context of normal (non-confidential) VMs executing on non-confidential
> platforms.
> 
> priv_memfd_test.c file adds a suite of selftests to access private memory
> from the guest via private/shared accesses and checking if the contents
> can be leaked to/accessed by vmm via shared memory view.
> 
> Updates in V2:
> 1) Tests are added to exercise implicit/explicit memory conversion paths.
> 2) Test is added to exercise UPM feature without double memory allocation.
> 

+ shuah <shuah@kernel.org>

Adding my correct kernel.org address. Please make sure to run
get_maintainers.pl to get the right addresses for maintainers.

thanks,
-- Shuah

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory
  2022-05-12 18:04 ` [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Shuah Khan
@ 2022-05-12 18:18   ` Vishal Annapurve
  0 siblings, 0 replies; 24+ messages in thread
From: Vishal Annapurve @ 2022-05-12 18:18 UTC (permalink / raw)
  To: Shuah Khan
  Cc: x86, kvm, linux-kernel, linux-kselftest, Paolo Bonzini,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, dave.hansen,
	H . Peter Anvin, yang.zhong, drjones, ricarkol, aaronlewis,
	wei.w.wang, Kirill A . Shutemov, Jonathan Corbet, Hugh Dickins,
	Jeff Layton, J . Bruce Fields, Andrew Morton, Chao Peng,
	Yu Zhang, Jun Nakajima, dave.hansen, michael.roth, qperret,
	Steven Price, ak, david, Andy Lutomirski, Vlastimil Babka,
	Marc Orr, Erdem Aktas, Peter Gonda, nikunj, Sean Christopherson,
	Austin Diviness, shuah

On Thu, May 12, 2022 at 11:04 AM Shuah Khan <skhan@linuxfoundation.org> wrote:
>
> + shuah <shuah@kernel.org>
>
> Adding  my correct kernel.org address. Please make sure to run
> get_maintainers.pl to get the right addresses for miantianers.
>

Thanks Shuah for the feedback. Apologies for the typo in the email address.

I will address the comments in the next version.

Regards,
Vishal

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory
  2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
                   ` (9 preceding siblings ...)
  2022-05-12 18:04 ` [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Shuah Khan
@ 2022-07-20 22:19 ` Sean Christopherson
  10 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2022-07-20 22:19 UTC (permalink / raw)
  To: Vishal Annapurve
  Cc: x86, kvm, linux-kernel, linux-kselftest, pbonzini, vkuznets,
	wanpengli, jmattson, joro, tglx, mingo, bp, dave.hansen, hpa,
	shauh, yang.zhong, drjones, ricarkol, aaronlewis, wei.w.wang,
	kirill.shutemov, corbet, hughd, jlayton, bfields, akpm,
	chao.p.peng, yu.c.zhang, jun.nakajima, dave.hansen, michael.roth,
	qperret, steven.price, ak, david, luto, vbabka, marcorr,
	erdemaktas, pgonda, nikunj, diviness

On Wed, May 11, 2022, Vishal Annapurve wrote:
>  tools/testing/selftests/kvm/priv_memfd_test.c | 1359 +++++++++++++++++

Please don't create a megatest.  We have megatests for nVMX and nSVM in KVM-Unit-Test
and IMO they are a mistake.  E.g. to run a single test instead of the entire suite,
the KUT tests provide a funky wildcard/filter syntax.  But the names of the tests
aren't discoverable, so inevitably I have to look at the source code to figure out
the exact name of the test I want to run.  And don't get me started on sub-tests
within sub-tests...

AFAICT, what you've proposed here doesn't provide any such filter mechanism.  And
I would rather we NOT start adding those to selftests, because we'd effectively be
reinventing the wheel _and_ dealing with strings in C is a pain.  Writing a script
to find and run all tests is trivial, e.g. grep the .gitignore to find tests for
the target arch.  Or when the system under test is different than the build system,
copy the binaries to a dedicated directory and run every binary in that directory.

Discovering and running a single test is similarly trivial.  For KUT, it's less
trivial because running a test involves invoking a VMM command line, and some of
the tests need specific command line parameters.  But for selftests, except for the
NX huge page test, they're all standalone and don't need additional setup.

And unlike KUT's nVMX and nSVM tests, which involve running hundreds of little
sub-tests with only minor differences in setup, these tests are largely independent,
i.e. you're not really getting much code reuse.

And if you split the tests up, then all of the inter-test namespacing goes away,
e.g. there is zero chance I will ever remember what "PSPAHCT" stands for.

+#define PSPAHCT_GUEST_STARTED                          0ULL
+#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED              1ULL
+#define PSPAHCT_GUEST_SHARED_MEM_UPDATED               2ULL
+#define PSPAHCT_GUEST_PRIVATE_MEM_UPDATED2             3ULL

If you find yourself doing a lot of copy+paste, then we should enhance the APIs
provided by the core infrastructure.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory
  2022-05-11  0:08 ` [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory Vishal Annapurve
  2022-05-12 17:29   ` Shuah Khan
@ 2022-07-20 23:03   ` Sean Christopherson
  2022-07-21 20:24     ` Vishal Annapurve
  1 sibling, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-07-20 23:03 UTC (permalink / raw)
  To: Vishal Annapurve
  Cc: x86, kvm, linux-kernel, linux-kselftest, pbonzini, vkuznets,
	wanpengli, jmattson, joro, tglx, mingo, bp, dave.hansen, hpa,
	shauh, yang.zhong, drjones, ricarkol, aaronlewis, wei.w.wang,
	kirill.shutemov, corbet, hughd, jlayton, bfields, akpm,
	chao.p.peng, yu.c.zhang, jun.nakajima, dave.hansen, michael.roth,
	qperret, steven.price, ak, david, luto, vbabka, marcorr,
	erdemaktas, pgonda, nikunj, diviness

On Wed, May 11, 2022, Vishal Annapurve wrote:
> Add KVM selftest to access private memory privately
> from the guest to test that memory updates from guest
> and userspace vmm don't affect each other.
> 
> Signed-off-by: Vishal Annapurve <vannapurve@google.com>
> ---
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  tools/testing/selftests/kvm/priv_memfd_test.c | 283 ++++++++++++++++++

If this name stays around in any form, just spell out "private".  The file system
can handle three more characters.

>  2 files changed, 284 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/priv_memfd_test.c
> +/* Guest code in selftests is loaded to guest memory using kvm_vm_elf_load

Kernel style (except for net/ apparently?) for multi-line comments is to have a
"blank" first line:

	/*
	 * blahal;sdkfjas;flkjasd;flkj;aslkfjdsa;lkfjsa;lkfjsa;dlkfjas;dlkfj
	 * as;dflkjasdf;lkasjdf;lkasdjf;lkasdjf;lkjsad;flkjasd;flkjas;dflkj
	 */

And if you haven't already, read through Documentation/process/coding-style.rst,
though I think this and indentation are the only glaring issues.

> + * which doesn't handle global offset table updates. Calling standard libc
> + * functions would normally result in referring to the global offset table.
> + * Adding O1 here seems to prohibit compiler from replacing the memory
> + * operations with standard libc functions such as memset.
> + */

Eww.  We should either fix kvm_vm_elf_load() or override the problematic libc
variants.  Playing games with per-function attributes is not maintainable.

> +static bool __attribute__((optimize("O1"))) do_mem_op(enum mem_op op,
> +		void *mem, uint64_t pat, uint32_t size)

Oof.  Don't be so aggressive in shortening names, _especially_ when there's no
established/universal abbreviation.  It took me forever to figure out that "pat"
is "pattern".  And for x86, "pat" is especially confusing because it is already
a very well-established name that just so happens to be relevant to memory types,
just a different kind of memory type...

> +{
> +	uint64_t *buf = (uint64_t *)mem;
> +	uint32_t chunk_size = sizeof(pat);
> +	uint64_t mem_addr = (uint64_t)mem;
> +
> +	if (((mem_addr % chunk_size) != 0) || ((size % chunk_size) != 0))

All the patterns are a repeating byte, why restrict this to 8-byte chunks?  Then
this confusing assert-but-not-an-assert goes away.

> +		return false;
> +
> +	for (uint32_t i = 0; i < (size / chunk_size); i++) {
> +		if (op == SET_PAT)
> +			buf[i] = pat;
> +		if (op == VERIFY_PAT) {
> +			if (buf[i] != pat)
> +				return false;

If overriding memset() and memcmp() doesn't work for whatever reason, add proper
helpers instead of a do_stuff() wrapper. 

> +		}
> +	}
> +
> +	return true;
> +}
> +
> +/* Test to verify guest private accesses on private memory with following steps:
> + * 1) Upon entry, guest signals VMM that it has started.
> + * 2) VMM populates the shared memory with known pattern and continues guest
> + *    execution.
> + * 3) Guest writes a different pattern on the private memory and signals VMM
> + *      that it has updated private memory.
> + * 4) VMM verifies its shared memory contents to be same as the data populated
> + *      in step 2 and continues guest execution.
> + * 5) Guest verifies its private memory contents to be same as the data
> + *      populated in step 3 and marks the end of the guest execution.
> + */
> +#define PMPAT_ID				0
> +#define PMPAT_DESC				"PrivateMemoryPrivateAccessTest"
> +
> +/* Guest code execution stages for private mem access test */
> +#define PMPAT_GUEST_STARTED			0ULL
> +#define PMPAT_GUEST_PRIV_MEM_UPDATED		1ULL
> +
> +static bool pmpat_handle_vm_stage(struct kvm_vm *vm,
> +			void *test_info,
> +			uint64_t stage)


Align parameters, both in prototypes and in invocations.  And don't wrap unnecessarily.

static bool pmpat_handle_vm_stage(struct kvm_vm *vm, void *test_info,
				  uint64_t stage)


Or even let that poke out (probably not in this case, but do keep in mind that the
80 char "limit" is a soft limit that can be broken if doing so yields more readable
code).

static bool pmpat_handle_vm_stage(struct kvm_vm *vm, void *test_info, uint64_t stage)

> +{
> +	void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
> +
> +	switch (stage) {
> +	case PMPAT_GUEST_STARTED: {
> +		/* Initialize the contents of shared memory */
> +		TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
> +			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
> +			"Shared memory update failure");

Align indentation (here and many other places).

> +		VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED);
> +		break;
> +	}
> +	case PMPAT_GUEST_PRIV_MEM_UPDATED: {
> +		/* verify host updated data is still intact */
> +		TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
> +			TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
> +			"Shared memory view mismatch");
> +		VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED);
> +		break;
> +	}
> +	default:
> +		printf("Unhandled VM stage %ld\n", stage);
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +static void pmpat_guest_code(void)
> +{
> +	void *priv_mem = (void *)TEST_MEM_GPA;
> +	int ret;
> +
> +	GUEST_SYNC(PMPAT_GUEST_STARTED);
> +
> +	/* Mark the GPA range to be treated as always accessed privately */
> +	ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
> +		TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
> +		KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
> +	GUEST_ASSERT_1(ret == 0, ret);

"!ret" instead of "ret == 0"

> +
> +	GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2,
> +			TEST_MEM_SIZE));
> +	GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED);
> +
> +	GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem,
> +			TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
> +
> +	GUEST_DONE();
> +}
> +
> +static struct test_run_helper priv_memfd_testsuite[] = {
> +	[PMPAT_ID] = {
> +		.test_desc = PMPAT_DESC,
> +		.vmst_handler = pmpat_handle_vm_stage,
> +		.guest_fn = pmpat_guest_code,
> +	},
> +};

...

> +/* Do private access to the guest's private memory */
> +static void setup_and_execute_test(uint32_t test_id)

This helper appears to be the bulk of the shared code between tests.  This can
and should be a helper to create a VM with private memory.  Not sure what to call
such a helper, maybe vm_create_with_private_memory()?  A little verbose, but
literal isn't always bad.
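[For illustration only: an untested sketch of such a helper, factoring out the
setup from setup_and_execute_test() below.  It reuses the APIs and constants
from this patch (vm_create_default(), priv_memory_region_add(), TEST_MEM_*);
the helper name and exact signature are assumptions.]

```c
/* Sketch: create a VM whose test memslot is backed by private memory. */
static struct kvm_vm *vm_create_with_private_memory(void *guest_code,
						    void **shared_mem,
						    int *priv_memfd)
{
	struct kvm_vm *vm = vm_create_default(VCPU_ID, 0, guest_code);

	/* Shared view, accessible to the VMM. */
	*shared_mem = mmap(NULL, TEST_MEM_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	TEST_ASSERT(*shared_mem != MAP_FAILED, "Failed to mmap() host");

	/* Private view, backed by an inaccessible memfd. */
	*priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
	TEST_ASSERT(*priv_memfd != -1, "Failed to create priv_memfd");
	TEST_ASSERT(!fallocate(*priv_memfd, 0, 0, TEST_MEM_SIZE),
		    "fallocate failed");

	priv_memory_region_add(vm, *shared_mem, TEST_MEM_SLOT, TEST_MEM_SIZE,
			       TEST_MEM_GPA, *priv_memfd, 0);
	virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
		 TEST_MEM_SIZE / vm_get_page_size(vm));

	return vm;
}
```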

> +{
> +	struct kvm_vm *vm;
> +	int priv_memfd;
> +	int ret;
> +	void *shared_mem;
> +	struct kvm_enable_cap cap;
> +
> +	vm = vm_create_default(VCPU_ID, 0,
> +				priv_memfd_testsuite[test_id].guest_fn);
> +
> +	/* Allocate shared memory */
> +	shared_mem = mmap(NULL, TEST_MEM_SIZE,
> +			PROT_READ | PROT_WRITE,
> +			MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
> +	TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
> +
> +	/* Allocate private memory */
> +	priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
> +	TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd");
> +	ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE);
> +	TEST_ASSERT(ret != -1, "fallocate failed");
> +
> +	priv_memory_region_add(vm, shared_mem,
> +				TEST_MEM_SLOT, TEST_MEM_SIZE,
> +				TEST_MEM_GPA, priv_memfd, 0);
> +
> +	pr_info("Mapping test memory pages 0x%x page_size 0x%x\n",
> +					TEST_MEM_SIZE/vm_get_page_size(vm),
> +					vm_get_page_size(vm));
> +	virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
> +					(TEST_MEM_SIZE/vm_get_page_size(vm)));
> +
> +	/* Enable exit on KVM_HC_MAP_GPA_RANGE */
> +	pr_info("Enabling exit on map_gpa_range hypercall\n");
> +	ret = ioctl(vm_get_fd(vm), KVM_CHECK_EXTENSION, KVM_CAP_EXIT_HYPERCALL);
> +	TEST_ASSERT(ret & (1 << KVM_HC_MAP_GPA_RANGE),
> +				"VM exit on MAP_GPA_RANGE HC not supported");

Impressively bizarre indentation :-)

> +	cap.cap = KVM_CAP_EXIT_HYPERCALL;
> +	cap.flags = 0;
> +	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
> +	ret = ioctl(vm_get_fd(vm), KVM_ENABLE_CAP, &cap);
> +	TEST_ASSERT(ret == 0,
> +		"Failed to enable exit on MAP_GPA_RANGE hypercall\n");
> +
> +	priv_memfd_testsuite[test_id].shared_mem = shared_mem;
> +	priv_memfd_testsuite[test_id].priv_memfd = priv_memfd;
> +	vcpu_work(vm, test_id);
> +
> +	munmap(shared_mem, TEST_MEM_SIZE);
> +	priv_memfd_testsuite[test_id].shared_mem = NULL;
> +	close(priv_memfd);
> +	priv_memfd_testsuite[test_id].priv_memfd = -1;
> +	kvm_vm_free(vm);
> +}


* Re: [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory
  2022-07-20 23:03   ` Sean Christopherson
@ 2022-07-21 20:24     ` Vishal Annapurve
  2022-07-21 21:16       ` Sean Christopherson
  0 siblings, 1 reply; 24+ messages in thread
From: Vishal Annapurve @ 2022-07-21 20:24 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: x86, kvm list, LKML, linux-kselftest, Paolo Bonzini,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, dave.hansen,
	H . Peter Anvin, shauh, yang.zhong, drjones, Ricardo Koller,
	Aaron Lewis, wei.w.wang, Kirill A . Shutemov, Jonathan Corbet,
	Hugh Dickins, Jeff Layton, J . Bruce Fields, Andrew Morton,
	Chao Peng, Yu Zhang, Jun Nakajima, Dave Hansen, Michael Roth,
	Quentin Perret, Steven Price, Andi Kleen, David Hildenbrand,
	Andy Lutomirski, Vlastimil Babka, Marc Orr, Erdem Aktas,
	Peter Gonda, Nikunj A. Dadhania, Austin Diviness

On Wed, Jul 20, 2022 at 4:03 PM Sean Christopherson <seanjc@google.com> wrote:
> ...
> > + * which doesn't handle global offset table updates. Calling standard libc
> > + * functions would normally result in referring to the global offset table.
> > + * Adding O1 here seems to prohibit compiler from replacing the memory
> > + * operations with standard libc functions such as memset.
> > + */
>
> Eww.  We should either fix kvm_vm_elf_load() or override the problematic libc
> variants.  Playing games with per-function attributes is not maintainable.
>

I will try to spend more time on how kvm_vm_elf_load can be modified
to handle GOT fixups in different scenarios, including
statically/dynamically linked selftest binaries, as I currently recall
limited information here.

But modifying kvm_vm_elf_load to fixup GOT entries will be
insufficient as guest VM code (possibly whole selftest binary) will
need to be compiled with flags that allow memset/memcpy
implementations to work with specific guest VM configurations e.g. AVX
extension. Same concern is outlined in
https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kvm/lib/x86_64/svm.c#L64.

Would it be ok to maintain selftest binary compilation flags based on
guest VM configurations?

> > +static bool __attribute__((optimize("O1"))) do_mem_op(enum mem_op op,
> > +             void *mem, uint64_t pat, uint32_t size)
>
> Oof.  Don't be so aggressive in shortening names, _especially_ when there's no
> established/universal abbreviation.  It took me forever to figure out that "pat"
> is "pattern".  And for x86, "pat" is especially confusing because it is already
> a very well-established name that just so happens to be relevant to memory types,
> just a different kind of memory type...
>
> > +{
> > +     uint64_t *buf = (uint64_t *)mem;
> > +     uint32_t chunk_size = sizeof(pat);
> > +     uint64_t mem_addr = (uint64_t)mem;
> > +
> > +     if (((mem_addr % chunk_size) != 0) || ((size % chunk_size) != 0))
>
> All the patterns are a repeating byte, why restrict this to 8-byte chunks?  Then
> this confusing assert-but-not-an-assert goes away.
>
> > +             return false;
> > +
> > +     for (uint32_t i = 0; i < (size / chunk_size); i++) {
> > +             if (op == SET_PAT)
> > +                     buf[i] = pat;
> > +             if (op == VERIFY_PAT) {
> > +                     if (buf[i] != pat)
> > +                             return false;
>
> If overriding memset() and memcmp() doesn't work for whatever reason, add proper
> helpers instead of a do_stuff() wrapper.
>
> > +             }
> > +     }
> > +
> > +     return true;
> > +}
> > +
> > +/* Test to verify guest private accesses on private memory with following steps:
> > + * 1) Upon entry, guest signals VMM that it has started.
> > + * 2) VMM populates the shared memory with known pattern and continues guest
> > + *    execution.
> > + * 3) Guest writes a different pattern on the private memory and signals VMM
> > + *      that it has updated private memory.
> > + * 4) VMM verifies its shared memory contents to be same as the data populated
> > + *      in step 2 and continues guest execution.
> > + * 5) Guest verifies its private memory contents to be same as the data
> > + *      populated in step 3 and marks the end of the guest execution.
> > + */
> > +#define PMPAT_ID                             0
> > +#define PMPAT_DESC                           "PrivateMemoryPrivateAccessTest"
> > +
> > +/* Guest code execution stages for private mem access test */
> > +#define PMPAT_GUEST_STARTED                  0ULL
> > +#define PMPAT_GUEST_PRIV_MEM_UPDATED         1ULL
> > +
> > +static bool pmpat_handle_vm_stage(struct kvm_vm *vm,
> > +                     void *test_info,
> > +                     uint64_t stage)
>
>
> Align parameters, both in prototypes and in invocations.  And don't wrap unnecessarily.
>
> static bool pmpat_handle_vm_stage(struct kvm_vm *vm, void *test_info,
>                                   uint64_t stage)
>
>
> Or even let that poke out (probably not in this case, but do keep in mind that the
> 80 char "limit" is a soft limit that can be broken if doing so yields more readable
> code).
>
> static bool pmpat_handle_vm_stage(struct kvm_vm *vm, void *test_info, uint64_t stage)
>
> > +{
> > +     void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
> > +
> > +     switch (stage) {
> > +     case PMPAT_GUEST_STARTED: {
> > +             /* Initialize the contents of shared memory */
> > +             TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
> > +                     TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
> > +                     "Shared memory update failure");
>
> Align indentation (here and many other places).
>
> > +             VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED);
> > +             break;
> > +     }
> > +     case PMPAT_GUEST_PRIV_MEM_UPDATED: {
> > +             /* verify host updated data is still intact */
> > +             TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
> > +                     TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
> > +                     "Shared memory view mismatch");
> > +             VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED);
> > +             break;
> > +     }
> > +     default:
> > +             printf("Unhandled VM stage %ld\n", stage);
> > +             return false;
> > +     }
> > +
> > +     return true;
> > +}
> > +
> > +static void pmpat_guest_code(void)
> > +{
> > +     void *priv_mem = (void *)TEST_MEM_GPA;
> > +     int ret;
> > +
> > +     GUEST_SYNC(PMPAT_GUEST_STARTED);
> > +
> > +     /* Mark the GPA range to be treated as always accessed privately */
> > +     ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
> > +             TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
> > +             KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
> > +     GUEST_ASSERT_1(ret == 0, ret);
>
> "!ret" instead of "ret == 0"
>
> > +
> > +     GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2,
> > +                     TEST_MEM_SIZE));
> > +     GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED);
> > +
> > +     GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem,
> > +                     TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
> > +
> > +     GUEST_DONE();
> > +}
> > +
> > +static struct test_run_helper priv_memfd_testsuite[] = {
> > +     [PMPAT_ID] = {
> > +             .test_desc = PMPAT_DESC,
> > +             .vmst_handler = pmpat_handle_vm_stage,
> > +             .guest_fn = pmpat_guest_code,
> > +     },
> > +};
>
> ...
>
> > +/* Do private access to the guest's private memory */
> > +static void setup_and_execute_test(uint32_t test_id)
>
> This helper appears to be the bulk of the shared code between tests.  This can
> and should be a helper to create a VM with private memory.  Not sure what to call
> such a helper, maybe vm_create_with_private_memory()?  A little verbose, but
> literal isn't always bad.
>
> > +{
> > +     struct kvm_vm *vm;
> > +     int priv_memfd;
> > +     int ret;
> > +     void *shared_mem;
> > +     struct kvm_enable_cap cap;
> > +
> > +     vm = vm_create_default(VCPU_ID, 0,
> > +                             priv_memfd_testsuite[test_id].guest_fn);
> > +
> > +     /* Allocate shared memory */
> > +     shared_mem = mmap(NULL, TEST_MEM_SIZE,
> > +                     PROT_READ | PROT_WRITE,
> > +                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
> > +     TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
> > +
> > +     /* Allocate private memory */
> > +     priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
> > +     TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd");
> > +     ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE);
> > +     TEST_ASSERT(ret != -1, "fallocate failed");
> > +
> > +     priv_memory_region_add(vm, shared_mem,
> > +                             TEST_MEM_SLOT, TEST_MEM_SIZE,
> > +                             TEST_MEM_GPA, priv_memfd, 0);
> > +
> > +     pr_info("Mapping test memory pages 0x%x page_size 0x%x\n",
> > +                                     TEST_MEM_SIZE/vm_get_page_size(vm),
> > +                                     vm_get_page_size(vm));
> > +     virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
> > +                                     (TEST_MEM_SIZE/vm_get_page_size(vm)));
> > +
> > +     /* Enable exit on KVM_HC_MAP_GPA_RANGE */
> > +     pr_info("Enabling exit on map_gpa_range hypercall\n");
> > +     ret = ioctl(vm_get_fd(vm), KVM_CHECK_EXTENSION, KVM_CAP_EXIT_HYPERCALL);
> > +     TEST_ASSERT(ret & (1 << KVM_HC_MAP_GPA_RANGE),
> > +                             "VM exit on MAP_GPA_RANGE HC not supported");
>
> Impressively bizarre indentation :-)
>

Thanks Sean for all the feedback here. I will address the comments in
the next series.

Regards,
Vishal


* Re: [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory
  2022-07-21 20:24     ` Vishal Annapurve
@ 2022-07-21 21:16       ` Sean Christopherson
  0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2022-07-21 21:16 UTC (permalink / raw)
  To: Vishal Annapurve
  Cc: x86, kvm list, LKML, linux-kselftest, Paolo Bonzini,
	Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, dave.hansen,
	H . Peter Anvin, shauh, yang.zhong, drjones, Ricardo Koller,
	Aaron Lewis, wei.w.wang, Kirill A . Shutemov, Jonathan Corbet,
	Hugh Dickins, Jeff Layton, J . Bruce Fields, Andrew Morton,
	Chao Peng, Yu Zhang, Jun Nakajima, Dave Hansen, Michael Roth,
	Quentin Perret, Steven Price, Andi Kleen, David Hildenbrand,
	Andy Lutomirski, Vlastimil Babka, Marc Orr, Erdem Aktas,
	Peter Gonda, Nikunj A. Dadhania, Austin Diviness

On Thu, Jul 21, 2022, Vishal Annapurve wrote:
> On Wed, Jul 20, 2022 at 4:03 PM Sean Christopherson <seanjc@google.com> wrote:
> > ...
> > > + * which doesn't handle global offset table updates. Calling standard libc
> > > + * functions would normally result in referring to the global offset table.
> > > + * Adding O1 here seems to prohibit compiler from replacing the memory
> > > + * operations with standard libc functions such as memset.
> > > + */
> >
> > Eww.  We should either fix kvm_vm_elf_load() or override the problematic libc
> > variants.  Playing games with per-function attributes is not maintainable.
> >
> 
> I will try to spend more time on how kvm_vm_elf_load can be modified
> to handle GOT fixups in different scenarios, including
> statically/dynamically linked selftest binaries, as I currently recall
> limited information here.
> 
> But modifying kvm_vm_elf_load to fixup GOT entries will be
> insufficient as guest VM code (possibly whole selftest binary) will
> need to be compiled with flags that allow memset/memcpy
> implementations to work with specific guest VM configurations e.g. AVX
> extension. Same concern is outlined in
> https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kvm/lib/x86_64/svm.c#L64.
> 
> Would it be ok to maintain selftest binary compilation flags based on
> guest VM configurations?

No, we should instead define/implement versions of memset/memcpy that are
guaranteed to be guest-friendly, either explicitly by selftests or implicitly
by compiler builtins, e.g. see arch/x86/boot/string.h.
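[For illustration, not from the thread: freestanding replacements in the spirit
of arch/x86/boot/string.h could be plain byte loops that never call into libc,
so guest code works regardless of GOT fixups or host-only CPU features such as
AVX.  This is a sketch; the names are hypothetical and the eventual selftest
versions may differ.]

```c
#include <stddef.h>

/*
 * Plain-loop memset/memcpy with no libc dependency.  The volatile
 * qualifier discourages the compiler from replacing the loop with a
 * call to the very libc memset/memcpy these are meant to avoid.
 */
static void *guest_memset(void *s, int c, size_t n)
{
	volatile unsigned char *p = s;

	while (n--)
		*p++ = (unsigned char)c;
	return s;
}

static void *guest_memcpy(void *dst, const void *src, size_t n)
{
	volatile unsigned char *d = dst;
	const unsigned char *s = src;

	while (n--)
		*d++ = *s++;
	return dst;
}
```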


end of thread, other threads:[~2022-07-21 21:16 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-11  0:08 [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Vishal Annapurve
2022-05-11  0:08 ` [RFC V2 PATCH 1/8] selftests: kvm: Fix inline assembly for hypercall Vishal Annapurve
2022-05-12 17:31   ` Shuah Khan
2022-05-11  0:08 ` [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory Vishal Annapurve
2022-05-12 17:29   ` Shuah Khan
2022-07-20 23:03   ` Sean Christopherson
2022-07-21 20:24     ` Vishal Annapurve
2022-07-21 21:16       ` Sean Christopherson
2022-05-11  0:08 ` [RFC V2 PATCH 3/8] selftests: kvm: priv_memfd_test: Add support for memory conversion Vishal Annapurve
2022-05-12 17:40   ` Shuah Khan
2022-05-11  0:08 ` [RFC V2 PATCH 4/8] selftests: kvm: priv_memfd_test: Add shared access test Vishal Annapurve
2022-05-12 17:45   ` Shuah Khan
2022-05-11  0:08 ` [RFC V2 PATCH 5/8] selftests: kvm: Add implicit memory conversion tests Vishal Annapurve
2022-05-12 17:48   ` Shuah Khan
2022-05-11  0:08 ` [RFC V2 PATCH 6/8] selftests: kvm: Add KVM_HC_MAP_GPA_RANGE hypercall test Vishal Annapurve
2022-05-12 17:50   ` Shuah Khan
2022-05-11  0:08 ` [RFC V2 PATCH 7/8] selftests: kvm: Add hugepage support to priv_memfd_test suite Vishal Annapurve
2022-05-12 17:51   ` Shuah Khan
2022-05-11  0:08 ` [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test avoiding double allocation Vishal Annapurve
2022-05-12 17:52   ` Shuah Khan
2022-05-11  0:08 ` [RFC V2 PATCH 8/8] selftests: kvm: priv_memfd: Add test without " Vishal Annapurve
2022-05-12 18:04 ` [RFC V2 PATCH 0/8] selftests: KVM: selftests for fd-based approach of supporting private memory Shuah Khan
2022-05-12 18:18   ` Vishal Annapurve
2022-07-20 22:19 ` Sean Christopherson
