* [RFC PATCH 00/10] Additional selftests for restrictedmem
From: Ackerley Tng @ 2023-03-16  0:30 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

Hello,

This series contains additional selftests for restrictedmem, prepared
to be used with the next iteration of the restrictedmem series
(i.e. the iteration after v10).

restrictedmem v10 is available at
https://lore.kernel.org/lkml/20221202061347.1070246-1-chao.p.peng@linux.intel.com/T/.

The tree can be found at
https://github.com/googleprodkernel/linux-cc/tree/restrictedmem-additional-selftests-rfc-v1/.

Dependencies
+ The next iteration of the restrictedmem series
    + branch: https://github.com/chao-p/linux/commits/privmem-v11.4
    + commit: https://github.com/chao-p/linux/tree/ddd2c92b268a2fdc6158f82a6169ad1a57f2a01d
+ Proposed fix to adjust VM's initial stack address to align with SysV
  ABI spec: https://lore.kernel.org/lkml/20230227180601.104318-1-ackerleytng@google.com/

Ackerley Tng (10):
  KVM: selftests: Test error message fixes for memfd_restricted
    selftests
  KVM: selftests: Test that ftruncate to non-page-aligned size on a
    restrictedmem fd should fail
  KVM: selftests: Test that VM private memory should not be readable
    from host
  KVM: selftests: Exercise restrictedmem allocation and truncation code
    after KVM invalidation code has been unbound
  KVM: selftests: Generalize private_mem_conversions_test for parallel
    execution
  KVM: selftests: Default private_mem_conversions_test to use 1 memslot
    for test data
  KVM: selftests: Add vm_userspace_mem_region_add_with_restrictedmem
  KVM: selftests: Default private_mem_conversions_test to use 1
    restrictedmem file for test data
  KVM: selftests: Add tests around sharing a restrictedmem fd
  KVM: selftests: Test KVM exit behavior for private memory/access

 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/kvm_util_base.h     |   4 +
 tools/testing/selftests/kvm/lib/kvm_util.c    |  46 ++-
 .../selftests/kvm/set_memory_region_test.c    |  29 +-
 .../kvm/x86_64/private_mem_conversions_test.c | 295 +++++++++++++++---
 .../kvm/x86_64/private_mem_kvm_exits_test.c   | 124 ++++++++
 tools/testing/selftests/vm/memfd_restricted.c |   9 +-
 7 files changed, 455 insertions(+), 53 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c

--
2.40.0.rc2.332.ga46443480c-goog


* [RFC PATCH 01/10] KVM: selftests: Test error message fixes for memfd_restricted selftests
From: Ackerley Tng @ 2023-03-16  0:30 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 tools/testing/selftests/vm/memfd_restricted.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/vm/memfd_restricted.c b/tools/testing/selftests/vm/memfd_restricted.c
index 3a556b570129..43a512f273f7 100644
--- a/tools/testing/selftests/vm/memfd_restricted.c
+++ b/tools/testing/selftests/vm/memfd_restricted.c
@@ -49,12 +49,12 @@ static void test_file_size(int fd)
 	}
 
 	if (sb.st_size != page_size) {
-		fail("unexpected file size after ftruncate");
+		fail("unexpected file size after ftruncate\n");
 		return;
 	}
 
 	if (!ftruncate(fd, page_size * 2)) {
-		fail("unexpected ftruncate\n");
+		fail("size of file cannot be changed once set\n");
 		return;
 	}
 
-- 
2.40.0.rc2.332.ga46443480c-goog



* [RFC PATCH 02/10] KVM: selftests: Test that ftruncate to non-page-aligned size on a restrictedmem fd should fail
From: Ackerley Tng @ 2023-03-16  0:30 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 tools/testing/selftests/vm/memfd_restricted.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/testing/selftests/vm/memfd_restricted.c b/tools/testing/selftests/vm/memfd_restricted.c
index 43a512f273f7..9c4e6a0becbc 100644
--- a/tools/testing/selftests/vm/memfd_restricted.c
+++ b/tools/testing/selftests/vm/memfd_restricted.c
@@ -38,6 +38,11 @@ static void test_file_size(int fd)
 {
 	struct stat sb;
 
+	if (!ftruncate(fd, page_size + 1)) {
+		fail("ftruncate to non page-aligned sizes should fail\n");
+		return;
+	}
+
 	if (ftruncate(fd, page_size)) {
 		fail("ftruncate failed\n");
 		return;
-- 
2.40.0.rc2.332.ga46443480c-goog



* [RFC PATCH 03/10] KVM: selftests: Test that VM private memory should not be readable from host
From: Ackerley Tng @ 2023-03-16  0:30 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

After VM memory is remapped as private memory and the guest has
written to that private memory, have the guest request the host to
read the corresponding HVA for that private memory.

The host should not be able to read the values in private memory.

This selftest shows that the guest's private memory contents are not
accessible to host userspace via the HVA.
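
For reference, a condensed sketch of the check added by this patch
(taken from the diff below; memcmp_ne_h() and the UCALL_R_PRIVATE
handling are new in this patch):

  /* Guest: convert the range to private, fill it, ask the host to peek. */
  kvm_hypercall_map_private(gpa, size);
  memset((void *)gpa, p2, size);
  REQUEST_HOST_R_PRIVATE(gpa, size, p2);

  /* Host: the HVA backing the private GPA must not expose the pattern. */
  uint8_t *hva = addr_gpa2hva(vm, uc.args[0]);

  memcmp_ne_h(hva, uc.args[2], uc.args[1]);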

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../kvm/x86_64/private_mem_conversions_test.c | 54 ++++++++++++++++---
 1 file changed, 48 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index ef9894340a2b..f2c1e4450b0e 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -47,6 +47,16 @@ static void memcmp_h(uint8_t *mem, uint8_t pattern, size_t size)
 			    pattern, i, mem[i]);
 }
 
+static void memcmp_ne_h(uint8_t *mem, uint8_t pattern, size_t size)
+{
+	size_t i;
+
+	for (i = 0; i < size; i++)
+		TEST_ASSERT(mem[i] != pattern,
+			    "Expected not to find 0x%x at offset %lu but got 0x%x",
+			    pattern, i, mem[i]);
+}
+
 /*
  * Run memory conversion tests with explicit conversion:
  * Execute KVM hypercall to map/unmap gpa range which will cause userspace exit
@@ -64,8 +74,14 @@ static void memcmp_h(uint8_t *mem, uint8_t pattern, size_t size)
 
 #define GUEST_STAGE(o, s) { .offset = o, .size = s }
 
-#define GUEST_SYNC4(gpa, size, current_pattern, new_pattern) \
-	ucall(UCALL_SYNC, 4, gpa, size, current_pattern, new_pattern)
+#define UCALL_RW_SHARED (0xca11 - 0)
+#define UCALL_R_PRIVATE (0xca11 - 1)
+
+#define REQUEST_HOST_RW_SHARED(gpa, size, current_pattern, new_pattern) \
+	ucall(UCALL_RW_SHARED, 4, gpa, size, current_pattern, new_pattern)
+
+#define REQUEST_HOST_R_PRIVATE(gpa, size, expected_pattern) \
+	ucall(UCALL_R_PRIVATE, 3, gpa, size, expected_pattern)
 
 static void guest_code(void)
 {
@@ -86,7 +102,7 @@ static void guest_code(void)
 
 	/* Memory should be shared by default. */
 	memset((void *)DATA_GPA, ~init_p, DATA_SIZE);
-	GUEST_SYNC4(DATA_GPA, DATA_SIZE, ~init_p, init_p);
+	REQUEST_HOST_RW_SHARED(DATA_GPA, DATA_SIZE, ~init_p, init_p);
 	memcmp_g(DATA_GPA, init_p, DATA_SIZE);
 
 	for (i = 0; i < ARRAY_SIZE(stages); i++) {
@@ -113,6 +129,12 @@ static void guest_code(void)
 		kvm_hypercall_map_private(gpa, size);
 		memset((void *)gpa, p2, size);
 
+		/*
+		 * Host should not be able to read the values written to private
+		 * memory
+		 */
+		REQUEST_HOST_R_PRIVATE(gpa, size, p2);
+
 		/*
 		 * Verify that the private memory was set to pattern two, and
 		 * that shared memory still holds the initial pattern.
@@ -133,11 +155,20 @@ static void guest_code(void)
 				continue;
 
 			kvm_hypercall_map_shared(gpa + j, PAGE_SIZE);
-			GUEST_SYNC4(gpa + j, PAGE_SIZE, p1, p3);
+			REQUEST_HOST_RW_SHARED(gpa + j, PAGE_SIZE, p1, p3);
 
 			memcmp_g(gpa + j, p3, PAGE_SIZE);
 		}
 
+		/*
+		 * Even-number pages are still mapped as private, host should
+		 * not be able to read those values.
+		 */
+		for (j = 0; j < size; j += PAGE_SIZE) {
+			if (!((j >> PAGE_SHIFT) & 1))
+				REQUEST_HOST_R_PRIVATE(gpa + j, PAGE_SIZE, p2);
+		}
+
 		/*
 		 * Convert the entire region back to shared, explicitly write
 		 * pattern three to fill in the even-number frames before
@@ -145,7 +176,7 @@ static void guest_code(void)
 		 */
 		kvm_hypercall_map_shared(gpa, size);
 		memset((void *)gpa, p3, size);
-		GUEST_SYNC4(gpa, size, p3, p4);
+		REQUEST_HOST_RW_SHARED(gpa, size, p3, p4);
 		memcmp_g(gpa, p4, size);
 
 		/* Reset the shared memory back to the initial pattern. */
@@ -209,7 +240,18 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type)
 		switch (get_ucall(vcpu, &uc)) {
 		case UCALL_ABORT:
 			REPORT_GUEST_ASSERT_4(uc, "%lx %lx %lx %lx");
-		case UCALL_SYNC: {
+		case UCALL_R_PRIVATE: {
+			uint8_t *hva = addr_gpa2hva(vm, uc.args[0]);
+			uint64_t size = uc.args[1];
+
+			/*
+			 * Try to read hva for private gpa from host, should not
+			 * be able to read private data
+			 */
+			memcmp_ne_h(hva, uc.args[2], size);
+			break;
+		}
+		case UCALL_RW_SHARED: {
 			uint8_t *hva = addr_gpa2hva(vm, uc.args[0]);
 			uint64_t size = uc.args[1];
 
-- 
2.40.0.rc2.332.ga46443480c-goog



* [RFC PATCH 04/10] KVM: selftests: Exercise restrictedmem allocation and truncation code after KVM invalidation code has been unbound
From: Ackerley Tng @ 2023-03-16  0:30 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

The kernel interfaces restrictedmem_bind and restrictedmem_unbind are
used by KVM to bind/unbind KVM functions to restrictedmem's
invalidate_start and invalidate_end callbacks.

After the KVM VM is freed, the KVM functions should have been unbound
from the restrictedmem fd's callbacks.

In this test, we use fallocate on the restrictedmem fd to allocate
backing memory and then punch holes in it, and we expect no problems
(e.g. crashes) once the KVM functions have been unbound.
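
A condensed sketch of what the new test_invalidation_code_unbound()
does (fd and offset are saved from the memslot before the VM is
freed):

  kvm_vm_free(vm);        /* KVM's invalidation callbacks are now unbound */

  /* Allocate backing memory, then punch a hole, on the unbound fd. */
  if (fallocate(fd, 0, offset, DATA_SIZE))
          TEST_FAIL("Unexpected error in fallocate");
  if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                offset, DATA_SIZE))
          TEST_FAIL("Unexpected error in fallocate");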

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../kvm/x86_64/private_mem_conversions_test.c | 26 ++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index f2c1e4450b0e..7741916818db 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -203,6 +203,30 @@ static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
 	run->hypercall.ret = 0;
 }
 
+static void test_invalidation_code_unbound(struct kvm_vm *vm)
+{
+	uint32_t fd;
+	uint64_t offset;
+	struct userspace_mem_region *region;
+
+	region = memslot2region(vm, DATA_SLOT);
+	fd = region->region.restrictedmem_fd;
+	offset = region->region.restrictedmem_offset;
+
+	kvm_vm_free(vm);
+
+	/*
+	 * At this point the KVM invalidation code should have been unbound from
+	 * the vm. We do allocation and truncation to exercise the restrictedmem
+	 * code. There should be no issues after the unbinding happens.
+	 */
+	if (fallocate(fd, 0, offset, DATA_SIZE))
+		TEST_FAIL("Unexpected error in fallocate");
+	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+		      offset, DATA_SIZE))
+		TEST_FAIL("Unexpected error in fallocate");
+}
+
 static void test_mem_conversions(enum vm_mem_backing_src_type src_type)
 {
 	struct kvm_vcpu *vcpu;
@@ -270,7 +294,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type)
 	}
 
 done:
-	kvm_vm_free(vm);
+	test_invalidation_code_unbound(vm);
 }
 
 int main(int argc, char *argv[])
-- 
2.40.0.rc2.332.ga46443480c-goog



* [RFC PATCH 05/10] KVM: selftests: Generalize private_mem_conversions_test for parallel execution
From: Ackerley Tng @ 2023-03-16  0:30 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

Run the private/shared memory conversion tests on multiple vCPUs in
parallel to stress-test the restrictedmem subsystem: each vCPU
converts a non-overlapping GPA range backed by its own memslot.
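
Each vCPU is given its own non-overlapping GPA range and memslot, and
one pthread per vCPU drives the conversion loop. A condensed sketch
from the diff (DATA_GPA_SPACING equals DATA_SIZE):

  /* vCPU n tests GPAs [DATA_GPA_BASE + n * DATA_GPA_SPACING, +DATA_SIZE) */
  static uint64_t data_gpa_base_for_vcpu_id(uint8_t n)
  {
          return DATA_GPA_BASE + n * DATA_GPA_SPACING;
  }

  for (i = 0; i < nr_vcpus; i++)
          add_memslot_for_vcpu(vm, src_type, i);

  /* One thread per vCPU runs test_mem_conversions_for_vcpu() in parallel. */
  for (i = 0; i < nr_vcpus; i++)
          pthread_create(&threads[i], NULL, thread_function, &args[i]);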

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../kvm/x86_64/private_mem_conversions_test.c | 203 +++++++++++++-----
 1 file changed, 150 insertions(+), 53 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index 7741916818db..14aa90e9a89b 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -5,6 +5,7 @@
 #define _GNU_SOURCE /* for program_invocation_short_name */
 #include <fcntl.h>
 #include <limits.h>
+#include <pthread.h>
 #include <sched.h>
 #include <signal.h>
 #include <stdio.h>
@@ -22,9 +23,10 @@
 #include <kvm_util.h>
 #include <processor.h>
 
-#define DATA_SLOT	10
-#define DATA_GPA	((uint64_t)(1ull << 32))
-#define DATA_SIZE	((uint64_t)(SZ_2M + PAGE_SIZE))
+#define DATA_SLOT_BASE   10
+#define DATA_GPA_BASE    ((uint64_t)(1ull << 32))
+#define DATA_SIZE        ((uint64_t)(SZ_2M + PAGE_SIZE))
+#define DATA_GPA_SPACING DATA_SIZE
 
 /* Horrific macro so that the line info is captured accurately :-( */
 #define memcmp_g(gpa, pattern,  size)				\
@@ -83,7 +85,9 @@ static void memcmp_ne_h(uint8_t *mem, uint8_t pattern, size_t size)
 #define REQUEST_HOST_R_PRIVATE(gpa, size, expected_pattern) \
 	ucall(UCALL_R_PRIVATE, 3, gpa, size, expected_pattern)
 
-static void guest_code(void)
+const uint8_t init_p = 0xcc;
+
+static void guest_test_conversions(uint64_t gpa_base)
 {
 	struct {
 		uint64_t offset;
@@ -96,17 +100,11 @@ static void guest_code(void)
 		GUEST_STAGE(PAGE_SIZE, SZ_2M),
 		GUEST_STAGE(SZ_2M, PAGE_SIZE),
 	};
-	const uint8_t init_p = 0xcc;
 	uint64_t j;
 	int i;
 
-	/* Memory should be shared by default. */
-	memset((void *)DATA_GPA, ~init_p, DATA_SIZE);
-	REQUEST_HOST_RW_SHARED(DATA_GPA, DATA_SIZE, ~init_p, init_p);
-	memcmp_g(DATA_GPA, init_p, DATA_SIZE);
-
 	for (i = 0; i < ARRAY_SIZE(stages); i++) {
-		uint64_t gpa = DATA_GPA + stages[i].offset;
+		uint64_t gpa = gpa_base + stages[i].offset;
 		uint64_t size = stages[i].size;
 		uint8_t p1 = 0x11;
 		uint8_t p2 = 0x22;
@@ -140,11 +138,11 @@ static void guest_code(void)
 		 * that shared memory still holds the initial pattern.
 		 */
 		memcmp_g(gpa, p2, size);
-		if (gpa > DATA_GPA)
-			memcmp_g(DATA_GPA, init_p, gpa - DATA_GPA);
-		if (gpa + size < DATA_GPA + DATA_SIZE)
+		if (gpa > gpa_base)
+			memcmp_g(gpa_base, init_p, gpa - gpa_base);
+		if (gpa + size < gpa_base + DATA_SIZE)
 			memcmp_g(gpa + size, init_p,
-				 (DATA_GPA + DATA_SIZE) - (gpa + size));
+				 (gpa_base + DATA_SIZE) - (gpa + size));
 
 		/*
 		 * Convert odd-number page frames back to shared to verify KVM
@@ -182,6 +180,19 @@ static void guest_code(void)
 		/* Reset the shared memory back to the initial pattern. */
 		memset((void *)gpa, init_p, size);
 	}
+}
+
+static void guest_code(uint64_t gpa_base, uint32_t iterations)
+{
+	int i;
+
+	/* Memory should be shared by default. */
+	memset((void *)gpa_base, ~init_p, DATA_SIZE);
+	REQUEST_HOST_RW_SHARED(gpa_base, DATA_SIZE, ~init_p, init_p);
+	memcmp_g(gpa_base, init_p, DATA_SIZE);
+
+	for (i = 0; i < iterations; i++)
+		guest_test_conversions(gpa_base);
 
 	GUEST_DONE();
 }
@@ -203,15 +214,27 @@ static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
 	run->hypercall.ret = 0;
 }
 
-static void test_invalidation_code_unbound(struct kvm_vm *vm)
+static uint64_t data_gpa_base_for_vcpu_id(uint8_t n)
+{
+	return DATA_GPA_BASE + n * DATA_GPA_SPACING;
+}
+
+static void test_invalidation_code_unbound(struct kvm_vm *vm, uint8_t nr_memslots,
+					   off_t data_size)
 {
-	uint32_t fd;
-	uint64_t offset;
-	struct userspace_mem_region *region;
+	struct {
+		uint32_t fd;
+		uint64_t offset;
+	} params[KVM_MAX_VCPUS];
+	int i;
+
+	for (i = 0; i < nr_memslots; i++) {
+		struct userspace_mem_region *region;
 
-	region = memslot2region(vm, DATA_SLOT);
-	fd = region->region.restrictedmem_fd;
-	offset = region->region.restrictedmem_offset;
+		region = memslot2region(vm, DATA_SLOT_BASE + i);
+		params[i].fd = region->region.restrictedmem_fd;
+		params[i].offset = region->region.restrictedmem_offset;
+	}
 
 	kvm_vm_free(vm);
 
@@ -220,33 +243,24 @@ static void test_invalidation_code_unbound(struct kvm_vm *vm)
 	 * the vm. We do allocation and truncation to exercise the restrictedmem
 	 * code. There should be no issues after the unbinding happens.
 	 */
-	if (fallocate(fd, 0, offset, DATA_SIZE))
-		TEST_FAIL("Unexpected error in fallocate");
-	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
-		      offset, DATA_SIZE))
-		TEST_FAIL("Unexpected error in fallocate");
+	for (i = 0; i < nr_memslots; i++) {
+		if (fallocate(params[i].fd, 0, params[i].offset, data_size))
+			TEST_FAIL("Unexpected error in fallocate");
+		if (fallocate(params[i].fd,
+			      FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+			      params[i].offset, data_size))
+			TEST_FAIL("Unexpected error in fallocate");
+	}
+
 }
 
-static void test_mem_conversions(enum vm_mem_backing_src_type src_type)
+static void test_mem_conversions_for_vcpu(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
+					  uint32_t iterations)
 {
-	struct kvm_vcpu *vcpu;
 	struct kvm_run *run;
-	struct kvm_vm *vm;
 	struct ucall uc;
 
-	const struct vm_shape shape = {
-		.mode = VM_MODE_DEFAULT,
-		.type = KVM_X86_PROTECTED_VM,
-	};
-
-	vm = vm_create_shape_with_one_vcpu(shape, &vcpu, guest_code);
-
-	vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
-
-	vm_userspace_mem_region_add(vm, src_type, DATA_GPA, DATA_SLOT,
-				    DATA_SIZE / vm->page_size, KVM_MEM_PRIVATE);
-
-	virt_map(vm, DATA_GPA, DATA_GPA, DATA_SIZE / vm->page_size);
+	vcpu_args_set(vcpu, 2, data_gpa_base_for_vcpu_id(vcpu->id), iterations);
 
 	run = vcpu->run;
 	for ( ;; ) {
@@ -287,40 +301,123 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type)
 			break;
 		}
 		case UCALL_DONE:
-			goto done;
+			return;
 		default:
 			TEST_FAIL("Unknown ucall 0x%lx.", uc.cmd);
 		}
 	}
+}
+
+struct thread_args {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	uint32_t iterations;
+};
+
+void *thread_function(void *input)
+{
+	struct thread_args *args = (struct thread_args *)input;
+
+	test_mem_conversions_for_vcpu(args->vm, args->vcpu, args->iterations);
+
+	return NULL;
+}
+
+static void add_memslot_for_vcpu(
+	struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, uint8_t vcpu_id)
+{
+	uint64_t gpa = data_gpa_base_for_vcpu_id(vcpu_id);
+	uint32_t slot = DATA_SLOT_BASE + vcpu_id;
+	uint64_t npages = DATA_SIZE / vm->page_size;
+
+	vm_userspace_mem_region_add(vm, src_type, gpa, slot, npages,
+				    KVM_MEM_PRIVATE);
+}
+
+static void test_mem_conversions(enum vm_mem_backing_src_type src_type,
+				 uint8_t nr_vcpus, uint32_t iterations)
+{
+	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
+	pthread_t threads[KVM_MAX_VCPUS];
+	struct thread_args args[KVM_MAX_VCPUS];
+	struct kvm_vm *vm;
+
+	int i;
+	int npages_for_all_vcpus;
+
+	const struct vm_shape shape = {
+		.mode = VM_MODE_DEFAULT,
+		.type = KVM_X86_PROTECTED_VM,
+	};
+
+	vm = __vm_create_with_vcpus(shape, nr_vcpus, 0, guest_code, vcpus);
+
+	vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
+
+	npages_for_all_vcpus = DATA_SIZE / vm->page_size * nr_vcpus;
+	virt_map(vm, DATA_GPA_BASE, DATA_GPA_BASE, npages_for_all_vcpus);
+
+	for (i = 0; i < nr_vcpus; i++)
+		add_memslot_for_vcpu(vm, src_type, i);
+
+	for (i = 0; i < nr_vcpus; i++) {
+		args[i].vm = vm;
+		args[i].vcpu = vcpus[i];
+		args[i].iterations = iterations;
+
+		pthread_create(&threads[i], NULL, thread_function, &args[i]);
+	}
+
+	for (i = 0; i < nr_vcpus; i++)
+		pthread_join(threads[i], NULL);
+
+	test_invalidation_code_unbound(vm, nr_vcpus, DATA_SIZE);
+}
 
-done:
-	test_invalidation_code_unbound(vm);
+static void usage(const char *command)
+{
+	puts("");
+	printf("usage: %s [-h] [-s mem-type] [-n number-of-vcpus] [-i number-of-iterations]\n",
+	       command);
+	puts("");
+	backing_src_help("-s");
+	puts("");
+	puts(" -n: specify the number of vcpus to run memory conversion");
+	puts("     tests in parallel on. (default: 2)");
+	puts("");
+	puts(" -i: specify the number iterations of memory conversion");
+	puts("     tests to run. (default: 10)");
+	puts("");
 }
 
 int main(int argc, char *argv[])
 {
 	enum vm_mem_backing_src_type src_type = DEFAULT_VM_MEM_SRC;
+	uint8_t nr_vcpus = 2;
+	uint32_t iterations = 10;
 	int opt;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_EXIT_HYPERCALL));
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_PROTECTED_VM));
 
-	while ((opt = getopt(argc, argv, "hs:")) != -1) {
+	while ((opt = getopt(argc, argv, "hs:n:i:")) != -1) {
 		switch (opt) {
+		case 'n':
+			nr_vcpus = atoi_positive("nr_vcpus", optarg);
+			break;
+		case 'i':
+			iterations = atoi_positive("iterations", optarg);
+			break;
 		case 's':
 			src_type = parse_backing_src_type(optarg);
 			break;
 		case 'h':
 		default:
-			puts("");
-			printf("usage: %s [-h] [-s mem-type]\n", argv[0]);
-			puts("");
-			backing_src_help("-s");
-			puts("");
+			usage(argv[0]);
 			exit(0);
 		}
 	}
 
-	test_mem_conversions(src_type);
+	test_mem_conversions(src_type, nr_vcpus, iterations);
 	return 0;
 }
-- 
2.40.0.rc2.332.ga46443480c-goog



* [RFC PATCH 06/10] KVM: selftests: Default private_mem_conversions_test to use 1 memslot for test data
From: Ackerley Tng @ 2023-03-16  0:30 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

Default the private/shared memory conversion tests to use a single
memslot shared by all vCPUs, while still executing on multiple vCPUs
in parallel, to stress-test the restrictedmem subsystem.

Also add a flag (-m) to allow one memslot per vCPU to be used instead.
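
A condensed sketch of the memslot setup from the diff: by default one
private memslot covers every vCPU's test range, and -m switches to one
memslot per vCPU:

  if (use_multiple_memslots) {
          for (i = 0; i < nr_vcpus; i++)
                  add_memslot_for_vcpu(vm, src_type, i);
  } else {
          /* One private memslot spanning all vCPUs' GPA ranges. */
          vm_userspace_mem_region_add(
                  vm, src_type, DATA_GPA_BASE, DATA_SLOT_BASE,
                  npages_for_all_vcpus, KVM_MEM_PRIVATE);
  }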

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../kvm/x86_64/private_mem_conversions_test.c | 30 +++++++++++++++----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index 14aa90e9a89b..afaf8d0e52e6 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -335,7 +335,8 @@ static void add_memslot_for_vcpu(
 }
 
 static void test_mem_conversions(enum vm_mem_backing_src_type src_type,
-				 uint8_t nr_vcpus, uint32_t iterations)
+				 uint8_t nr_vcpus, uint32_t iterations,
+				 bool use_multiple_memslots)
 {
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 	pthread_t threads[KVM_MAX_VCPUS];
@@ -355,6 +356,16 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type,
 	vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
 
 	npages_for_all_vcpus = DATA_SIZE / vm->page_size * nr_vcpus;
+
+	if (use_multiple_memslots) {
+		for (i = 0; i < nr_vcpus; i++)
+			add_memslot_for_vcpu(vm, src_type, i);
+	} else {
+		vm_userspace_mem_region_add(
+			vm, src_type, DATA_GPA_BASE, DATA_SLOT_BASE,
+			npages_for_all_vcpus, KVM_MEM_PRIVATE);
+	}
+
 	virt_map(vm, DATA_GPA_BASE, DATA_GPA_BASE, npages_for_all_vcpus);
 
 	for (i = 0; i < nr_vcpus; i++)
@@ -371,13 +382,16 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type,
 	for (i = 0; i < nr_vcpus; i++)
 		pthread_join(threads[i], NULL);
 
-	test_invalidation_code_unbound(vm, nr_vcpus, DATA_SIZE);
+	if (!use_multiple_memslots)
+		test_invalidation_code_unbound(vm, 1, DATA_SIZE * nr_vcpus);
+	else
+		test_invalidation_code_unbound(vm, nr_vcpus, DATA_SIZE);
 }
 
 static void usage(const char *command)
 {
 	puts("");
-	printf("usage: %s [-h] [-s mem-type] [-n number-of-vcpus] [-i number-of-iterations]\n",
+	printf("usage: %s [-h] [-m] [-s mem-type] [-n number-of-vcpus] [-i number-of-iterations]\n",
 	       command);
 	puts("");
 	backing_src_help("-s");
@@ -388,6 +402,8 @@ static void usage(const char *command)
 	puts(" -i: specify the number iterations of memory conversion");
 	puts("     tests to run. (default: 10)");
 	puts("");
+	puts(" -m: use multiple memslots (default: use 1 memslot)");
+	puts("");
 }
 
 int main(int argc, char *argv[])
@@ -395,12 +411,13 @@ int main(int argc, char *argv[])
 	enum vm_mem_backing_src_type src_type = DEFAULT_VM_MEM_SRC;
 	uint8_t nr_vcpus = 2;
 	uint32_t iterations = 10;
+	bool use_multiple_memslots = false;
 	int opt;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_EXIT_HYPERCALL));
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_PROTECTED_VM));
 
-	while ((opt = getopt(argc, argv, "hs:n:i:")) != -1) {
+	while ((opt = getopt(argc, argv, "mhs:n:i:")) != -1) {
 		switch (opt) {
 		case 'n':
 			nr_vcpus = atoi_positive("nr_vcpus", optarg);
@@ -411,6 +428,9 @@ int main(int argc, char *argv[])
 		case 's':
 			src_type = parse_backing_src_type(optarg);
 			break;
+		case 'm':
+			use_multiple_memslots = true;
+			break;
 		case 'h':
 		default:
 			usage(argv[0]);
@@ -418,6 +438,6 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	test_mem_conversions(src_type, nr_vcpus, iterations);
+	test_mem_conversions(src_type, nr_vcpus, iterations, use_multiple_memslots);
 	return 0;
 }
-- 
2.40.0.rc2.332.ga46443480c-goog



* [RFC PATCH 07/10] KVM: selftests: Add vm_userspace_mem_region_add_with_restrictedmem
From: Ackerley Tng @ 2023-03-16  0:31 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

Provide a new function that allows a restrictedmem fd and offset to be
specified explicitly when adding a memory region in selftests.

No functional change intended for vm_userspace_mem_region_add.
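
A minimal usage sketch (vm, src_type, gpa, slot, npages and size here
stand in for the caller's own values): create one restrictedmem fd and
hand different offsets to different memslots:

  int fd = memfd_restricted(0);

  /* Two private memslots backed by the same restrictedmem fd. */
  vm_userspace_mem_region_add_with_restrictedmem(
          vm, src_type, gpa, slot, npages, KVM_MEM_PRIVATE,
          fd, 0);
  vm_userspace_mem_region_add_with_restrictedmem(
          vm, src_type, gpa + size, slot + 1, npages, KVM_MEM_PRIVATE,
          fd, size);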

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h     |  4 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 46 +++++++++++++++++--
 2 files changed, 46 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index b6531a4063bb..c1ac82332ca4 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -486,6 +486,10 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	enum vm_mem_backing_src_type src_type,
 	uint64_t guest_paddr, uint32_t slot, uint64_t npages,
 	uint32_t flags);
+void vm_userspace_mem_region_add_with_restrictedmem(struct kvm_vm *vm,
+	enum vm_mem_backing_src_type src_type,
+	uint64_t guest_paddr, uint32_t slot, uint64_t npages,
+	uint32_t flags, int restrictedmem_fd, uint64_t restrictedmem_offset);
 
 void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index d0e6b10f140f..d6bfcfc5cdea 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -898,6 +898,43 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	enum vm_mem_backing_src_type src_type,
 	uint64_t guest_paddr, uint32_t slot, uint64_t npages,
 	uint32_t flags)
+{
+	int restrictedmem_fd;
+
+	restrictedmem_fd = flags & KVM_MEM_PRIVATE ? memfd_restricted(0) : 0;
+	vm_userspace_mem_region_add_with_restrictedmem(
+		vm, src_type, guest_paddr, slot, npages, flags,
+		restrictedmem_fd, 0);
+}
+
+/*
+ * VM Userspace Memory Region Add With restrictedmem
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   src_type - Storage source for this region.
+ *              NULL to use anonymous memory.
+ *   guest_paddr - Starting guest physical address
+ *   slot - KVM region slot
+ *   npages - Number of physical pages
+ *   flags - KVM memory region flags (e.g. KVM_MEM_LOG_DIRTY_PAGES)
+ *   restrictedmem_fd - restrictedmem_fd for use with restrictedmem
+ *   restrictedmem_offset - offset within restrictedmem_fd to be used
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Allocates a memory area of the number of pages specified by npages
+ * and maps it to the VM specified by vm, at a starting physical address
+ * given by guest_paddr.  The region is created with a KVM region slot
+ * given by slot, which must be unique and < KVM_MEM_SLOTS_NUM.  The
+ * region is created with the flags given by flags.
+ */
+void vm_userspace_mem_region_add_with_restrictedmem(struct kvm_vm *vm,
+	enum vm_mem_backing_src_type src_type,
+	uint64_t guest_paddr, uint32_t slot, uint64_t npages,
+	uint32_t flags, int restrictedmem_fd, uint64_t restrictedmem_offset)
 {
 	int ret;
 	struct userspace_mem_region *region;
@@ -1011,8 +1048,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	region->backing_src_type = src_type;
 
 	if (flags & KVM_MEM_PRIVATE) {
-		region->region.restrictedmem_fd = memfd_restricted(0);
-		region->region.restrictedmem_offset = 0;
+		region->region.restrictedmem_fd = restrictedmem_fd;
+		region->region.restrictedmem_offset = restrictedmem_offset;
 
 		TEST_ASSERT(region->region.restrictedmem_fd >= 0,
 			    "Failed to create restricted memfd");
@@ -1030,10 +1067,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION2 IOCTL failed,\n"
 		"  rc: %i errno: %i\n"
 		"  slot: %u flags: 0x%x\n"
-		"  guest_phys_addr: 0x%lx size: 0x%lx restricted fd: %d\n",
+		"  guest_phys_addr: 0x%lx size: 0x%lx\n"
+		"  restricted fd: %d restricted_offset: 0x%llx\n",
 		ret, errno, slot, flags,
 		guest_paddr, (uint64_t) region->region.memory_size,
-		region->region.restrictedmem_fd);
+		region->region.restrictedmem_fd, region->region.restrictedmem_offset);
 
 	/* Add to quick lookup data structures */
 	vm_userspace_mem_region_gpa_insert(&vm->regions.gpa_tree, region);
-- 
2.40.0.rc2.332.ga46443480c-goog



* [RFC PATCH 08/10] KVM: selftests: Default private_mem_conversions_test to use 1 restrictedmem file for test data
From: Ackerley Tng @ 2023-03-16  0:31 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

Default the private/shared memory conversion tests to use a single
restrictedmem file (even when multiple memslots are requested), while
executing on multiple vCPUs in parallel, to stress-test the
restrictedmem subsystem.

Also add a flag (-f) to allow a separate restrictedmem file to be used
for each memslot.
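
The per-memslot fd/offset selection reduces to the following
(condensed from the diff): with a single file, each memslot gets its
own offset, spaced DATA_GPA_SPACING apart; with -f, each memslot gets
its own file at offset 0:

  int fd = memfd_restricted(0);
  int offset = 0;

  for (i = 0; i < nr_vcpus; i++) {
          if (use_different_restrictedmem_files) {
                  if (i > 0)
                          fd = memfd_restricted(0);
          } else {
                  offset = i * DATA_GPA_SPACING;
          }

          add_memslot_for_vcpu(vm, src_type, i, fd, offset);
  }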

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../kvm/x86_64/private_mem_conversions_test.c | 52 ++++++++++++++-----
 1 file changed, 38 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index afaf8d0e52e6..ca30f0f05c39 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -324,7 +324,8 @@ void *thread_function(void *input)
 }
 
 static void add_memslot_for_vcpu(
-	struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, uint8_t vcpu_id)
+	struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, uint8_t vcpu_id,
+	int restrictedmem_fd, uint64_t restrictedmem_offset)
 {
 	uint64_t gpa = data_gpa_base_for_vcpu_id(vcpu_id);
 	uint32_t slot = DATA_SLOT_BASE + vcpu_id;
@@ -336,7 +337,8 @@ static void add_memslot_for_vcpu(
 
 static void test_mem_conversions(enum vm_mem_backing_src_type src_type,
 				 uint8_t nr_vcpus, uint32_t iterations,
-				 bool use_multiple_memslots)
+				 bool use_multiple_memslots,
+				 bool use_different_restrictedmem_files)
 {
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 	pthread_t threads[KVM_MAX_VCPUS];
@@ -356,21 +358,28 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type,
 	vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
 
 	npages_for_all_vcpus = DATA_SIZE / vm->page_size * nr_vcpus;
+	virt_map(vm, DATA_GPA_BASE, DATA_GPA_BASE, npages_for_all_vcpus);
 
 	if (use_multiple_memslots) {
-		for (i = 0; i < nr_vcpus; i++)
-			add_memslot_for_vcpu(vm, src_type, i);
+		int fd = memfd_restricted(0);
+		int offset = 0;
+
+		for (i = 0; i < nr_vcpus; i++) {
+			if (use_different_restrictedmem_files) {
+				if (i > 0)
+					fd = memfd_restricted(0);
+			} else {
+				offset = i * DATA_GPA_SPACING;
+			}
+
+			add_memslot_for_vcpu(vm, src_type, i, fd, offset);
+		}
 	} else {
 		vm_userspace_mem_region_add(
 			vm, src_type, DATA_GPA_BASE, DATA_SLOT_BASE,
 			npages_for_all_vcpus, KVM_MEM_PRIVATE);
 	}
 
-	virt_map(vm, DATA_GPA_BASE, DATA_GPA_BASE, npages_for_all_vcpus);
-
-	for (i = 0; i < nr_vcpus; i++)
-		add_memslot_for_vcpu(vm, src_type, i);
-
 	for (i = 0; i < nr_vcpus; i++) {
 		args[i].vm = vm;
 		args[i].vcpu = vcpus[i];
@@ -382,7 +391,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type,
 	for (i = 0; i < nr_vcpus; i++)
 		pthread_join(threads[i], NULL);
 
-	if (!use_multiple_memslots)
+	if (!use_multiple_memslots || !use_different_restrictedmem_files)
 		test_invalidation_code_unbound(vm, 1, DATA_SIZE * nr_vcpus);
 	else
 		test_invalidation_code_unbound(vm, nr_vcpus, DATA_SIZE);
@@ -391,8 +400,9 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type,
 static void usage(const char *command)
 {
 	puts("");
-	printf("usage: %s [-h] [-m] [-s mem-type] [-n number-of-vcpus] [-i number-of-iterations]\n",
-	       command);
+	printf("usage: %s\n", command);
+	printf("       [-h] [-m] [-f] [-s mem-type]\n");
+	printf("       [-n number-of-vcpus] [-i number-of-iterations]\n");
 	puts("");
 	backing_src_help("-s");
 	puts("");
@@ -404,6 +414,9 @@ static void usage(const char *command)
 	puts("");
 	puts(" -m: use multiple memslots (default: use 1 memslot)");
 	puts("");
+	puts(" -f: use different restrictedmem files for each memslot");
+	puts("     (default: use 1 restrictedmem file for all memslots)");
+	puts("");
 }
 
 int main(int argc, char *argv[])
@@ -412,12 +425,13 @@ int main(int argc, char *argv[])
 	uint8_t nr_vcpus = 2;
 	uint32_t iterations = 10;
 	bool use_multiple_memslots = false;
+	bool use_different_restrictedmem_files = false;
 	int opt;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_EXIT_HYPERCALL));
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_PROTECTED_VM));
 
-	while ((opt = getopt(argc, argv, "mhs:n:i:")) != -1) {
+	while ((opt = getopt(argc, argv, "fmhs:n:i:")) != -1) {
 		switch (opt) {
 		case 'n':
 			nr_vcpus = atoi_positive("nr_vcpus", optarg);
@@ -431,6 +445,9 @@ int main(int argc, char *argv[])
 		case 'm':
 			use_multiple_memslots = true;
 			break;
+		case 'f':
+			use_different_restrictedmem_files = true;
+			break;
 		case 'h':
 		default:
 			usage(argv[0]);
@@ -438,6 +455,13 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	test_mem_conversions(src_type, nr_vcpus, iterations, use_multiple_memslots);
+	if (!use_multiple_memslots && use_different_restrictedmem_files) {
+		printf("Overriding -f flag: ");
+		puts("Using just 1 restrictedmem file since only 1 memslot is to be used.");
+		use_different_restrictedmem_files = false;
+	}
+
+	test_mem_conversions(src_type, nr_vcpus, iterations, use_multiple_memslots,
+			     use_different_restrictedmem_files);
 	return 0;
 }
-- 
2.40.0.rc2.332.ga46443480c-goog



* [RFC PATCH 09/10] KVM: selftests: Add tests around sharing a restrictedmem fd
From: Ackerley Tng @ 2023-03-16  0:31 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

Add tests (see the sketch after this list) verifying that:

+ Different memslots in the same VM should be able to share a
  restrictedmem_fd
+ A second VM cannot reuse offsets in a restrictedmem_fd that are
  already bound to another VM
+ Different VMs should be able to share the same restrictedmem_fd, as
  long as the offsets in the restrictedmem_fd are different
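
Condensed from the diff, the sharing checks boil down to (fd is a
single restrictedmem fd, mem an anonymous mapping, and the
MEM_REGION_* constants come from the existing test):

  /* Same VM, second memslot, same fd, different offset: should succeed. */
  ret = __vm_set_user_memory_region2(vm, MEM_REGION_SLOT + 1, KVM_MEM_PRIVATE,
                                     MEM_REGION_GPA + MEM_REGION_SIZE,
                                     MEM_REGION_SIZE, mem + MEM_REGION_SIZE,
                                     fd, MEM_REGION_SIZE);
  TEST_ASSERT(!ret, "Different memslots should be able to share a restrictedmem_fd");

  /* A second VM cannot reuse offset 0, which is already bound to vm ... */
  TEST_ASSERT(set_private_region_failed(vm2, mem + 2 * MEM_REGION_SIZE, fd, 0),
              "Pages (offsets) of a restrictedmem_fd should be exclusive to a VM");

  /* ... but it can use the same fd at a fresh offset. */
  ret = __vm_set_user_memory_region2(vm2, MEM_REGION_SLOT, KVM_MEM_PRIVATE,
                                     MEM_REGION_GPA + 2 * MEM_REGION_SIZE,
                                     MEM_REGION_SIZE, mem + 2 * MEM_REGION_SIZE,
                                     fd, 2 * MEM_REGION_SIZE);
  TEST_ASSERT(!ret, "Different VMs should be able to share a restrictedmem_fd");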

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../selftests/kvm/set_memory_region_test.c    | 29 +++++++++++++++++--
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index cc727d11569e..789c413e2a67 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -401,7 +401,7 @@ static bool set_private_region_failed(struct kvm_vm *vm, void *hva,
 static void test_private_regions(void)
 {
 	int ret;
-	struct kvm_vm *vm;
+	struct kvm_vm *vm, *vm2;
 	void *mem;
 	int fd;
 
@@ -416,7 +416,7 @@ static void test_private_regions(void)
 
 	vm = __vm_create(shape, 1, 0);
 
-	mem = mmap(NULL, MEM_REGION_SIZE * 2, PROT_READ | PROT_WRITE,
+	mem = mmap(NULL, MEM_REGION_SIZE * 3, PROT_READ | PROT_WRITE,
 		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
 	TEST_ASSERT(mem != MAP_FAILED, "Failed to mmap() host");
 
@@ -448,8 +448,31 @@ static void test_private_regions(void)
 	TEST_ASSERT(ret == -1 && errno == EINVAL,
 		    "Set overlapping restrictedmem_offset should fail");
 
-	munmap(mem, MEM_REGION_SIZE * 2);
+	ret = __vm_set_user_memory_region2(vm, MEM_REGION_SLOT + 1,
+					   KVM_MEM_PRIVATE,
+					   MEM_REGION_GPA + MEM_REGION_SIZE,
+					   MEM_REGION_SIZE,
+					   mem + MEM_REGION_SIZE,
+					   fd, MEM_REGION_SIZE);
+	TEST_ASSERT(!ret,
+		    "Different memslots should be able to share a restrictedmem_fd");
+
+	vm2 = __vm_create(shape, 1, 0);
+	TEST_ASSERT(set_private_region_failed(vm2, mem + 2 * MEM_REGION_SIZE, fd, 0),
+		    "Pages (offsets) of a restrictedmem_fd should be exclusive to a VM");
+
+	ret = __vm_set_user_memory_region2(vm2, MEM_REGION_SLOT,
+					   KVM_MEM_PRIVATE,
+					   MEM_REGION_GPA + 2 * MEM_REGION_SIZE,
+					   MEM_REGION_SIZE,
+					   mem + 2 * MEM_REGION_SIZE,
+					   fd, 2 * MEM_REGION_SIZE);
+	TEST_ASSERT(!ret,
+		    "Different VMs should be able to share a restrictedmem_fd");
+
+	munmap(mem, MEM_REGION_SIZE * 3);
 	kvm_vm_free(vm);
+	kvm_vm_free(vm2);
 }
 
 int main(int argc, char *argv[])
-- 
2.40.0.rc2.332.ga46443480c-goog



* [RFC PATCH 10/10] KVM: selftests: Test KVM exit behavior for private memory/access
From: Ackerley Tng @ 2023-03-16  0:31 UTC (permalink / raw)
  To: kvm, linux-api, linux-arch, linux-doc, linux-fsdevel,
	linux-kernel, linux-mm, qemu-devel
  Cc: aarcange, ak, akpm, arnd, bfields, bp, chao.p.peng, corbet,
	dave.hansen, david, ddutile, dhildenb, hpa, hughd, jlayton,
	jmattson, joro, jun.nakajima, kirill.shutemov, linmiaohe, luto,
	mail, mhocko, michael.roth, mingo, naoya.horiguchi, pbonzini,
	qperret, rppt, seanjc, shuah, steven.price, tabba, tglx,
	vannapurve, vbabka, vkuznets, wanpengli, wei.w.wang, x86,
	yu.c.zhang, Ackerley Tng

"Testing private access when memslot gets deleted" tests the behavior
of KVM when a private memslot gets deleted while the VM is using the
private memslot. When KVM looks up the deleted (slot = NULL) memslot,
KVM should exit to userspace with KVM_EXIT_MEMORY_FAULT.

In the second test, upon a private access to non-private memslot, KVM
should also exit to userspace with KVM_EXIT_MEMORY_FAULT.
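
In both tests, the host side verifies the exit as follows (taken from
the new test file; the EXITS_TEST_* constants are defined there):

  ASSERT_EQ(exit_reason, KVM_EXIT_MEMORY_FAULT);
  ASSERT_EQ(vcpu->run->memory.flags, KVM_MEMORY_EXIT_FLAG_PRIVATE);
  ASSERT_EQ(vcpu->run->memory.gpa, EXITS_TEST_GPA);
  ASSERT_EQ(vcpu->run->memory.size, EXITS_TEST_SIZE);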

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/x86_64/private_mem_kvm_exits_test.c   | 124 ++++++++++++++++++
 2 files changed, 125 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index bafee3c43b2e..0ad588852a1d 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -80,6 +80,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/nested_exceptions_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
 TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/private_mem_conversions_test
+TEST_GEN_PROGS_x86_64 += x86_64/private_mem_kvm_exits_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
 TEST_GEN_PROGS_x86_64 += x86_64/smaller_maxphyaddr_emulation_test
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c
new file mode 100644
index 000000000000..c8667dfbbf0a
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#include "kvm_util_base.h"
+#include <linux/kvm.h>
+#include <pthread.h>
+#include <stdint.h>
+#include "kvm_util.h"
+#include "processor.h"
+#include "test_util.h"
+
+/* Arbitrarily selected to avoid overlaps with anything else */
+#define EXITS_TEST_GVA 0xc0000000
+#define EXITS_TEST_GPA EXITS_TEST_GVA
+#define EXITS_TEST_NPAGES 1
+#define EXITS_TEST_SIZE (EXITS_TEST_NPAGES * PAGE_SIZE)
+#define EXITS_TEST_SLOT 10
+
+static uint64_t guest_repeatedly_read(void)
+{
+	volatile uint64_t value;
+
+	while (true)
+		value = *((uint64_t *) EXITS_TEST_GVA);
+
+	return value;
+}
+
+static uint32_t run_vcpu_get_exit_reason(struct kvm_vcpu *vcpu)
+{
+	vcpu_run(vcpu);
+
+	return vcpu->run->exit_reason;
+}
+
+const struct vm_shape protected_vm_shape = {
+	.mode = VM_MODE_DEFAULT,
+	.type = KVM_X86_PROTECTED_VM,
+};
+
+static void test_private_access_memslot_deleted(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	pthread_t vm_thread;
+	void *thread_return;
+	uint32_t exit_reason;
+
+	vm = vm_create_shape_with_one_vcpu(protected_vm_shape, &vcpu,
+					   guest_repeatedly_read);
+
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+				    EXITS_TEST_GPA, EXITS_TEST_SLOT,
+				    EXITS_TEST_NPAGES,
+				    KVM_MEM_PRIVATE);
+
+	virt_map(vm, EXITS_TEST_GVA, EXITS_TEST_GPA, EXITS_TEST_NPAGES);
+
+	/* Request to access page privately */
+	vm_mem_map_shared_or_private(vm, EXITS_TEST_GPA, EXITS_TEST_SIZE, false);
+
+	pr_info("Testing private access when memslot gets deleted\n");
+
+	pthread_create(&vm_thread, NULL,
+		       (void *(*)(void *))run_vcpu_get_exit_reason,
+		       (void *)vcpu);
+
+	vm_mem_region_delete(vm, EXITS_TEST_SLOT);
+
+	pthread_join(vm_thread, &thread_return);
+	exit_reason = (uint32_t)(uint64_t)thread_return;
+
+	ASSERT_EQ(exit_reason, KVM_EXIT_MEMORY_FAULT);
+	ASSERT_EQ(vcpu->run->memory.flags, KVM_MEMORY_EXIT_FLAG_PRIVATE);
+	ASSERT_EQ(vcpu->run->memory.gpa, EXITS_TEST_GPA);
+	ASSERT_EQ(vcpu->run->memory.size, EXITS_TEST_SIZE);
+
+	pr_info("\t ... PASSED\n");
+
+	kvm_vm_free(vm);
+}
+
+static void test_private_access_memslot_not_private(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	uint32_t exit_reason;
+
+	vm = vm_create_shape_with_one_vcpu(protected_vm_shape, &vcpu,
+					   guest_repeatedly_read);
+
+	/* Add a non-private memslot (flags = 0) */
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+				    EXITS_TEST_GPA, EXITS_TEST_SLOT,
+				    EXITS_TEST_NPAGES, 0);
+
+	virt_map(vm, EXITS_TEST_GVA, EXITS_TEST_GPA, EXITS_TEST_NPAGES);
+
+	/* Request to access page privately */
+	vm_set_memory_attributes(vm, EXITS_TEST_GPA, EXITS_TEST_SIZE,
+				 KVM_MEMORY_ATTRIBUTE_PRIVATE);
+
+	pr_info("Testing private access to non-private memslot\n");
+
+	exit_reason = run_vcpu_get_exit_reason(vcpu);
+
+	ASSERT_EQ(exit_reason, KVM_EXIT_MEMORY_FAULT);
+	ASSERT_EQ(vcpu->run->memory.flags, KVM_MEMORY_EXIT_FLAG_PRIVATE);
+	ASSERT_EQ(vcpu->run->memory.gpa, EXITS_TEST_GPA);
+	ASSERT_EQ(vcpu->run->memory.size, EXITS_TEST_SIZE);
+
+	pr_info("\t ... PASSED\n");
+
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_PROTECTED_VM));
+
+	test_private_access_memslot_deleted();
+	test_private_access_memslot_not_private();
+}
-- 
2.40.0.rc2.332.ga46443480c-goog


