linux-kernel.vger.kernel.org archive mirror
* [RFC V1 00/10] *** KVM: selftests: Add simple SEV test
@ 2022-07-15 19:29 Peter Gonda
  2022-07-15 19:29 ` [PATCH V2] KVM: selftests: Add simple sev vm testing Peter Gonda
                   ` (10 more replies)
  0 siblings, 11 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

This patch series combines Michael Roth's work on supporting SEV guests
in selftests, the approach Sean Christopherson suggested for allowing
ucalls from SEV guests, and Sean's series consolidating the ucall
boilerplate code, along with a very simple version of the SEV selftest
Michael originally proposed.

Michael Roth (7):
  KVM: selftests: move vm_phy_pages_alloc() earlier in file
  KVM: selftests: sparsebit: add const where appropriate
  KVM: selftests: add hooks for managing encrypted guest memory
  KVM: selftests: handle encryption bits in page tables
  KVM: selftests: add support for encrypted vm_vaddr_* allocations
  KVM: selftests: add library for creating/interacting with SEV guests
  KVM: selftests: Add simple sev vm testing

Sean Christopherson (1):
  KVM: selftests: Consolidate boilerplate code in get_ucall()

Peter Gonda (2):
  tools: Add atomic_test_and_set_bit()
  KVM: selftests: Make ucall work with encrypted guests

 tools/arch/x86/include/asm/atomic.h           |   7 +
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   4 +
 .../selftests/kvm/include/kvm_util_base.h     |  50 +++-
 .../testing/selftests/kvm/include/sparsebit.h |  36 +--
 .../selftests/kvm/include/ucall_common.h      |  15 +-
 .../selftests/kvm/include/x86_64/sev.h        |  44 +++
 .../testing/selftests/kvm/lib/aarch64/ucall.c |  36 +--
 tools/testing/selftests/kvm/lib/kvm_util.c    | 268 +++++++++++++-----
 .../selftests/kvm/lib/perf_test_util.c        |   2 +-
 tools/testing/selftests/kvm/lib/riscv/ucall.c |  40 +--
 tools/testing/selftests/kvm/lib/s390x/ucall.c |  37 +--
 tools/testing/selftests/kvm/lib/sparsebit.c   |  48 ++--
 .../testing/selftests/kvm/lib/ucall_common.c  | 140 +++++++++
 .../selftests/kvm/lib/x86_64/processor.c      |  15 +-
 tools/testing/selftests/kvm/lib/x86_64/sev.c  | 251 ++++++++++++++++
 .../testing/selftests/kvm/lib/x86_64/ucall.c  |  37 +--
 .../selftests/kvm/x86_64/sev_all_boot_test.c  | 134 +++++++++
 18 files changed, 903 insertions(+), 262 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
 create mode 100644 tools/testing/selftests/kvm/lib/ucall_common.c
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c

-- 
2.37.0.170.g444d1eabd0-goog



* [PATCH V2] KVM: selftests: Add simple sev vm testing
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-15 19:29 ` [RFC V1 01/10] KVM: selftests: move vm_phy_pages_alloc() earlier in file Peter Gonda
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini

From: Michael Roth <michael.roth@amd.com>

A very simple test of booting SEV guests that checks related CPUID bits.
This is a stripped-down version of "[PATCH v2 08/13] KVM: selftests: add
SEV boot tests" from Michael.

Signed-off-by: Michael Roth <michael.roth@amd.com>

---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/sev_all_boot_test.c  | 134 ++++++++++++++++++
 3 files changed, 136 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index f44ebf401310..ec084a61a819 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -34,6 +34,7 @@
 /x86_64/pmu_event_filter_test
 /x86_64/set_boot_cpu_id
 /x86_64/set_sregs_test
+/x86_64/sev_all_boot_test
 /x86_64/sev_migrate_tests
 /x86_64/smm_test
 /x86_64/state_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index f3f29a64e4b2..2b89e6bcb5b0 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -123,6 +123,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_caps_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_all_boot_test
 TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
 TEST_GEN_PROGS_x86_64 += x86_64/amx_test
 TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
new file mode 100644
index 000000000000..222ce41d6f68
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
@@ -0,0 +1,134 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Basic SEV boot tests.
+ *
+ * Copyright (C) 2021 Advanced Micro Devices
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "linux/psp-sev.h"
+#include "sev.h"
+
+#define VCPU_ID			2
+#define PAGE_STRIDE		32
+
+#define SHARED_PAGES		8192
+#define SHARED_VADDR_MIN	0x1000000
+
+#define PRIVATE_PAGES		2048
+#define PRIVATE_VADDR_MIN	(SHARED_VADDR_MIN + SHARED_PAGES * PAGE_SIZE)
+
+#define TOTAL_PAGES		(512 + SHARED_PAGES + PRIVATE_PAGES)
+
+#define NR_SYNCS 1
+
+static void guest_run_loop(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+	int i;
+
+	for (i = 0; i <= NR_SYNCS; ++i) {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_SYNC:
+			continue;
+		case UCALL_DONE:
+			return;
+		case UCALL_ABORT:
+			TEST_ASSERT(false, "%s at %s:%ld\n\tvalues: %#lx, %#lx",
+				    (const char *)uc.args[0], __FILE__,
+				    uc.args[1], uc.args[2], uc.args[3]);
+		default:
+			TEST_ASSERT(
+				false, "Unexpected exit: %s",
+				exit_reason_str(vcpu->run->exit_reason));
+		}
+	}
+}
+
+static void __attribute__((__flatten__))
+guest_sev_code(void)
+{
+	uint32_t eax, ebx, ecx, edx;
+	uint64_t sev_status;
+
+	GUEST_SYNC(1);
+
+	eax = 0x8000001f;
+	ecx = 0;
+	cpuid(&eax, &ebx, &ecx, &edx);
+	GUEST_ASSERT(eax & (1 << 1));
+
+	sev_status = rdmsr(MSR_AMD64_SEV);
+	GUEST_ASSERT((sev_status & 0x1) == 1);
+
+	GUEST_DONE();
+}
+
+static struct sev_vm *
+setup_test_common(void *guest_code, uint64_t policy, struct kvm_vcpu **vcpu)
+{
+	uint8_t measurement[512];
+	struct sev_vm *sev;
+	struct kvm_vm *vm;
+	int i;
+
+	sev = sev_vm_create(policy, TOTAL_PAGES);
+	if (!sev)
+		return NULL;
+	vm = sev_get_vm(sev);
+
+	/* Set up VCPU and initial guest kernel. */
+	*vcpu = vm_vcpu_add(vm, VCPU_ID, guest_code);
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	/* Allocations/setup done. Encrypt initial guest payload. */
+	sev_vm_launch(sev);
+
+	/* Dump the initial measurement. A test to actually verify it would be nice. */
+	sev_vm_launch_measure(sev, measurement);
+	pr_info("guest measurement: ");
+	for (i = 0; i < 32; ++i)
+		pr_info("%02x", measurement[i]);
+	pr_info("\n");
+
+	sev_vm_launch_finish(sev);
+
+	return sev;
+}
+
+static void test_sev(void *guest_code, uint64_t policy)
+{
+	struct sev_vm *sev;
+	struct kvm_vcpu *vcpu;
+
+	sev = setup_test_common(guest_code, policy, &vcpu);
+	if (!sev)
+		return;
+
+	/* Guest is ready to run. Do the tests. */
+	guest_run_loop(vcpu);
+
+	pr_info("guest ran successfully\n");
+
+	sev_vm_free(sev);
+}
+
+int main(int argc, char *argv[])
+{
+	/* SEV tests */
+	test_sev(guest_sev_code, SEV_POLICY_NO_DBG);
+	test_sev(guest_sev_code, 0);
+
+	return 0;
+}
-- 
2.37.0.170.g444d1eabd0-goog



* [RFC V1 01/10] KVM: selftests: move vm_phy_pages_alloc() earlier in file
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
  2022-07-15 19:29 ` [PATCH V2] KVM: selftests: Add simple sev vm testing Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-15 19:29 ` [RFC V1 02/10] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

Subsequent patches will break some of this code out into file-local
helper functions, which will be used by functions such as
vm_vaddr_alloc() that are currently defined earlier in the file, so a
forward declaration would be needed.

Instead, move vm_phy_pages_alloc() earlier in the file, just above
vm_vaddr_alloc() and friends, which are the main users.

Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>

---
 tools/testing/selftests/kvm/lib/kvm_util.c | 146 ++++++++++-----------
 1 file changed, 73 insertions(+), 73 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 768f3bce0161..b07c372b9b37 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1076,6 +1076,79 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
 	return vcpu;
 }
 
+/*
+ * Physical Contiguous Page Allocator
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   num - number of pages
+ *   paddr_min - Physical address minimum
+ *   memslot - Memory region to allocate page from
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Starting physical address
+ *
+ * Within the VM specified by vm, locates a range of available physical
+ * pages at or above paddr_min. If found, the pages are marked as in use
+ * and their base address is returned. A TEST_ASSERT failure occurs if
+ * not enough pages are available at or above paddr_min.
+ */
+vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			      vm_paddr_t paddr_min, uint32_t memslot)
+{
+	struct userspace_mem_region *region;
+	sparsebit_idx_t pg, base;
+
+	TEST_ASSERT(num > 0, "Must allocate at least one page");
+
+	TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
+		"not divisible by page size.\n"
+		"  paddr_min: 0x%lx page_size: 0x%x",
+		paddr_min, vm->page_size);
+
+	region = memslot2region(vm, memslot);
+	base = pg = paddr_min >> vm->page_shift;
+
+	do {
+		for (; pg < base + num; ++pg) {
+			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
+				base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
+				break;
+			}
+		}
+	} while (pg && pg != base + num);
+
+	if (pg == 0) {
+		fprintf(stderr, "No guest physical page available, "
+			"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
+			paddr_min, vm->page_size, memslot);
+		fputs("---- vm dump ----\n", stderr);
+		vm_dump(stderr, vm, 2);
+		abort();
+	}
+
+	for (pg = base; pg < base + num; ++pg)
+		sparsebit_clear(region->unused_phy_pages, pg);
+
+	return base * vm->page_size;
+}
+
+vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
+			     uint32_t memslot)
+{
+	return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
+}
+
+/* Arbitrary minimum physical address used for virtual translation tables. */
+#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
+
+vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
+{
+	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+}
+
 /*
  * VM Virtual Address Unused Gap
  *
@@ -1722,79 +1795,6 @@ const char *exit_reason_str(unsigned int exit_reason)
 	return "Unknown";
 }
 
-/*
- * Physical Contiguous Page Allocator
- *
- * Input Args:
- *   vm - Virtual Machine
- *   num - number of pages
- *   paddr_min - Physical address minimum
- *   memslot - Memory region to allocate page from
- *
- * Output Args: None
- *
- * Return:
- *   Starting physical address
- *
- * Within the VM specified by vm, locates a range of available physical
- * pages at or above paddr_min. If found, the pages are marked as in use
- * and their base address is returned. A TEST_ASSERT failure occurs if
- * not enough pages are available at or above paddr_min.
- */
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot)
-{
-	struct userspace_mem_region *region;
-	sparsebit_idx_t pg, base;
-
-	TEST_ASSERT(num > 0, "Must allocate at least one page");
-
-	TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
-		"not divisible by page size.\n"
-		"  paddr_min: 0x%lx page_size: 0x%x",
-		paddr_min, vm->page_size);
-
-	region = memslot2region(vm, memslot);
-	base = pg = paddr_min >> vm->page_shift;
-
-	do {
-		for (; pg < base + num; ++pg) {
-			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
-				base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
-				break;
-			}
-		}
-	} while (pg && pg != base + num);
-
-	if (pg == 0) {
-		fprintf(stderr, "No guest physical page available, "
-			"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
-			paddr_min, vm->page_size, memslot);
-		fputs("---- vm dump ----\n", stderr);
-		vm_dump(stderr, vm, 2);
-		abort();
-	}
-
-	for (pg = base; pg < base + num; ++pg)
-		sparsebit_clear(region->unused_phy_pages, pg);
-
-	return base * vm->page_size;
-}
-
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
-			     uint32_t memslot)
-{
-	return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
-}
-
-/* Arbitrary minimum physical address used for virtual translation tables. */
-#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
-
-vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
-{
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
-}
-
 /*
  * Address Guest Virtual to Host Virtual
  *
-- 
2.37.0.170.g444d1eabd0-goog



* [RFC V1 02/10] KVM: selftests: sparsebit: add const where appropriate
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
  2022-07-15 19:29 ` [PATCH V2] KVM: selftests: Add simple sev vm testing Peter Gonda
  2022-07-15 19:29 ` [RFC V1 01/10] KVM: selftests: move vm_phy_pages_alloc() earlier in file Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-15 19:29 ` [RFC V1 03/10] KVM: selftests: add hooks for managing encrypted guest memory Peter Gonda
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

Subsequent patches will introduce an encryption bitmap in kvm_util that
tests will want to access in a read-only fashion. This will be done via
a const sparsebit*. To avoid warnings or the need to add casts
everywhere, add const to the sparsebit functions that only read the
sparsebit.
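
For illustration only (not part of this patch): with const in place, a
test can query a read-only bitmap without casts, e.g. a helper along
these lines, where enc_pages is the const sparsebit * a later patch
hands out:

  static bool page_is_encrypted(const struct sparsebit *enc_pages,
                                uint64_t gpa, unsigned int page_shift)
  {
          return sparsebit_is_set(enc_pages, gpa >> page_shift);
  }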

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>

---
 .../testing/selftests/kvm/include/sparsebit.h | 36 +++++++-------
 tools/testing/selftests/kvm/lib/sparsebit.c   | 48 +++++++++----------
 2 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/sparsebit.h b/tools/testing/selftests/kvm/include/sparsebit.h
index 12a9a4b9cead..fb5170d57fcb 100644
--- a/tools/testing/selftests/kvm/include/sparsebit.h
+++ b/tools/testing/selftests/kvm/include/sparsebit.h
@@ -30,26 +30,26 @@ typedef uint64_t sparsebit_num_t;
 
 struct sparsebit *sparsebit_alloc(void);
 void sparsebit_free(struct sparsebit **sbitp);
-void sparsebit_copy(struct sparsebit *dstp, struct sparsebit *src);
+void sparsebit_copy(struct sparsebit *dstp, const struct sparsebit *src);
 
-bool sparsebit_is_set(struct sparsebit *sbit, sparsebit_idx_t idx);
-bool sparsebit_is_set_num(struct sparsebit *sbit,
+bool sparsebit_is_set(const struct sparsebit *sbit, sparsebit_idx_t idx);
+bool sparsebit_is_set_num(const struct sparsebit *sbit,
 			  sparsebit_idx_t idx, sparsebit_num_t num);
-bool sparsebit_is_clear(struct sparsebit *sbit, sparsebit_idx_t idx);
-bool sparsebit_is_clear_num(struct sparsebit *sbit,
+bool sparsebit_is_clear(const struct sparsebit *sbit, sparsebit_idx_t idx);
+bool sparsebit_is_clear_num(const struct sparsebit *sbit,
 			    sparsebit_idx_t idx, sparsebit_num_t num);
-sparsebit_num_t sparsebit_num_set(struct sparsebit *sbit);
-bool sparsebit_any_set(struct sparsebit *sbit);
-bool sparsebit_any_clear(struct sparsebit *sbit);
-bool sparsebit_all_set(struct sparsebit *sbit);
-bool sparsebit_all_clear(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_first_set(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_first_clear(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_next_set(struct sparsebit *sbit, sparsebit_idx_t prev);
-sparsebit_idx_t sparsebit_next_clear(struct sparsebit *sbit, sparsebit_idx_t prev);
-sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *sbit,
+sparsebit_num_t sparsebit_num_set(const struct sparsebit *sbit);
+bool sparsebit_any_set(const struct sparsebit *sbit);
+bool sparsebit_any_clear(const struct sparsebit *sbit);
+bool sparsebit_all_set(const struct sparsebit *sbit);
+bool sparsebit_all_clear(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_first_set(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_next_set(const struct sparsebit *sbit, sparsebit_idx_t prev);
+sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *sbit, sparsebit_idx_t prev);
+sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *sbit,
 				       sparsebit_idx_t start, sparsebit_num_t num);
-sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *sbit,
+sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *sbit,
 					 sparsebit_idx_t start, sparsebit_num_t num);
 
 void sparsebit_set(struct sparsebit *sbitp, sparsebit_idx_t idx);
@@ -62,9 +62,9 @@ void sparsebit_clear_num(struct sparsebit *sbitp,
 			 sparsebit_idx_t start, sparsebit_num_t num);
 void sparsebit_clear_all(struct sparsebit *sbitp);
 
-void sparsebit_dump(FILE *stream, struct sparsebit *sbit,
+void sparsebit_dump(FILE *stream, const struct sparsebit *sbit,
 		    unsigned int indent);
-void sparsebit_validate_internal(struct sparsebit *sbit);
+void sparsebit_validate_internal(const struct sparsebit *sbit);
 
 #ifdef __cplusplus
 }
diff --git a/tools/testing/selftests/kvm/lib/sparsebit.c b/tools/testing/selftests/kvm/lib/sparsebit.c
index 50e0cf41a7dd..6777a5b1fbd2 100644
--- a/tools/testing/selftests/kvm/lib/sparsebit.c
+++ b/tools/testing/selftests/kvm/lib/sparsebit.c
@@ -202,7 +202,7 @@ static sparsebit_num_t node_num_set(struct node *nodep)
 /* Returns a pointer to the node that describes the
  * lowest bit index.
  */
-static struct node *node_first(struct sparsebit *s)
+static struct node *node_first(const struct sparsebit *s)
 {
 	struct node *nodep;
 
@@ -216,7 +216,7 @@ static struct node *node_first(struct sparsebit *s)
  * lowest bit index > the index of the node pointed to by np.
  * Returns NULL if no node with a higher index exists.
  */
-static struct node *node_next(struct sparsebit *s, struct node *np)
+static struct node *node_next(const struct sparsebit *s, struct node *np)
 {
 	struct node *nodep = np;
 
@@ -244,7 +244,7 @@ static struct node *node_next(struct sparsebit *s, struct node *np)
  * highest index < the index of the node pointed to by np.
  * Returns NULL if no node with a lower index exists.
  */
-static struct node *node_prev(struct sparsebit *s, struct node *np)
+static struct node *node_prev(const struct sparsebit *s, struct node *np)
 {
 	struct node *nodep = np;
 
@@ -273,7 +273,7 @@ static struct node *node_prev(struct sparsebit *s, struct node *np)
  * subtree and duplicates the bit settings to the newly allocated nodes.
  * Returns the newly allocated copy of subtree.
  */
-static struct node *node_copy_subtree(struct node *subtree)
+static struct node *node_copy_subtree(const struct node *subtree)
 {
 	struct node *root;
 
@@ -307,7 +307,7 @@ static struct node *node_copy_subtree(struct node *subtree)
  * index is within the bits described by the mask bits or the number of
  * contiguous bits set after the mask.  Returns NULL if there is no such node.
  */
-static struct node *node_find(struct sparsebit *s, sparsebit_idx_t idx)
+static struct node *node_find(const struct sparsebit *s, sparsebit_idx_t idx)
 {
 	struct node *nodep;
 
@@ -393,7 +393,7 @@ static struct node *node_add(struct sparsebit *s, sparsebit_idx_t idx)
 }
 
 /* Returns whether all the bits in the sparsebit array are set.  */
-bool sparsebit_all_set(struct sparsebit *s)
+bool sparsebit_all_set(const struct sparsebit *s)
 {
 	/*
 	 * If any nodes there must be at least one bit set.  Only case
@@ -776,7 +776,7 @@ static void node_reduce(struct sparsebit *s, struct node *nodep)
 /* Returns whether the bit at the index given by idx, within the
  * sparsebit array is set or not.
  */
-bool sparsebit_is_set(struct sparsebit *s, sparsebit_idx_t idx)
+bool sparsebit_is_set(const struct sparsebit *s, sparsebit_idx_t idx)
 {
 	struct node *nodep;
 
@@ -922,7 +922,7 @@ static inline sparsebit_idx_t node_first_clear(struct node *nodep, int start)
  * used by test cases after they detect an unexpected condition, as a means
  * to capture diagnostic information.
  */
-static void sparsebit_dump_internal(FILE *stream, struct sparsebit *s,
+static void sparsebit_dump_internal(FILE *stream, const struct sparsebit *s,
 	unsigned int indent)
 {
 	/* Dump the contents of s */
@@ -970,7 +970,7 @@ void sparsebit_free(struct sparsebit **sbitp)
  * sparsebit_alloc().  It can though already have bits set, which
  * if different from src will be cleared.
  */
-void sparsebit_copy(struct sparsebit *d, struct sparsebit *s)
+void sparsebit_copy(struct sparsebit *d, const struct sparsebit *s)
 {
 	/* First clear any bits already set in the destination */
 	sparsebit_clear_all(d);
@@ -982,7 +982,7 @@ void sparsebit_copy(struct sparsebit *d, struct sparsebit *s)
 }
 
 /* Returns whether num consecutive bits starting at idx are all set.  */
-bool sparsebit_is_set_num(struct sparsebit *s,
+bool sparsebit_is_set_num(const struct sparsebit *s,
 	sparsebit_idx_t idx, sparsebit_num_t num)
 {
 	sparsebit_idx_t next_cleared;
@@ -1006,14 +1006,14 @@ bool sparsebit_is_set_num(struct sparsebit *s,
 }
 
 /* Returns whether the bit at the index given by idx.  */
-bool sparsebit_is_clear(struct sparsebit *s,
+bool sparsebit_is_clear(const struct sparsebit *s,
 	sparsebit_idx_t idx)
 {
 	return !sparsebit_is_set(s, idx);
 }
 
 /* Returns whether num consecutive bits starting at idx are all cleared.  */
-bool sparsebit_is_clear_num(struct sparsebit *s,
+bool sparsebit_is_clear_num(const struct sparsebit *s,
 	sparsebit_idx_t idx, sparsebit_num_t num)
 {
 	sparsebit_idx_t next_set;
@@ -1042,13 +1042,13 @@ bool sparsebit_is_clear_num(struct sparsebit *s,
  * value.  Use sparsebit_any_set(), instead of sparsebit_num_set() > 0,
  * to determine if the sparsebit array has any bits set.
  */
-sparsebit_num_t sparsebit_num_set(struct sparsebit *s)
+sparsebit_num_t sparsebit_num_set(const struct sparsebit *s)
 {
 	return s->num_set;
 }
 
 /* Returns whether any bit is set in the sparsebit array.  */
-bool sparsebit_any_set(struct sparsebit *s)
+bool sparsebit_any_set(const struct sparsebit *s)
 {
 	/*
 	 * Nodes only describe set bits.  If any nodes then there
@@ -1071,20 +1071,20 @@ bool sparsebit_any_set(struct sparsebit *s)
 }
 
 /* Returns whether all the bits in the sparsebit array are cleared.  */
-bool sparsebit_all_clear(struct sparsebit *s)
+bool sparsebit_all_clear(const struct sparsebit *s)
 {
 	return !sparsebit_any_set(s);
 }
 
 /* Returns whether all the bits in the sparsebit array are set.  */
-bool sparsebit_any_clear(struct sparsebit *s)
+bool sparsebit_any_clear(const struct sparsebit *s)
 {
 	return !sparsebit_all_set(s);
 }
 
 /* Returns the index of the first set bit.  Abort if no bits are set.
  */
-sparsebit_idx_t sparsebit_first_set(struct sparsebit *s)
+sparsebit_idx_t sparsebit_first_set(const struct sparsebit *s)
 {
 	struct node *nodep;
 
@@ -1098,7 +1098,7 @@ sparsebit_idx_t sparsebit_first_set(struct sparsebit *s)
 /* Returns the index of the first cleared bit.  Abort if
  * no bits are cleared.
  */
-sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s)
+sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *s)
 {
 	struct node *nodep1, *nodep2;
 
@@ -1152,7 +1152,7 @@ sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s)
 /* Returns index of next bit set within s after the index given by prev.
  * Returns 0 if there are no bits after prev that are set.
  */
-sparsebit_idx_t sparsebit_next_set(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_set(const struct sparsebit *s,
 	sparsebit_idx_t prev)
 {
 	sparsebit_idx_t lowest_possible = prev + 1;
@@ -1245,7 +1245,7 @@ sparsebit_idx_t sparsebit_next_set(struct sparsebit *s,
 /* Returns index of next bit cleared within s after the index given by prev.
  * Returns 0 if there are no bits after prev that are cleared.
  */
-sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *s,
 	sparsebit_idx_t prev)
 {
 	sparsebit_idx_t lowest_possible = prev + 1;
@@ -1301,7 +1301,7 @@ sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
  * and returns the index of the first sequence of num consecutively set
  * bits.  Returns a value of 0 of no such sequence exists.
  */
-sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *s,
 	sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1336,7 +1336,7 @@ sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
  * and returns the index of the first sequence of num consecutively cleared
  * bits.  Returns a value of 0 of no such sequence exists.
  */
-sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *s,
 	sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1584,7 +1584,7 @@ static size_t display_range(FILE *stream, sparsebit_idx_t low,
  * contiguous bits.  This is done because '-' is used to specify command-line
  * options, and sometimes ranges are specified as command-line arguments.
  */
-void sparsebit_dump(FILE *stream, struct sparsebit *s,
+void sparsebit_dump(FILE *stream, const struct sparsebit *s,
 	unsigned int indent)
 {
 	size_t current_line_len = 0;
@@ -1682,7 +1682,7 @@ void sparsebit_dump(FILE *stream, struct sparsebit *s,
  * s.  On error, diagnostic information is printed to stderr and
  * abort is called.
  */
-void sparsebit_validate_internal(struct sparsebit *s)
+void sparsebit_validate_internal(const struct sparsebit *s)
 {
 	bool error_detected = false;
 	struct node *nodep, *prev = NULL;
-- 
2.37.0.170.g444d1eabd0-goog



* [RFC V1 03/10] KVM: selftests: add hooks for managing encrypted guest memory
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
                   ` (2 preceding siblings ...)
  2022-07-15 19:29 ` [RFC V1 02/10] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-15 19:29 ` [RFC V1 04/10] KVM: selftests: handle encryption bits in page tables Peter Gonda
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

VM implementations that make use of encrypted memory need a way to
configure things like the encryption/shared bit position for page
table handling, the default encryption policy for internal allocations
made by the core library, and a way to fetch the list/bitmap of
encrypted pages to do the actual memory encryption. Add an interface to
configure these parameters. Also introduce a sparsebit map to track
allocations/mappings that should be treated as encrypted, and provide
a way for VM implementations to retrieve it when handling operations
related to memory encryption.
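
As a rough sketch (not part of this patch; cbit_position() is a
hypothetical helper that would read the platform's encryption-bit
position), a VM implementation such as an SEV library could consume the
new interface like this:

  static void example_enable_encryption(struct kvm_vm *vm)
  {
          const struct sparsebit *enc_pages;
          vm_paddr_t gpa_start;
          uint64_t size;

          /* Treat allocations as encrypted by default, with the C-bit set. */
          vm_set_memory_encryption(vm, true, true, cbit_position());

          /* Later, fetch slot 0's bitmap to encrypt the initial payload. */
          enc_pages = vm_get_encrypted_phy_pages(vm, 0, &gpa_start, &size);
          if (!enc_pages)
                  return;
          /* ... walk enc_pages and encrypt the corresponding host memory ... */
  }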

Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>

---
 .../selftests/kvm/include/kvm_util_base.h     | 17 ++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 52 +++++++++++++++++--
 2 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index b78e3c7a2566..3acb1552942b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -32,6 +32,7 @@ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
 struct userspace_mem_region {
 	struct kvm_userspace_memory_region region;
 	struct sparsebit *unused_phy_pages;
+	struct sparsebit *encrypted_phy_pages;
 	int fd;
 	off_t offset;
 	void *host_mem;
@@ -61,6 +62,14 @@ struct userspace_mem_regions {
 	DECLARE_HASHTABLE(slot_hash, 9);
 };
 
+/* Memory encryption policy/configuration. */
+struct vm_memcrypt {
+	bool enabled;
+	int8_t enc_by_default;
+	bool has_enc_bit;
+	int8_t enc_bit;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -84,6 +93,7 @@ struct kvm_vm {
 	vm_vaddr_t idt;
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
+	struct vm_memcrypt memcrypt;
 
 	/* Cache of information for binary stats interface */
 	int stats_fd;
@@ -820,4 +830,11 @@ static inline int __vm_disable_nx_huge_pages(struct kvm_vm *vm)
 	return __vm_enable_cap(vm, KVM_CAP_VM_DISABLE_NX_HUGE_PAGES, 0);
 }
 
+void vm_set_memory_encryption(struct kvm_vm *vm, bool enc_by_default, bool has_enc_bit,
+			      uint8_t enc_bit);
+
+const struct sparsebit *vm_get_encrypted_phy_pages(struct kvm_vm *vm, int slot,
+						   vm_paddr_t *gpa_start,
+						   uint64_t *size);
+
 #endif /* SELFTEST_KVM_UTIL_BASE_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index b07c372b9b37..6f96d1c51f75 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -529,6 +529,7 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
 	vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region->region);
 
 	sparsebit_free(&region->unused_phy_pages);
+	sparsebit_free(&region->encrypted_phy_pages);
 	ret = munmap(region->mmap_start, region->mmap_size);
 	TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
 
@@ -869,6 +870,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	}
 
 	region->unused_phy_pages = sparsebit_alloc();
+	region->encrypted_phy_pages = sparsebit_alloc();
 	sparsebit_set_num(region->unused_phy_pages,
 		guest_paddr >> vm->page_shift, npages);
 	region->region.slot = slot;
@@ -1084,6 +1086,7 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
  *   num - number of pages
  *   paddr_min - Physical address minimum
  *   memslot - Memory region to allocate page from
+ *   encrypt - Whether to treat the pages as encrypted
  *
  * Output Args: None
  *
@@ -1095,8 +1098,9 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
  * and their base address is returned. A TEST_ASSERT failure occurs if
  * not enough pages are available at or above paddr_min.
  */
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot)
+static vm_paddr_t
+_vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, vm_paddr_t paddr_min,
+		    uint32_t memslot, bool encrypt)
 {
 	struct userspace_mem_region *region;
 	sparsebit_idx_t pg, base;
@@ -1129,12 +1133,22 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 		abort();
 	}
 
-	for (pg = base; pg < base + num; ++pg)
+	for (pg = base; pg < base + num; ++pg) {
 		sparsebit_clear(region->unused_phy_pages, pg);
+		if (encrypt)
+			sparsebit_set(region->encrypted_phy_pages, pg);
+	}
 
 	return base * vm->page_size;
 }
 
+vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			      vm_paddr_t paddr_min, uint32_t memslot)
+{
+	return _vm_phy_pages_alloc(vm, num, paddr_min, memslot,
+				   vm->memcrypt.enc_by_default);
+}
+
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot)
 {
@@ -1718,6 +1732,10 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 			region->host_mem);
 		fprintf(stream, "%*sunused_phy_pages: ", indent + 2, "");
 		sparsebit_dump(stream, region->unused_phy_pages, 0);
+		if (vm->memcrypt.enabled) {
+			fprintf(stream, "%*sencrypted_phy_pages: ", indent + 2, "");
+			sparsebit_dump(stream, region->encrypted_phy_pages, 0);
+		}
 	}
 	fprintf(stream, "%*sMapped Virtual Pages:\n", indent, "");
 	sparsebit_dump(stream, vm->vpages_mapped, indent + 2);
@@ -1966,3 +1984,31 @@ void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
 		break;
 	}
 }
+
+void vm_set_memory_encryption(struct kvm_vm *vm, bool enc_by_default, bool has_enc_bit,
+			      uint8_t enc_bit)
+{
+	vm->memcrypt.enabled = true;
+	vm->memcrypt.enc_by_default = enc_by_default;
+	vm->memcrypt.has_enc_bit = has_enc_bit;
+	vm->memcrypt.enc_bit = enc_bit;
+}
+
+const struct sparsebit *
+vm_get_encrypted_phy_pages(struct kvm_vm *vm, int slot, vm_paddr_t *gpa_start,
+			   uint64_t *size)
+{
+	struct userspace_mem_region *region;
+
+	if (!vm->memcrypt.enabled)
+		return NULL;
+
+	region = memslot2region(vm, slot);
+	if (!region)
+		return NULL;
+
+	*size = region->region.memory_size;
+	*gpa_start = region->region.guest_phys_addr;
+
+	return region->encrypted_phy_pages;
+}
-- 
2.37.0.170.g444d1eabd0-goog



* [RFC V1 04/10] KVM: selftests: handle encryption bits in page tables
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
                   ` (3 preceding siblings ...)
  2022-07-15 19:29 ` [RFC V1 03/10] KVM: selftests: add hooks for managing encrypted guest memory Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-15 19:29 ` [RFC V1 05/10] KVM: selftests: add support for encrypted vm_vaddr_* allocations Peter Gonda
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

SEV guests rely on an encryption bit which resides within the range that
current code treats as address bits. Guest code will expect these bits
to be set appropriately in their page tables, whereas the rest of the
kvm_util functions will generally expect these bits to not be present.
Introduce addr_gpa2raw()/addr_raw2gpa() to add/remove these bits, then
use them where appropriate.
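
For example (sketch only, mirroring the processor.c changes below), a
page table entry for an encrypted page is built from the "raw" address,
and the bit is masked back off before the address is used as a normal
GPA:

  /* Writing a 4K PTE: tag the GPA with the encryption bit if needed. */
  *pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK |
         (addr_gpa2raw(vm, paddr) & PHYSICAL_PAGE_MASK);

  /* Translating back: strip the bit before handing out a plain GPA. */
  gpa = addr_raw2gpa(vm, PTE_GET_PFN(*pte) * vm->page_size);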

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>

---
 .../selftests/kvm/include/kvm_util_base.h     |  2 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 55 ++++++++++++++++++-
 .../selftests/kvm/lib/x86_64/processor.c      | 15 +++--
 3 files changed, 66 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 3acb1552942b..17eb5fb70867 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -396,6 +396,8 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
 void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
 vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
 void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
+vm_paddr_t addr_raw2gpa(struct kvm_vm *vm, vm_vaddr_t gpa_raw);
+vm_paddr_t addr_gpa2raw(struct kvm_vm *vm, vm_vaddr_t gpa);
 
 void vcpu_run(struct kvm_vcpu *vcpu);
 int _vcpu_run(struct kvm_vcpu *vcpu);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 6f96d1c51f75..5b473a8c90ae 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1365,6 +1365,58 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 	}
 }
 
+/*
+ * Mask off any special bits from raw GPA
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   gpa_raw - Raw VM physical address
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   GPA with special bits (e.g. shared/encrypted) masked off.
+ */
+vm_paddr_t addr_raw2gpa(struct kvm_vm *vm, vm_paddr_t gpa_raw)
+{
+	if (!vm->memcrypt.has_enc_bit)
+		return gpa_raw;
+
+	return gpa_raw & ~(1ULL << vm->memcrypt.enc_bit);
+}
+
+/*
+ * Add special/encryption bits to a GPA based on encryption bitmap.
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   gpa - VM physical address
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   GPA with special bits (e.g. shared/encrypted) added in.
+ */
+vm_paddr_t addr_gpa2raw(struct kvm_vm *vm, vm_paddr_t gpa)
+{
+	struct userspace_mem_region *region;
+	sparsebit_idx_t pg;
+	vm_paddr_t gpa_raw = gpa;
+
+	TEST_ASSERT(addr_raw2gpa(vm, gpa) == gpa, "Unexpected bits in GPA: %lx",
+		    gpa);
+
+	if (!vm->memcrypt.has_enc_bit)
+		return gpa;
+
+	region = userspace_mem_region_find(vm, gpa, gpa);
+	pg = gpa >> vm->page_shift;
+	if (sparsebit_is_set(region->encrypted_phy_pages, pg))
+		gpa_raw |= (1ULL << vm->memcrypt.enc_bit);
+
+	return gpa_raw;
+}
+
 /*
  * Address VM Physical to Host Virtual
  *
@@ -1382,9 +1434,10 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
  * address providing the memory to the vm physical address is returned.
  * A TEST_ASSERT failure occurs if no region containing gpa exists.
  */
-void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
+void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa_raw)
 {
 	struct userspace_mem_region *region;
+	vm_paddr_t gpa = addr_raw2gpa(vm, gpa_raw);
 
 	region = userspace_mem_region_find(vm, gpa, gpa);
 	if (!region) {
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 1a32b1c75e9a..53b115876417 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -116,7 +116,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 
 	/* If needed, create page map l4 table. */
 	if (!vm->pgd_created) {
-		vm->pgd = vm_alloc_page_table(vm);
+		vm->pgd = addr_gpa2raw(vm, vm_alloc_page_table(vm));
 		vm->pgd_created = true;
 	}
 }
@@ -138,13 +138,15 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
 				       int target_level)
 {
 	uint64_t *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level);
+	uint64_t paddr_raw = addr_gpa2raw(vm, paddr);
 
 	if (!(*pte & PTE_PRESENT_MASK)) {
 		*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK;
 		if (current_level == target_level)
-			*pte |= PTE_LARGE_MASK | (paddr & PHYSICAL_PAGE_MASK);
+			*pte |= PTE_LARGE_MASK | (paddr_raw & PHYSICAL_PAGE_MASK);
 		else
-			*pte |= vm_alloc_page_table(vm) & PHYSICAL_PAGE_MASK;
+			*pte |= addr_gpa2raw(vm, vm_alloc_page_table(vm)) & PHYSICAL_PAGE_MASK;
+
 	} else {
 		/*
 		 * Entry already present.  Assert that the caller doesn't want
@@ -182,6 +184,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 		    "Physical address beyond maximum supported,\n"
 		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		    paddr, vm->max_gfn, vm->page_size);
+	TEST_ASSERT(addr_raw2gpa(vm, paddr) == paddr,
+		    "Unexpected bits in paddr: %lx", paddr);
 
 	/*
 	 * Allocate upper level page tables, if not already present.  Return
@@ -204,7 +208,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 	pte = virt_get_pte(vm, PTE_GET_PFN(*pde), vaddr, PG_LEVEL_4K);
 	TEST_ASSERT(!(*pte & PTE_PRESENT_MASK),
 		    "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
-	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
+	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK |
+	       (addr_gpa2raw(vm, paddr) & PHYSICAL_PAGE_MASK);
 }
 
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
@@ -517,7 +522,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	if (!(pte[index[0]] & PTE_PRESENT_MASK))
 		goto unmapped_gva;
 
-	return (PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & ~PAGE_MASK);
+	return addr_raw2gpa(vm, PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & ~PAGE_MASK);
 
 unmapped_gva:
 	TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
-- 
2.37.0.170.g444d1eabd0-goog



* [RFC V1 05/10] KVM: selftests: add support for encrypted vm_vaddr_* allocations
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
                   ` (4 preceding siblings ...)
  2022-07-15 19:29 ` [RFC V1 04/10] KVM: selftests: handle encryption bits in page tables Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-15 19:29 ` [RFC V1 06/10] KVM: selftests: Consolidate boilerplate code in get_ucall() Peter Gonda
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

The default policy for whether to handle allocations as encrypted or
shared pages is currently determined by vm_phy_pages_alloc(), which in
turn uses the policy defined by vm->memcrypt.enc_by_default.

Test programs may wish to allocate shared vaddrs for things like
sharing memory with the guest. Since enc_by_default will be true in the
case of SEV guests (since it's required in order to have the initial
ELF binary and page table become part of the initial guest payload), an
interface is needed to explicitly request shared pages.

Implement this by splitting the common code out from vm_vaddr_alloc()
and introducing a new vm_vaddr_alloc_shared().
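
A minimal usage sketch (not part of this patch; the vaddr_min value is
arbitrary): a test that wants a plaintext buffer visible to both host
and guest can do

  vm_vaddr_t gva = vm_vaddr_alloc_shared(vm, vm->page_size, 0x10000);

  /* Host can touch it directly since the backing pages stay shared. */
  memset(addr_gva2hva(vm, gva), 0, vm->page_size);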

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>

---
 .../selftests/kvm/include/kvm_util_base.h     |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 21 +++++++++++++++----
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 17eb5fb70867..60b604ac9fa9 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -387,6 +387,7 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 5b473a8c90ae..17d9a2606cc8 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1250,12 +1250,13 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 }
 
 /*
- * VM Virtual Address Allocate
+ * VM Virtual Address Allocate Shared/Encrypted
  *
  * Input Args:
  *   vm - Virtual Machine
  *   sz - Size in bytes
  *   vaddr_min - Minimum starting virtual address
+ *   encrypt - Whether the region should be handled as encrypted
  *
  * Output Args: None
  *
@@ -1268,13 +1269,15 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
  * a unique set of pages, with the minimum real allocation being at least
  * a page.
  */
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+static vm_vaddr_t
+_vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, bool encrypt)
 {
 	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
 
 	virt_pgd_alloc(vm);
-	vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages,
-					      KVM_UTIL_MIN_PFN * vm->page_size, 0);
+	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages,
+					       KVM_UTIL_MIN_PFN * vm->page_size,
+					       0, encrypt);
 
 	/*
 	 * Find an unused range of virtual page addresses of at least
@@ -1295,6 +1298,16 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
 	return vaddr_start;
 }
 
+vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+{
+	return _vm_vaddr_alloc(vm, sz, vaddr_min, vm->memcrypt.enc_by_default);
+}
+
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+{
+	return _vm_vaddr_alloc(vm, sz, vaddr_min, false);
+}
+
 /*
  * VM Virtual Address Allocate Pages
  *
-- 
2.37.0.170.g444d1eabd0-goog



* [RFC V1 06/10] KVM: selftests: Consolidate boilerplate code in get_ucall()
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
                   ` (5 preceding siblings ...)
  2022-07-15 19:29 ` [RFC V1 05/10] KVM: selftests: add support for encrypted vm_vaddr_* allocations Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-19 14:42   ` Andrew Jones
  2022-07-15 19:29 ` [RFC V1 07/10] tools: Add atomic_test_and_set_bit() Peter Gonda
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

From: Sean Christopherson <seanjc@google.com>

Consolidate the actual copying of a ucall struct from guest=>host into
the common get_ucall().  Return a host virtual address instead of a guest
virtual address even though the addr_gva2hva() part could be moved to
get_ucall() too.  Conceptually, get_ucall() is invoked from the host and
should return a host virtual address (and returning NULL for "nothing to
see here" is far superior to returning 0).

Use pointer shenanigans instead of an unnecessary bounce buffer when the
caller of get_ucall() provides a valid pointer.
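
The external interface is unchanged, so callers keep doing something
like the following (sketch only); passing NULL simply makes get_ucall()
fall back to an internal struct:

  struct ucall uc;

  vcpu_run(vcpu);
  switch (get_ucall(vcpu, &uc)) {
  case UCALL_SYNC:
          /* uc now holds the host-side copy of the guest's ucall. */
          break;
  case UCALL_DONE:
          break;
  }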

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Gonda <pgonda@google.com>

---
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../selftests/kvm/include/ucall_common.h      | 17 +++++++-
 .../testing/selftests/kvm/lib/aarch64/ucall.c | 36 ++++-------------
 .../selftests/kvm/lib/perf_test_util.c        |  2 +-
 tools/testing/selftests/kvm/lib/riscv/ucall.c | 40 ++++---------------
 tools/testing/selftests/kvm/lib/s390x/ucall.c | 37 ++++-------------
 .../testing/selftests/kvm/lib/ucall_common.c  | 39 ++++++++++++++++++
 .../testing/selftests/kvm/lib/x86_64/ucall.c  | 37 ++++-------------
 8 files changed, 84 insertions(+), 125 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/lib/ucall_common.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 4d6753aadfa0..61e85892dd9b 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -46,6 +46,7 @@ LIBKVM += lib/perf_test_util.c
 LIBKVM += lib/rbtree.c
 LIBKVM += lib/sparsebit.c
 LIBKVM += lib/test_util.c
+LIBKVM += lib/ucall_common.c
 
 LIBKVM_x86_64 += lib/x86_64/apic.c
 LIBKVM_x86_64 += lib/x86_64/handlers.S
diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
index 98562f685151..cb9b37282701 100644
--- a/tools/testing/selftests/kvm/include/ucall_common.h
+++ b/tools/testing/selftests/kvm/include/ucall_common.h
@@ -23,11 +23,24 @@ struct ucall {
 	uint64_t args[UCALL_MAX_ARGS];
 };
 
-void ucall_init(struct kvm_vm *vm, void *arg);
-void ucall_uninit(struct kvm_vm *vm);
+void ucall_arch_init(struct kvm_vm *vm, void *arg);
+void ucall_arch_uninit(struct kvm_vm *vm);
+void ucall_arch_do_ucall(vm_vaddr_t uc);
+void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
+
 void ucall(uint64_t cmd, int nargs, ...);
 uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);
 
+static inline void ucall_init(struct kvm_vm *vm, void *arg)
+{
+	ucall_arch_init(vm, arg);
+}
+
+static inline void ucall_uninit(struct kvm_vm *vm)
+{
+	ucall_arch_uninit(vm);
+}
+
 #define GUEST_SYNC_ARGS(stage, arg1, arg2, arg3, arg4)	\
 				ucall(UCALL_SYNC, 6, "hello", stage, arg1, arg2, arg3, arg4)
 #define GUEST_SYNC(stage)	ucall(UCALL_SYNC, 2, "hello", stage)
diff --git a/tools/testing/selftests/kvm/lib/aarch64/ucall.c b/tools/testing/selftests/kvm/lib/aarch64/ucall.c
index 0b949ee06b5e..9c124adbb560 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/ucall.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/ucall.c
@@ -21,7 +21,7 @@ static bool ucall_mmio_init(struct kvm_vm *vm, vm_paddr_t gpa)
 	return true;
 }
 
-void ucall_init(struct kvm_vm *vm, void *arg)
+void ucall_arch_init(struct kvm_vm *vm, void *arg)
 {
 	vm_paddr_t gpa, start, end, step, offset;
 	unsigned int bits;
@@ -64,37 +64,20 @@ void ucall_init(struct kvm_vm *vm, void *arg)
 	TEST_FAIL("Can't find a ucall mmio address");
 }
 
-void ucall_uninit(struct kvm_vm *vm)
+void ucall_arch_uninit(struct kvm_vm *vm)
 {
 	ucall_exit_mmio_addr = 0;
 	sync_global_to_guest(vm, ucall_exit_mmio_addr);
 }
 
-void ucall(uint64_t cmd, int nargs, ...)
+void ucall_arch_do_ucall(vm_vaddr_t uc)
 {
-	struct ucall uc = {
-		.cmd = cmd,
-	};
-	va_list va;
-	int i;
-
-	nargs = min(nargs, UCALL_MAX_ARGS);
-
-	va_start(va, nargs);
-	for (i = 0; i < nargs; ++i)
-		uc.args[i] = va_arg(va, uint64_t);
-	va_end(va);
-
-	*ucall_exit_mmio_addr = (vm_vaddr_t)&uc;
+	*ucall_exit_mmio_addr = uc;
 }
 
-uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
+void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	struct ucall ucall = {};
-
-	if (uc)
-		memset(uc, 0, sizeof(*uc));
 
 	if (run->exit_reason == KVM_EXIT_MMIO &&
 	    run->mmio.phys_addr == (uint64_t)ucall_exit_mmio_addr) {
@@ -103,12 +86,7 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
 		TEST_ASSERT(run->mmio.is_write && run->mmio.len == 8,
 			    "Unexpected ucall exit mmio address access");
 		memcpy(&gva, run->mmio.data, sizeof(gva));
-		memcpy(&ucall, addr_gva2hva(vcpu->vm, gva), sizeof(ucall));
-
-		vcpu_run_complete_io(vcpu);
-		if (uc)
-			memcpy(uc, &ucall, sizeof(ucall));
+		return addr_gva2hva(vcpu->vm, gva);
 	}
-
-	return ucall.cmd;
+	return NULL;
 }
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 9618b37c66f7..14f88f3ffa80 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -209,7 +209,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 		perf_test_setup_nested(vm, nr_vcpus, vcpus);
 	}
 
-	ucall_init(vm, NULL);
+	ucall_arch_init(vm, NULL);
 
 	/* Export the shared variables to the guest. */
 	sync_global_to_guest(vm, perf_test_args);
diff --git a/tools/testing/selftests/kvm/lib/riscv/ucall.c b/tools/testing/selftests/kvm/lib/riscv/ucall.c
index 087b9740bc8f..37e091d4366e 100644
--- a/tools/testing/selftests/kvm/lib/riscv/ucall.c
+++ b/tools/testing/selftests/kvm/lib/riscv/ucall.c
@@ -10,11 +10,11 @@
 #include "kvm_util.h"
 #include "processor.h"
 
-void ucall_init(struct kvm_vm *vm, void *arg)
+void ucall_arch_init(struct kvm_vm *vm, void *arg)
 {
 }
 
-void ucall_uninit(struct kvm_vm *vm)
+void ucall_arch_uninit(struct kvm_vm *vm)
 {
 }
 
@@ -44,47 +44,22 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
 	return ret;
 }
 
-void ucall(uint64_t cmd, int nargs, ...)
+void ucall_arch_do_ucall(vm_vaddr_t uc)
 {
-	struct ucall uc = {
-		.cmd = cmd,
-	};
-	va_list va;
-	int i;
-
-	nargs = min(nargs, UCALL_MAX_ARGS);
-
-	va_start(va, nargs);
-	for (i = 0; i < nargs; ++i)
-		uc.args[i] = va_arg(va, uint64_t);
-	va_end(va);
-
 	sbi_ecall(KVM_RISCV_SELFTESTS_SBI_EXT,
 		  KVM_RISCV_SELFTESTS_SBI_UCALL,
-		  (vm_vaddr_t)&uc, 0, 0, 0, 0, 0);
+		  uc, 0, 0, 0, 0, 0);
 }
 
-uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
+void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	struct ucall ucall = {};
-
-	if (uc)
-		memset(uc, 0, sizeof(*uc));
 
 	if (run->exit_reason == KVM_EXIT_RISCV_SBI &&
 	    run->riscv_sbi.extension_id == KVM_RISCV_SELFTESTS_SBI_EXT) {
 		switch (run->riscv_sbi.function_id) {
 		case KVM_RISCV_SELFTESTS_SBI_UCALL:
-			memcpy(&ucall,
-			       addr_gva2hva(vcpu->vm, run->riscv_sbi.args[0]),
-			       sizeof(ucall));
-
-			vcpu_run_complete_io(vcpu);
-			if (uc)
-				memcpy(uc, &ucall, sizeof(ucall));
-
-			break;
+			return addr_gva2hva(vcpu->vm, run->riscv_sbi.args[0]);
 		case KVM_RISCV_SELFTESTS_SBI_UNEXP:
 			vcpu_dump(stderr, vcpu, 2);
 			TEST_ASSERT(0, "Unexpected trap taken by guest");
@@ -93,6 +68,5 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
 			break;
 		}
 	}
-
-	return ucall.cmd;
+	return NULL;
 }
diff --git a/tools/testing/selftests/kvm/lib/s390x/ucall.c b/tools/testing/selftests/kvm/lib/s390x/ucall.c
index 73dc4e21190f..0f695a031d35 100644
--- a/tools/testing/selftests/kvm/lib/s390x/ucall.c
+++ b/tools/testing/selftests/kvm/lib/s390x/ucall.c
@@ -6,40 +6,23 @@
  */
 #include "kvm_util.h"
 
-void ucall_init(struct kvm_vm *vm, void *arg)
+void ucall_arch_init(struct kvm_vm *vm, void *arg)
 {
 }
 
-void ucall_uninit(struct kvm_vm *vm)
+void ucall_arch_uninit(struct kvm_vm *vm)
 {
 }
 
-void ucall(uint64_t cmd, int nargs, ...)
+void ucall_arch_do_ucall(vm_vaddr_t uc)
 {
-	struct ucall uc = {
-		.cmd = cmd,
-	};
-	va_list va;
-	int i;
-
-	nargs = min(nargs, UCALL_MAX_ARGS);
-
-	va_start(va, nargs);
-	for (i = 0; i < nargs; ++i)
-		uc.args[i] = va_arg(va, uint64_t);
-	va_end(va);
-
 	/* Exit via DIAGNOSE 0x501 (normally used for breakpoints) */
-	asm volatile ("diag 0,%0,0x501" : : "a"(&uc) : "memory");
+	asm volatile ("diag 0,%0,0x501" : : "a"(uc) : "memory");
 }
 
-uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
+void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	struct ucall ucall = {};
-
-	if (uc)
-		memset(uc, 0, sizeof(*uc));
 
 	if (run->exit_reason == KVM_EXIT_S390_SIEIC &&
 	    run->s390_sieic.icptcode == 4 &&
@@ -47,13 +30,7 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
 	    (run->s390_sieic.ipb >> 16) == 0x501) {
 		int reg = run->s390_sieic.ipa & 0xf;
 
-		memcpy(&ucall, addr_gva2hva(vcpu->vm, run->s.regs.gprs[reg]),
-		       sizeof(ucall));
-
-		vcpu_run_complete_io(vcpu);
-		if (uc)
-			memcpy(uc, &ucall, sizeof(ucall));
+		return addr_gva2hva(vcpu->vm, run->s.regs.gprs[reg]);
 	}
-
-	return ucall.cmd;
+	return NULL;
 }
diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
new file mode 100644
index 000000000000..c488ed23d0dd
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include "kvm_util.h"
+
+void ucall(uint64_t cmd, int nargs, ...)
+{
+	struct ucall uc = {
+		.cmd = cmd,
+	};
+	va_list va;
+	int i;
+
+	nargs = min(nargs, UCALL_MAX_ARGS);
+
+	va_start(va, nargs);
+	for (i = 0; i < nargs; ++i)
+		uc.args[i] = va_arg(va, uint64_t);
+	va_end(va);
+
+	ucall_arch_do_ucall((vm_vaddr_t)&uc);
+}
+
+uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
+{
+	struct ucall ucall;
+	void *addr;
+
+	if (!uc)
+		uc = &ucall;
+
+	addr = ucall_arch_get_ucall(vcpu);
+	if (addr) {
+		memcpy(uc, addr, sizeof(*uc));
+		vcpu_run_complete_io(vcpu);
+	} else {
+		memset(uc, 0, sizeof(*uc));
+	}
+
+	return uc->cmd;
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/ucall.c b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
index e5f0f9e0d3ee..ec53a406f689 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/ucall.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
@@ -8,52 +8,29 @@
 
 #define UCALL_PIO_PORT ((uint16_t)0x1000)
 
-void ucall_init(struct kvm_vm *vm, void *arg)
+void ucall_arch_init(struct kvm_vm *vm, void *arg)
 {
 }
 
-void ucall_uninit(struct kvm_vm *vm)
+void ucall_arch_uninit(struct kvm_vm *vm)
 {
 }
 
-void ucall(uint64_t cmd, int nargs, ...)
+void ucall_arch_do_ucall(vm_vaddr_t uc)
 {
-	struct ucall uc = {
-		.cmd = cmd,
-	};
-	va_list va;
-	int i;
-
-	nargs = min(nargs, UCALL_MAX_ARGS);
-
-	va_start(va, nargs);
-	for (i = 0; i < nargs; ++i)
-		uc.args[i] = va_arg(va, uint64_t);
-	va_end(va);
-
 	asm volatile("in %[port], %%al"
-		: : [port] "d" (UCALL_PIO_PORT), "D" (&uc) : "rax", "memory");
+		: : [port] "d" (UCALL_PIO_PORT), "D" (uc) : "rax", "memory");
 }
 
-uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
+void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
-	struct ucall ucall = {};
-
-	if (uc)
-		memset(uc, 0, sizeof(*uc));
 
 	if (run->exit_reason == KVM_EXIT_IO && run->io.port == UCALL_PIO_PORT) {
 		struct kvm_regs regs;
 
 		vcpu_regs_get(vcpu, &regs);
-		memcpy(&ucall, addr_gva2hva(vcpu->vm, (vm_vaddr_t)regs.rdi),
-		       sizeof(ucall));
-
-		vcpu_run_complete_io(vcpu);
-		if (uc)
-			memcpy(uc, &ucall, sizeof(ucall));
+		return addr_gva2hva(vcpu->vm, (vm_vaddr_t)regs.rdi);
 	}
-
-	return ucall.cmd;
+	return NULL;
 }
-- 
2.37.0.170.g444d1eabd0-goog


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [RFC V1 07/10] tools: Add atomic_test_and_set_bit()
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
                   ` (6 preceding siblings ...)
  2022-07-15 19:29 ` [RFC V1 06/10] KVM: selftests: Consolidate boilerplate code in get_ucall() Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-15 19:29 ` [RFC V1 08/10] KVM: selftests: Make ucall work with encrypted guests Peter Gonda
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

atomic_test_and_set_bit() allows for atomic bitmap usage from KVM
selftests.
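
A rough sketch of the intended usage, mirroring how a later patch in this
series claims a free slot in a shared bitmap (hdr/uc are the pool
structures added there; zero from the helper means the bit was previously
clear, i.e. the slot is now ours):

	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
		if (!atomic_test_and_set_bit(i, hdr->in_use)) {
			uc = &hdr->ucalls[i];
			break;
		}
	}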

Signed-off-by: Peter Gonda <pgonda@google.com>
Suggested-by: Sean Christopherson <seanjc@google.com>

---
 tools/arch/x86/include/asm/atomic.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/arch/x86/include/asm/atomic.h b/tools/arch/x86/include/asm/atomic.h
index 1f5e26aae9fc..45ea15c98957 100644
--- a/tools/arch/x86/include/asm/atomic.h
+++ b/tools/arch/x86/include/asm/atomic.h
@@ -8,6 +8,7 @@
 
 #define LOCK_PREFIX "\n\tlock; "
 
+#include <asm/asm.h>
 #include <asm/cmpxchg.h>
 
 /*
@@ -70,4 +71,10 @@ static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 	return cmpxchg(&v->counter, old, new);
 }
 
+static inline int atomic_test_and_set_bit(long nr, volatile unsigned long *addr)
+{
+	GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(bts), *addr, "Ir", nr, "%0", "c");
+
+}
+
 #endif /* _TOOLS_LINUX_ASM_X86_ATOMIC_H */
-- 
2.37.0.170.g444d1eabd0-goog


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [RFC V1 08/10] KVM: selftests: Make ucall work with encrypted guests
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
                   ` (7 preceding siblings ...)
  2022-07-15 19:29 ` [RFC V1 07/10] tools: Add atomic_test_and_set_bit() Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-19 15:43   ` Andrew Jones
  2022-07-15 19:29 ` [RFC V1 09/10] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
  2022-07-15 19:29 ` [RFC V1 10/10] KVM: selftests: Add simple sev vm testing Peter Gonda
  10 siblings, 1 reply; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

Add support for encrypted, SEV, guests in the ucall framework. If
encryption is enabled set up a pool of ucall structs in the guests'
shared memory region. This was suggested in the thread on "[RFC PATCH
00/10] KVM: selftests: Add support for test-selectable ucall
implementations". Using a listed as suggested there doesn't work well
because the list is setup using HVAs not GVAs so use a bitmap + array
solution instead to get the same pool result.
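
Concretely, the shared region holds a small header with one slot per
vCPU, roughly (the patch below has the full version):

	struct ucall_header {
		DECLARE_BITMAP(in_use, KVM_MAX_VCPUS);
		struct ucall ucalls[KVM_MAX_VCPUS];
	};

A guest vCPU claims a free slot with atomic_test_and_set_bit(), fills it
in, passes the slot's host-visible address to ucall_arch_do_ucall(), and
clears the bit again once the ucall has been made.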

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Peter Gonda <pgonda@google.com>

---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/kvm_util_base.h     |  30 +++--
 .../selftests/kvm/include/ucall_common.h      |  14 +--
 .../testing/selftests/kvm/lib/ucall_common.c  | 115 ++++++++++++++++--
 .../testing/selftests/kvm/lib/x86_64/ucall.c  |   2 +-
 5 files changed, 134 insertions(+), 28 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 61e85892dd9b..3d9f2a017389 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -47,6 +47,7 @@ LIBKVM += lib/rbtree.c
 LIBKVM += lib/sparsebit.c
 LIBKVM += lib/test_util.c
 LIBKVM += lib/ucall_common.c
+LIBKVM += $(top_srcdir)/tools/lib/find_bit.c
 
 LIBKVM_x86_64 += lib/x86_64/apic.c
 LIBKVM_x86_64 += lib/x86_64/handlers.S
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 60b604ac9fa9..77aff2356d64 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -65,6 +65,7 @@ struct userspace_mem_regions {
 /* Memory encryption policy/configuration. */
 struct vm_memcrypt {
 	bool enabled;
+	bool encrypted;
 	int8_t enc_by_default;
 	bool has_enc_bit;
 	int8_t enc_bit;
@@ -99,6 +100,8 @@ struct kvm_vm {
 	int stats_fd;
 	struct kvm_stats_header stats_header;
 	struct kvm_stats_desc *stats_desc;
+
+	struct list_head ucall_list;
 };
 
 
@@ -141,21 +144,21 @@ enum vm_guest_mode {
 
 extern enum vm_guest_mode vm_mode_default;
 
-#define VM_MODE_DEFAULT			vm_mode_default
-#define MIN_PAGE_SHIFT			12U
-#define ptes_per_page(page_size)	((page_size) / 8)
+#define VM_MODE_DEFAULT            vm_mode_default
+#define MIN_PAGE_SHIFT            12U
+#define ptes_per_page(page_size)    ((page_size) / 8)
 
 #elif defined(__x86_64__)
 
-#define VM_MODE_DEFAULT			VM_MODE_PXXV48_4K
-#define MIN_PAGE_SHIFT			12U
-#define ptes_per_page(page_size)	((page_size) / 8)
+#define VM_MODE_DEFAULT            VM_MODE_PXXV48_4K
+#define MIN_PAGE_SHIFT            12U
+#define ptes_per_page(page_size)    ((page_size) / 8)
 
 #elif defined(__s390x__)
 
-#define VM_MODE_DEFAULT			VM_MODE_P44V64_4K
-#define MIN_PAGE_SHIFT			12U
-#define ptes_per_page(page_size)	((page_size) / 16)
+#define VM_MODE_DEFAULT            VM_MODE_P44V64_4K
+#define MIN_PAGE_SHIFT            12U
+#define ptes_per_page(page_size)    ((page_size) / 16)
 
 #elif defined(__riscv)
 
@@ -163,9 +166,9 @@ extern enum vm_guest_mode vm_mode_default;
 #error "RISC-V 32-bit kvm selftests not supported"
 #endif
 
-#define VM_MODE_DEFAULT			VM_MODE_P40V48_4K
-#define MIN_PAGE_SHIFT			12U
-#define ptes_per_page(page_size)	((page_size) / 8)
+#define VM_MODE_DEFAULT            VM_MODE_P40V48_4K
+#define MIN_PAGE_SHIFT            12U
+#define ptes_per_page(page_size)    ((page_size) / 8)
 
 #endif
 
@@ -802,6 +805,9 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
 
 static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 {
+	TEST_ASSERT(
+		!vm->memcrypt.encrypted,
+		"Encrypted guests have their page tables encrypted so gva2* conversions are not possible.");
 	return addr_arch_gva2gpa(vm, gva);
 }
 
diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
index cb9b37282701..d8ac16a68c0a 100644
--- a/tools/testing/selftests/kvm/include/ucall_common.h
+++ b/tools/testing/selftests/kvm/include/ucall_common.h
@@ -21,6 +21,10 @@ enum {
 struct ucall {
 	uint64_t cmd;
 	uint64_t args[UCALL_MAX_ARGS];
+
+	/* For encrypted guests. */
+	uint64_t idx;
+	struct ucall *hva;
 };
 
 void ucall_arch_init(struct kvm_vm *vm, void *arg);
@@ -31,15 +35,9 @@ void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
 void ucall(uint64_t cmd, int nargs, ...);
 uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);
 
-static inline void ucall_init(struct kvm_vm *vm, void *arg)
-{
-	ucall_arch_init(vm, arg);
-}
+void ucall_init(struct kvm_vm *vm, void *arg);
 
-static inline void ucall_uninit(struct kvm_vm *vm)
-{
-	ucall_arch_uninit(vm);
-}
+void ucall_uninit(struct kvm_vm *vm);
 
 #define GUEST_SYNC_ARGS(stage, arg1, arg2, arg3, arg4)	\
 				ucall(UCALL_SYNC, 6, "hello", stage, arg1, arg2, arg3, arg4)
diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index c488ed23d0dd..8e660b10f9b2 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -1,22 +1,123 @@
 // SPDX-License-Identifier: GPL-2.0-only
 #include "kvm_util.h"
+#include "linux/types.h"
+#include "linux/bitmap.h"
+#include "linux/bitops.h"
+#include "linux/atomic.h"
+
+struct ucall_header {
+	DECLARE_BITMAP(in_use, KVM_MAX_VCPUS);
+	struct ucall ucalls[KVM_MAX_VCPUS];
+};
+
+static bool encrypted_guest;
+static struct ucall_header *ucall_hdr;
+
+void ucall_init(struct kvm_vm *vm, void *arg)
+{
+	struct ucall *uc;
+	struct ucall_header *hdr;
+	vm_vaddr_t vaddr;
+	int i;
+
+	encrypted_guest = vm->memcrypt.enabled;
+	sync_global_to_guest(vm, encrypted_guest);
+	if (!encrypted_guest)
+		goto out;
+
+	TEST_ASSERT(!ucall_hdr,
+		    "Only a single encrypted guest at a time for ucalls.");
+	vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), vm->page_size);
+	hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
+	memset(hdr, 0, sizeof(*hdr));
+
+	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
+		uc = &hdr->ucalls[i];
+		uc->hva = uc;
+		uc->idx = i;
+	}
+
+	ucall_hdr = (struct ucall_header *)vaddr;
+	sync_global_to_guest(vm, ucall_hdr);
+
+out:
+	ucall_arch_init(vm, arg);
+}
+
+void ucall_uninit(struct kvm_vm *vm)
+{
+	encrypted_guest = false;
+	ucall_hdr = NULL;
+
+	ucall_arch_uninit(vm);
+}
+
+static struct ucall *ucall_alloc(void)
+{
+	struct ucall *uc = NULL;
+	int i;
+
+	if (!encrypted_guest)
+		goto out;
+
+	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
+		if (atomic_test_and_set_bit(i, ucall_hdr->in_use))
+			continue;
+
+		uc = &ucall_hdr->ucalls[i];
+	}
+
+out:
+	return uc;
+}
+
+static void ucall_free(struct ucall *uc)
+{
+	if (!encrypted_guest)
+		return;
+
+	clear_bit(uc->idx, ucall_hdr->in_use);
+}
+
+static vm_vaddr_t get_ucall_addr(struct ucall *uc)
+{
+	if (encrypted_guest)
+		return (vm_vaddr_t)uc->hva;
+
+	return (vm_vaddr_t)uc;
+}
 
 void ucall(uint64_t cmd, int nargs, ...)
 {
-	struct ucall uc = {
-		.cmd = cmd,
-	};
+	struct ucall *uc;
+	struct ucall tmp;
 	va_list va;
 	int i;
 
+	uc = ucall_alloc();
+	if (!uc)
+		uc = &tmp;
+
+	uc->cmd = cmd;
+
 	nargs = min(nargs, UCALL_MAX_ARGS);
 
 	va_start(va, nargs);
 	for (i = 0; i < nargs; ++i)
-		uc.args[i] = va_arg(va, uint64_t);
+		uc->args[i] = va_arg(va, uint64_t);
 	va_end(va);
 
-	ucall_arch_do_ucall((vm_vaddr_t)&uc);
+	ucall_arch_do_ucall(get_ucall_addr(uc));
+
+	ucall_free(uc);
+}
+
+void *get_ucall_hva(struct kvm_vm *vm, void *uc)
+{
+	if (encrypted_guest)
+		return uc;
+
+	return addr_gva2hva(vm, (vm_vaddr_t)uc);
 }
 
 uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
@@ -27,9 +128,9 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
 	if (!uc)
 		uc = &ucall;
 
-	addr = ucall_arch_get_ucall(vcpu);
+	addr = get_ucall_hva(vcpu->vm, ucall_arch_get_ucall(vcpu));
 	if (addr) {
-		memcpy(uc, addr, sizeof(*uc));
+		memcpy(uc, addr, sizeof(struct ucall));
 		vcpu_run_complete_io(vcpu);
 	} else {
 		memset(uc, 0, sizeof(*uc));
diff --git a/tools/testing/selftests/kvm/lib/x86_64/ucall.c b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
index ec53a406f689..ea6b2e3a8e39 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/ucall.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
@@ -30,7 +30,7 @@ void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
 		struct kvm_regs regs;
 
 		vcpu_regs_get(vcpu, &regs);
-		return addr_gva2hva(vcpu->vm, (vm_vaddr_t)regs.rdi);
+		return (void *)regs.rdi;
 	}
 	return NULL;
 }
-- 
2.37.0.170.g444d1eabd0-goog


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [RFC V1 09/10] KVM: selftests: add library for creating/interacting with SEV guests
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
                   ` (8 preceding siblings ...)
  2022-07-15 19:29 ` [RFC V1 08/10] KVM: selftests: Make ucall work with encrypted guests Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  2022-07-15 19:29 ` [RFC V1 10/10] KVM: selftests: Add simple sev vm testing Peter Gonda
  10 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

Add interfaces to allow tests to create/manage SEV guests. The
additional state associated with these guests is encapsulated in a new
struct sev_vm, which is a light wrapper around struct kvm_vm. These
VMs will use vm_set_memory_encryption() and vm_get_encrypted_phy_pages()
under the covers to configure and sync up with the core kvm_util
library on what should/shouldn't be treated as encrypted memory.
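
The expected call flow from a test is roughly as follows (this mirrors
the simple boot test added later in the series):

	sev = sev_vm_create(policy, npages);	/* KVM_SEV_INIT, memslot 0 */
	vcpu = vm_vcpu_add(sev_get_vm(sev), 0, guest_code);
	kvm_vm_elf_load(sev_get_vm(sev), program_invocation_name);

	sev_vm_launch(sev);			/* LAUNCH_START + LAUNCH_UPDATE_DATA */
	sev_vm_launch_measure(sev, measurement);	/* LAUNCH_MEASURE */
	sev_vm_launch_finish(sev);		/* LAUNCH_FINISH */

	/* vcpu_run()/get_ucall() as usual, then sev_vm_free(sev). */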

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>

---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/sev.h        |  44 +++
 tools/testing/selftests/kvm/lib/x86_64/sev.c  | 251 ++++++++++++++++++
 3 files changed, 296 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 3d9f2a017389..f3f29a64e4b2 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -56,6 +56,7 @@ LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
 LIBKVM_x86_64 += lib/x86_64/vmx.c
+LIBKVM_x86_64 += lib/x86_64/sev.c
 
 LIBKVM_aarch64 += lib/aarch64/gic.c
 LIBKVM_aarch64 += lib/aarch64/gic_v3.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
new file mode 100644
index 000000000000..2f7f7c741b12
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Helpers used for SEV guests
+ *
+ * Copyright (C) 2021 Advanced Micro Devices
+ */
+#ifndef SELFTEST_KVM_SEV_H
+#define SELFTEST_KVM_SEV_H
+
+#include <stdint.h>
+#include <stdbool.h>
+#include "kvm_util.h"
+
+/* Makefile might set this separately for user-overrides */
+#ifndef SEV_DEV_PATH
+#define SEV_DEV_PATH		"/dev/sev"
+#endif
+
+#define SEV_FW_REQ_VER_MAJOR	0
+#define SEV_FW_REQ_VER_MINOR	17
+
+#define SEV_POLICY_NO_DBG	(1UL << 0)
+#define SEV_POLICY_ES		(1UL << 2)
+
+enum {
+	SEV_GSTATE_UNINIT = 0,
+	SEV_GSTATE_LUPDATE,
+	SEV_GSTATE_LSECRET,
+	SEV_GSTATE_RUNNING,
+};
+
+struct sev_vm;
+
+void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data);
+struct kvm_vm *sev_get_vm(struct sev_vm *sev);
+uint8_t sev_get_enc_bit(struct sev_vm *sev);
+
+struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages);
+void sev_vm_free(struct sev_vm *sev);
+void sev_vm_launch(struct sev_vm *sev);
+void sev_vm_launch_measure(struct sev_vm *sev, uint8_t *measurement);
+void sev_vm_launch_finish(struct sev_vm *sev);
+
+#endif /* SELFTEST_KVM_SEV_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
new file mode 100644
index 000000000000..309467cd7501
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -0,0 +1,251 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Helpers used for SEV guests
+ *
+ * Copyright (C) 2021 Advanced Micro Devices
+ */
+
+#include <stdint.h>
+#include <stdbool.h>
+#include "kvm_util.h"
+#include "linux/psp-sev.h"
+#include "processor.h"
+#include "sev.h"
+
+#define PAGE_SHIFT		12
+
+struct sev_vm {
+	struct kvm_vm *vm;
+	int fd;
+	int enc_bit;
+	uint32_t sev_policy;
+};
+
+/* Common SEV helpers/accessors. */
+
+struct kvm_vm *sev_get_vm(struct sev_vm *sev)
+{
+	return sev->vm;
+}
+
+uint8_t sev_get_enc_bit(struct sev_vm *sev)
+{
+	return sev->enc_bit;
+}
+
+void sev_ioctl(int sev_fd, int cmd, void *data)
+{
+	int ret;
+	struct sev_issue_cmd arg;
+
+	arg.cmd = cmd;
+	arg.data = (unsigned long)data;
+	ret = ioctl(sev_fd, SEV_ISSUE_CMD, &arg);
+	TEST_ASSERT(ret == 0,
+		    "SEV ioctl %d failed, error: %d, fw_error: %d",
+		    cmd, ret, arg.error);
+}
+
+void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data)
+{
+	struct kvm_sev_cmd arg = {0};
+	int ret;
+
+	arg.id = cmd;
+	arg.sev_fd = sev->fd;
+	arg.data = (__u64)data;
+
+	ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_OP, &arg);
+	TEST_ASSERT(ret == 0,
+		    "SEV KVM ioctl %d failed, rc: %i errno: %i (%s), fw_error: %d",
+		    cmd, ret, errno, strerror(errno), arg.error);
+}
+
+/* Local helpers. */
+
+static void
+sev_register_user_region(struct sev_vm *sev, void *hva, uint64_t size)
+{
+	struct kvm_enc_region range = {0};
+	int ret;
+
+	pr_debug("%s: hva: %p, size: %lu\n", __func__, hva, size);
+
+	range.addr = (__u64)hva;
+	range.size = size;
+
+	ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
+	TEST_ASSERT(ret == 0, "failed to register user range, errno: %i\n", errno);
+}
+
+static void
+sev_encrypt_phy_range(struct sev_vm *sev, vm_paddr_t gpa, uint64_t size)
+{
+	struct kvm_sev_launch_update_data ksev_update_data = {0};
+
+	pr_debug("%s: addr: 0x%lx, size: %lu\n", __func__, gpa, size);
+
+	ksev_update_data.uaddr = (__u64)addr_gpa2hva(sev->vm, gpa);
+	ksev_update_data.len = size;
+
+	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_UPDATE_DATA, &ksev_update_data);
+}
+
+static void sev_encrypt(struct sev_vm *sev)
+{
+	const struct sparsebit *enc_phy_pages;
+	struct kvm_vm *vm = sev->vm;
+	sparsebit_idx_t pg = 0;
+	vm_paddr_t gpa_start;
+	uint64_t memory_size;
+
+	/* Only memslot 0 supported for now. */
+	enc_phy_pages = vm_get_encrypted_phy_pages(sev->vm, 0, &gpa_start, &memory_size);
+	TEST_ASSERT(enc_phy_pages, "Unable to retrieve encrypted pages bitmap");
+	while (pg < (memory_size / vm->page_size)) {
+		sparsebit_idx_t pg_cnt;
+
+		if (sparsebit_is_clear(enc_phy_pages, pg)) {
+			pg = sparsebit_next_set(enc_phy_pages, pg);
+			if (!pg)
+				break;
+		}
+
+		pg_cnt = sparsebit_next_clear(enc_phy_pages, pg) - pg;
+		if (pg_cnt <= 0)
+			pg_cnt = 1;
+
+		sev_encrypt_phy_range(sev,
+				      gpa_start + pg * vm->page_size,
+				      pg_cnt * vm->page_size);
+		pg += pg_cnt;
+	}
+
+	sev->vm->memcrypt.encrypted = true;
+}
+
+/* SEV VM implementation. */
+
+static struct sev_vm *sev_vm_alloc(struct kvm_vm *vm)
+{
+	struct sev_user_data_status sev_status = {0};
+	uint32_t eax, ebx, ecx, edx;
+	struct sev_vm *sev;
+	int sev_fd;
+
+	sev_fd = open(SEV_DEV_PATH, O_RDWR);
+	if (sev_fd < 0) {
+		pr_info("Failed to open SEV device, path: %s, error: %d, skipping test.\n",
+			SEV_DEV_PATH, sev_fd);
+		return NULL;
+	}
+
+	sev_ioctl(sev_fd, SEV_PLATFORM_STATUS, &sev_status);
+
+	if (!(sev_status.api_major > SEV_FW_REQ_VER_MAJOR ||
+	      (sev_status.api_major == SEV_FW_REQ_VER_MAJOR &&
+	       sev_status.api_minor >= SEV_FW_REQ_VER_MINOR))) {
+		pr_info("SEV FW version too old. Have API %d.%d (build: %d), need %d.%d, skipping test.\n",
+			sev_status.api_major, sev_status.api_minor, sev_status.build,
+			SEV_FW_REQ_VER_MAJOR, SEV_FW_REQ_VER_MINOR);
+		return NULL;
+	}
+
+	sev = calloc(1, sizeof(*sev));
+	sev->fd = sev_fd;
+	sev->vm = vm;
+
+	/* Get encryption bit via CPUID. */
+	eax = 0x8000001f;
+	ecx = 0;
+	cpuid(&eax, &ebx, &ecx, &edx);
+	sev->enc_bit = ebx & 0x3F;
+
+	return sev;
+}
+
+void sev_vm_free(struct sev_vm *sev)
+{
+	ucall_uninit(sev->vm);
+	kvm_vm_free(sev->vm);
+	close(sev->fd);
+	free(sev);
+}
+
+struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
+{
+	struct sev_vm *sev;
+	struct kvm_vm *vm;
+
+	/* Need to handle memslots after init, and after setting memcrypt. */
+	vm = vm_create_barebones();
+	sev = sev_vm_alloc(vm);
+	if (!sev)
+		return NULL;
+	sev->sev_policy = policy;
+
+	kvm_sev_ioctl(sev, KVM_SEV_INIT, NULL);
+
+	vm->vpages_mapped = sparsebit_alloc();
+	vm_set_memory_encryption(vm, true, true, sev->enc_bit);
+	pr_info("SEV cbit: %d\n", sev->enc_bit);
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages, 0);
+	sev_register_user_region(sev, addr_gpa2hva(vm, 0),
+				 npages * vm->page_size);
+
+	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n",
+		sev->sev_policy, npages * vm->page_size / 1024);
+
+	return sev;
+}
+
+void sev_vm_launch(struct sev_vm *sev)
+{
+	struct kvm_sev_launch_start ksev_launch_start = {0};
+	struct kvm_sev_guest_status ksev_status = {0};
+
+	/* Need to use ucall_shared for synchronization. */
+	//ucall_init_ops(sev_get_vm(sev), NULL, &ucall_ops_halt);
+
+	ksev_launch_start.policy = sev->sev_policy;
+	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_START, &ksev_launch_start);
+	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.policy == sev->sev_policy, "Incorrect guest policy.");
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE,
+		    "Unexpected guest state: %d", ksev_status.state);
+
+	ucall_init(sev->vm, NULL);
+
+	sev_encrypt(sev);
+}
+
+void sev_vm_launch_measure(struct sev_vm *sev, uint8_t *measurement)
+{
+	struct kvm_sev_launch_measure ksev_launch_measure = {0};
+	struct kvm_sev_guest_status ksev_guest_status = {0};
+
+	ksev_launch_measure.len = 256;
+	ksev_launch_measure.uaddr = (__u64)measurement;
+	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_MEASURE, &ksev_launch_measure);
+
+	/* Measurement causes a state transition, check that. */
+	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_guest_status);
+	TEST_ASSERT(ksev_guest_status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", ksev_guest_status.state);
+}
+
+void sev_vm_launch_finish(struct sev_vm *sev)
+{
+	struct kvm_sev_guest_status ksev_status = {0};
+
+	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE ||
+		    ksev_status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", ksev_status.state);
+
+	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_FINISH, NULL);
+
+	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_RUNNING,
+		    "Unexpected guest state: %d", ksev_status.state);
+}
-- 
2.37.0.170.g444d1eabd0-goog


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [RFC V1 10/10] KVM: selftests: Add simple sev vm testing
  2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
                   ` (9 preceding siblings ...)
  2022-07-15 19:29 ` [RFC V1 09/10] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
@ 2022-07-15 19:29 ` Peter Gonda
  10 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-15 19:29 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

A very simple test of booting SEV guests that checks related CPUID bits.
This is a stripped-down, much simpler version of "[PATCH v2 08/13] KVM:
selftests: add SEV boot tests" from Michael.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>

---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/sev_all_boot_test.c  | 134 ++++++++++++++++++
 3 files changed, 136 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index f44ebf401310..ec084a61a819 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -34,6 +34,7 @@
 /x86_64/pmu_event_filter_test
 /x86_64/set_boot_cpu_id
 /x86_64/set_sregs_test
+/x86_64/sev_all_boot_test
 /x86_64/sev_migrate_tests
 /x86_64/smm_test
 /x86_64/state_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index f3f29a64e4b2..2b89e6bcb5b0 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -123,6 +123,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_caps_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_all_boot_test
 TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
 TEST_GEN_PROGS_x86_64 += x86_64/amx_test
 TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
new file mode 100644
index 000000000000..222ce41d6f68
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
@@ -0,0 +1,134 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Basic SEV boot tests.
+ *
+ * Copyright (C) 2021 Advanced Micro Devices
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "linux/psp-sev.h"
+#include "sev.h"
+
+#define VCPU_ID			2
+#define PAGE_STRIDE		32
+
+#define SHARED_PAGES		8192
+#define SHARED_VADDR_MIN	0x1000000
+
+#define PRIVATE_PAGES		2048
+#define PRIVATE_VADDR_MIN	(SHARED_VADDR_MIN + SHARED_PAGES * PAGE_SIZE)
+
+#define TOTAL_PAGES		(512 + SHARED_PAGES + PRIVATE_PAGES)
+
+#define NR_SYNCS 1
+
+static void guest_run_loop(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+	int i;
+
+	for (i = 0; i <= NR_SYNCS; ++i) {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_SYNC:
+			continue;
+		case UCALL_DONE:
+			return;
+		case UCALL_ABORT:
+			TEST_ASSERT(false, "%s at %s:%ld\n\tvalues: %#lx, %#lx",
+				    (const char *)uc.args[0], __FILE__,
+				    uc.args[1], uc.args[2], uc.args[3]);
+		default:
+			TEST_ASSERT(
+				false, "Unexpected exit: %s",
+				exit_reason_str(vcpu->run->exit_reason));
+		}
+	}
+}
+
+static void __attribute__((__flatten__))
+guest_sev_code(void)
+{
+	uint32_t eax, ebx, ecx, edx;
+	uint64_t sev_status;
+
+	GUEST_SYNC(1);
+
+	eax = 0x8000001f;
+	ecx = 0;
+	cpuid(&eax, &ebx, &ecx, &edx);
+	GUEST_ASSERT(eax & (1 << 1));
+
+	sev_status = rdmsr(MSR_AMD64_SEV);
+	GUEST_ASSERT((sev_status & 0x1) == 1);
+
+	GUEST_DONE();
+}
+
+static struct sev_vm *
+setup_test_common(void *guest_code, uint64_t policy, struct kvm_vcpu **vcpu)
+{
+	uint8_t measurement[512];
+	struct sev_vm *sev;
+	struct kvm_vm *vm;
+	int i;
+
+	sev = sev_vm_create(policy, TOTAL_PAGES);
+	if (!sev)
+		return NULL;
+	vm = sev_get_vm(sev);
+
+	/* Set up VCPU and initial guest kernel. */
+	*vcpu = vm_vcpu_add(vm, VCPU_ID, guest_code);
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	/* Allocations/setup done. Encrypt initial guest payload. */
+	sev_vm_launch(sev);
+
+	/* Dump the initial measurement. A test to actually verify it would be nice. */
+	sev_vm_launch_measure(sev, measurement);
+	pr_info("guest measurement: ");
+	for (i = 0; i < 32; ++i)
+		pr_info("%02x", measurement[i]);
+	pr_info("\n");
+
+	sev_vm_launch_finish(sev);
+
+	return sev;
+}
+
+static void test_sev(void *guest_code, uint64_t policy)
+{
+	struct sev_vm *sev;
+	struct kvm_vcpu *vcpu;
+
+	sev = setup_test_common(guest_code, policy, &vcpu);
+	if (!sev)
+		return;
+
+	/* Guest is ready to run. Do the tests. */
+	guest_run_loop(vcpu);
+
+	pr_info("guest ran successfully\n");
+
+	sev_vm_free(sev);
+}
+
+int main(int argc, char *argv[])
+{
+	/* SEV tests */
+	test_sev(guest_sev_code, SEV_POLICY_NO_DBG);
+	test_sev(guest_sev_code, 0);
+
+	return 0;
+}
-- 
2.37.0.170.g444d1eabd0-goog


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [RFC V1 06/10] KVM: selftests: Consolidate boilerplate code in get_ucall()
  2022-07-15 19:29 ` [RFC V1 06/10] KVM: selftests: Consolidate boilerplate code in get_ucall() Peter Gonda
@ 2022-07-19 14:42   ` Andrew Jones
  0 siblings, 0 replies; 17+ messages in thread
From: Andrew Jones @ 2022-07-19 14:42 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, seanjc, michael.roth,
	thomas.lendacky, joro, mizhang, pbonzini

On Fri, Jul 15, 2022 at 12:29:52PM -0700, Peter Gonda wrote:
> From: Sean Christopherson <seanjc@google.com>
> 
> Consolidate the actual copying of a ucall struct from guest=>host into
> the common get_ucall().  Return a host virtual address instead of a guest
> virtual address even though the addr_gva2hva() part could be moved to
> get_ucall() too.  Conceptually, get_ucall() is invoked from the host and
> should return a host virtual address (and returning NULL for "nothing to
> see here" is far superior to returning 0).
> 
> Use pointer shenanigans instead of an unnecessary bounce buffer when the
> caller of get_ucall() provides a valid pointer.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Peter Gonda <pgonda@google.com>
> 
> ---
>  tools/testing/selftests/kvm/Makefile          |  1 +
>  .../selftests/kvm/include/ucall_common.h      | 17 +++++++-
>  .../testing/selftests/kvm/lib/aarch64/ucall.c | 36 ++++-------------
>  .../selftests/kvm/lib/perf_test_util.c        |  2 +-
>  tools/testing/selftests/kvm/lib/riscv/ucall.c | 40 ++++---------------
>  tools/testing/selftests/kvm/lib/s390x/ucall.c | 37 ++++-------------
>  .../testing/selftests/kvm/lib/ucall_common.c  | 39 ++++++++++++++++++
>  .../testing/selftests/kvm/lib/x86_64/ucall.c  | 37 ++++-------------
>  8 files changed, 84 insertions(+), 125 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/lib/ucall_common.c
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 4d6753aadfa0..61e85892dd9b 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -46,6 +46,7 @@ LIBKVM += lib/perf_test_util.c
>  LIBKVM += lib/rbtree.c
>  LIBKVM += lib/sparsebit.c
>  LIBKVM += lib/test_util.c
> +LIBKVM += lib/ucall_common.c
>  
>  LIBKVM_x86_64 += lib/x86_64/apic.c
>  LIBKVM_x86_64 += lib/x86_64/handlers.S
> diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
> index 98562f685151..cb9b37282701 100644
> --- a/tools/testing/selftests/kvm/include/ucall_common.h
> +++ b/tools/testing/selftests/kvm/include/ucall_common.h
> @@ -23,11 +23,24 @@ struct ucall {
>  	uint64_t args[UCALL_MAX_ARGS];
>  };
>  
> -void ucall_init(struct kvm_vm *vm, void *arg);
> -void ucall_uninit(struct kvm_vm *vm);
> +void ucall_arch_init(struct kvm_vm *vm, void *arg);
> +void ucall_arch_uninit(struct kvm_vm *vm);
> +void ucall_arch_do_ucall(vm_vaddr_t uc);
> +void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
> +
>  void ucall(uint64_t cmd, int nargs, ...);
>  uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);
>  
> +static inline void ucall_init(struct kvm_vm *vm, void *arg)
> +{
> +	ucall_arch_init(vm, arg);
> +}
> +
> +static inline void ucall_uninit(struct kvm_vm *vm)
> +{
> +	ucall_arch_uninit(vm);
> +}
> +
>  #define GUEST_SYNC_ARGS(stage, arg1, arg2, arg3, arg4)	\
>  				ucall(UCALL_SYNC, 6, "hello", stage, arg1, arg2, arg3, arg4)
>  #define GUEST_SYNC(stage)	ucall(UCALL_SYNC, 2, "hello", stage)
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/ucall.c b/tools/testing/selftests/kvm/lib/aarch64/ucall.c
> index 0b949ee06b5e..9c124adbb560 100644
> --- a/tools/testing/selftests/kvm/lib/aarch64/ucall.c
> +++ b/tools/testing/selftests/kvm/lib/aarch64/ucall.c
> @@ -21,7 +21,7 @@ static bool ucall_mmio_init(struct kvm_vm *vm, vm_paddr_t gpa)
>  	return true;
>  }
>  
> -void ucall_init(struct kvm_vm *vm, void *arg)
> +void ucall_arch_init(struct kvm_vm *vm, void *arg)
>  {
>  	vm_paddr_t gpa, start, end, step, offset;
>  	unsigned int bits;
> @@ -64,37 +64,20 @@ void ucall_init(struct kvm_vm *vm, void *arg)
>  	TEST_FAIL("Can't find a ucall mmio address");
>  }
>  
> -void ucall_uninit(struct kvm_vm *vm)
> +void ucall_arch_uninit(struct kvm_vm *vm)
>  {
>  	ucall_exit_mmio_addr = 0;
>  	sync_global_to_guest(vm, ucall_exit_mmio_addr);
>  }
>  
> -void ucall(uint64_t cmd, int nargs, ...)
> +void ucall_arch_do_ucall(vm_vaddr_t uc)
>  {
> -	struct ucall uc = {
> -		.cmd = cmd,
> -	};
> -	va_list va;
> -	int i;
> -
> -	nargs = min(nargs, UCALL_MAX_ARGS);
> -
> -	va_start(va, nargs);
> -	for (i = 0; i < nargs; ++i)
> -		uc.args[i] = va_arg(va, uint64_t);
> -	va_end(va);
> -
> -	*ucall_exit_mmio_addr = (vm_vaddr_t)&uc;
> +	*ucall_exit_mmio_addr = uc;
>  }
>  
> -uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
> +void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_run *run = vcpu->run;
> -	struct ucall ucall = {};
> -
> -	if (uc)
> -		memset(uc, 0, sizeof(*uc));
>  
>  	if (run->exit_reason == KVM_EXIT_MMIO &&
>  	    run->mmio.phys_addr == (uint64_t)ucall_exit_mmio_addr) {
> @@ -103,12 +86,7 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
>  		TEST_ASSERT(run->mmio.is_write && run->mmio.len == 8,
>  			    "Unexpected ucall exit mmio address access");
>  		memcpy(&gva, run->mmio.data, sizeof(gva));
> -		memcpy(&ucall, addr_gva2hva(vcpu->vm, gva), sizeof(ucall));
> -
> -		vcpu_run_complete_io(vcpu);
> -		if (uc)
> -			memcpy(uc, &ucall, sizeof(ucall));
> +		return addr_gva2hva(vcpu->vm, gva);
>  	}
> -
> -	return ucall.cmd;
> +	return NULL;
>  }
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 9618b37c66f7..14f88f3ffa80 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -209,7 +209,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
>  		perf_test_setup_nested(vm, nr_vcpus, vcpus);
>  	}
>  
> -	ucall_init(vm, NULL);
> +	ucall_arch_init(vm, NULL);

This change shouldn't be necessary since we have the ucall_init() wrapper
in ucall_common.h

>  
>  	/* Export the shared variables to the guest. */
>  	sync_global_to_guest(vm, perf_test_args);
> diff --git a/tools/testing/selftests/kvm/lib/riscv/ucall.c b/tools/testing/selftests/kvm/lib/riscv/ucall.c
> index 087b9740bc8f..37e091d4366e 100644
> --- a/tools/testing/selftests/kvm/lib/riscv/ucall.c
> +++ b/tools/testing/selftests/kvm/lib/riscv/ucall.c
> @@ -10,11 +10,11 @@
>  #include "kvm_util.h"
>  #include "processor.h"
>  
> -void ucall_init(struct kvm_vm *vm, void *arg)
> +void ucall_arch_init(struct kvm_vm *vm, void *arg)
>  {
>  }
>  
> -void ucall_uninit(struct kvm_vm *vm)
> +void ucall_arch_uninit(struct kvm_vm *vm)
>  {
>  }
>  
> @@ -44,47 +44,22 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
>  	return ret;
>  }
>  
> -void ucall(uint64_t cmd, int nargs, ...)
> +void ucall_arch_do_ucall(vm_vaddr_t uc)
>  {
> -	struct ucall uc = {
> -		.cmd = cmd,
> -	};
> -	va_list va;
> -	int i;
> -
> -	nargs = min(nargs, UCALL_MAX_ARGS);
> -
> -	va_start(va, nargs);
> -	for (i = 0; i < nargs; ++i)
> -		uc.args[i] = va_arg(va, uint64_t);
> -	va_end(va);
> -
>  	sbi_ecall(KVM_RISCV_SELFTESTS_SBI_EXT,
>  		  KVM_RISCV_SELFTESTS_SBI_UCALL,
> -		  (vm_vaddr_t)&uc, 0, 0, 0, 0, 0);
> +		  uc, 0, 0, 0, 0, 0);
>  }
>  
> -uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
> +void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_run *run = vcpu->run;
> -	struct ucall ucall = {};
> -
> -	if (uc)
> -		memset(uc, 0, sizeof(*uc));
>  
>  	if (run->exit_reason == KVM_EXIT_RISCV_SBI &&
>  	    run->riscv_sbi.extension_id == KVM_RISCV_SELFTESTS_SBI_EXT) {
>  		switch (run->riscv_sbi.function_id) {
>  		case KVM_RISCV_SELFTESTS_SBI_UCALL:
> -			memcpy(&ucall,
> -			       addr_gva2hva(vcpu->vm, run->riscv_sbi.args[0]),
> -			       sizeof(ucall));
> -
> -			vcpu_run_complete_io(vcpu);
> -			if (uc)
> -				memcpy(uc, &ucall, sizeof(ucall));
> -
> -			break;
> +			return addr_gva2hva(vcpu->vm, run->riscv_sbi.args[0]);
>  		case KVM_RISCV_SELFTESTS_SBI_UNEXP:
>  			vcpu_dump(stderr, vcpu, 2);
>  			TEST_ASSERT(0, "Unexpected trap taken by guest");
> @@ -93,6 +68,5 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
>  			break;
>  		}
>  	}
> -
> -	return ucall.cmd;
> +	return NULL;
>  }
> diff --git a/tools/testing/selftests/kvm/lib/s390x/ucall.c b/tools/testing/selftests/kvm/lib/s390x/ucall.c
> index 73dc4e21190f..0f695a031d35 100644
> --- a/tools/testing/selftests/kvm/lib/s390x/ucall.c
> +++ b/tools/testing/selftests/kvm/lib/s390x/ucall.c
> @@ -6,40 +6,23 @@
>   */
>  #include "kvm_util.h"
>  
> -void ucall_init(struct kvm_vm *vm, void *arg)
> +void ucall_arch_init(struct kvm_vm *vm, void *arg)
>  {
>  }
>  
> -void ucall_uninit(struct kvm_vm *vm)
> +void ucall_arch_uninit(struct kvm_vm *vm)
>  {
>  }
>  
> -void ucall(uint64_t cmd, int nargs, ...)
> +void ucall_arch_do_ucall(vm_vaddr_t uc)
>  {
> -	struct ucall uc = {
> -		.cmd = cmd,
> -	};
> -	va_list va;
> -	int i;
> -
> -	nargs = min(nargs, UCALL_MAX_ARGS);
> -
> -	va_start(va, nargs);
> -	for (i = 0; i < nargs; ++i)
> -		uc.args[i] = va_arg(va, uint64_t);
> -	va_end(va);
> -
>  	/* Exit via DIAGNOSE 0x501 (normally used for breakpoints) */
> -	asm volatile ("diag 0,%0,0x501" : : "a"(&uc) : "memory");
> +	asm volatile ("diag 0,%0,0x501" : : "a"(uc) : "memory");
>  }
>  
> -uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
> +void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_run *run = vcpu->run;
> -	struct ucall ucall = {};
> -
> -	if (uc)
> -		memset(uc, 0, sizeof(*uc));
>  
>  	if (run->exit_reason == KVM_EXIT_S390_SIEIC &&
>  	    run->s390_sieic.icptcode == 4 &&
> @@ -47,13 +30,7 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
>  	    (run->s390_sieic.ipb >> 16) == 0x501) {
>  		int reg = run->s390_sieic.ipa & 0xf;
>  
> -		memcpy(&ucall, addr_gva2hva(vcpu->vm, run->s.regs.gprs[reg]),
> -		       sizeof(ucall));
> -
> -		vcpu_run_complete_io(vcpu);
> -		if (uc)
> -			memcpy(uc, &ucall, sizeof(ucall));
> +		return addr_gva2hva(vcpu->vm, run->s.regs.gprs[reg]);
>  	}
> -
> -	return ucall.cmd;
> +	return NULL;
>  }
> diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
> new file mode 100644
> index 000000000000..c488ed23d0dd
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/ucall_common.c
> @@ -0,0 +1,39 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +#include "kvm_util.h"
> +
> +void ucall(uint64_t cmd, int nargs, ...)
> +{
> +	struct ucall uc = {
> +		.cmd = cmd,
> +	};
> +	va_list va;
> +	int i;
> +
> +	nargs = min(nargs, UCALL_MAX_ARGS);
> +
> +	va_start(va, nargs);
> +	for (i = 0; i < nargs; ++i)
> +		uc.args[i] = va_arg(va, uint64_t);
> +	va_end(va);
> +
> +	ucall_arch_do_ucall((vm_vaddr_t)&uc);
> +}
> +
> +uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
> +{
> +	struct ucall ucall;
> +	void *addr;
> +
> +	if (!uc)
> +		uc = &ucall;
> +
> +	addr = ucall_arch_get_ucall(vcpu);
> +	if (addr) {
> +		memcpy(uc, addr, sizeof(*uc));
> +		vcpu_run_complete_io(vcpu);
> +	} else {
> +		memset(uc, 0, sizeof(*uc));
> +	}
> +
> +	return uc->cmd;
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/ucall.c b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
> index e5f0f9e0d3ee..ec53a406f689 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/ucall.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
> @@ -8,52 +8,29 @@
>  
>  #define UCALL_PIO_PORT ((uint16_t)0x1000)
>  
> -void ucall_init(struct kvm_vm *vm, void *arg)
> +void ucall_arch_init(struct kvm_vm *vm, void *arg)
>  {
>  }
>  
> -void ucall_uninit(struct kvm_vm *vm)
> +void ucall_arch_uninit(struct kvm_vm *vm)
>  {
>  }
>  
> -void ucall(uint64_t cmd, int nargs, ...)
> +void ucall_arch_do_ucall(vm_vaddr_t uc)
>  {
> -	struct ucall uc = {
> -		.cmd = cmd,
> -	};
> -	va_list va;
> -	int i;
> -
> -	nargs = min(nargs, UCALL_MAX_ARGS);
> -
> -	va_start(va, nargs);
> -	for (i = 0; i < nargs; ++i)
> -		uc.args[i] = va_arg(va, uint64_t);
> -	va_end(va);
> -
>  	asm volatile("in %[port], %%al"
> -		: : [port] "d" (UCALL_PIO_PORT), "D" (&uc) : "rax", "memory");
> +		: : [port] "d" (UCALL_PIO_PORT), "D" (uc) : "rax", "memory");
>  }
>  
> -uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
> +void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_run *run = vcpu->run;
> -	struct ucall ucall = {};
> -
> -	if (uc)
> -		memset(uc, 0, sizeof(*uc));
>  
>  	if (run->exit_reason == KVM_EXIT_IO && run->io.port == UCALL_PIO_PORT) {
>  		struct kvm_regs regs;
>  
>  		vcpu_regs_get(vcpu, &regs);
> -		memcpy(&ucall, addr_gva2hva(vcpu->vm, (vm_vaddr_t)regs.rdi),
> -		       sizeof(ucall));
> -
> -		vcpu_run_complete_io(vcpu);
> -		if (uc)
> -			memcpy(uc, &ucall, sizeof(ucall));
> +		return addr_gva2hva(vcpu->vm, (vm_vaddr_t)regs.rdi);
>  	}
> -
> -	return ucall.cmd;
> +	return NULL;
>  }
> -- 
> 2.37.0.170.g444d1eabd0-goog
>

Other than the ucall_init() wrapper comment,

Reviewed-by: Andrew Jones <andrew.jones@linux.dev>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC V1 08/10] KVM: selftests: Make ucall work with encrypted guests
  2022-07-15 19:29 ` [RFC V1 08/10] KVM: selftests: Make ucall work with encrypted guests Peter Gonda
@ 2022-07-19 15:43   ` Andrew Jones
  2022-07-27 13:38     ` Peter Gonda
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Jones @ 2022-07-19 15:43 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, seanjc, michael.roth,
	thomas.lendacky, joro, mizhang, pbonzini

On Fri, Jul 15, 2022 at 12:29:54PM -0700, Peter Gonda wrote:
> Add support for encrypted, SEV, guests in the ucall framework. If
> encryption is enabled set up a pool of ucall structs in the guests'
> shared memory region. This was suggested in the thread on "[RFC PATCH
> 00/10] KVM: selftests: Add support for test-selectable ucall
> implementations". Using a listed as suggested there doesn't work well

list

> because the list is setup using HVAs not GVAs so use a bitmap + array
> solution instead to get the same pool result.
> 
> Suggested-by:Sean Christopherson <seanjc@google.com>
> Signed-off-by: Peter Gonda <pgonda@google.com>
> 
> ---
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../selftests/kvm/include/kvm_util_base.h     |  30 +++--
>  .../selftests/kvm/include/ucall_common.h      |  14 +--
>  .../testing/selftests/kvm/lib/ucall_common.c  | 115 ++++++++++++++++--
>  .../testing/selftests/kvm/lib/x86_64/ucall.c  |   2 +-
>  5 files changed, 134 insertions(+), 28 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 61e85892dd9b..3d9f2a017389 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -47,6 +47,7 @@ LIBKVM += lib/rbtree.c
>  LIBKVM += lib/sparsebit.c
>  LIBKVM += lib/test_util.c
>  LIBKVM += lib/ucall_common.c
> +LIBKVM += $(top_srcdir)/tools/lib/find_bit.c

Why is this file being added?

>  
>  LIBKVM_x86_64 += lib/x86_64/apic.c
>  LIBKVM_x86_64 += lib/x86_64/handlers.S
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index 60b604ac9fa9..77aff2356d64 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -65,6 +65,7 @@ struct userspace_mem_regions {
>  /* Memory encryption policy/configuration. */
>  struct vm_memcrypt {
>  	bool enabled;
> +	bool encrypted;
>  	int8_t enc_by_default;
>  	bool has_enc_bit;
>  	int8_t enc_bit;
> @@ -99,6 +100,8 @@ struct kvm_vm {
>  	int stats_fd;
>  	struct kvm_stats_header stats_header;
>  	struct kvm_stats_desc *stats_desc;
> +
> +	struct list_head ucall_list;
>  };
>  
>  
> @@ -141,21 +144,21 @@ enum vm_guest_mode {
>  
>  extern enum vm_guest_mode vm_mode_default;
>  
> -#define VM_MODE_DEFAULT			vm_mode_default
> -#define MIN_PAGE_SHIFT			12U
> -#define ptes_per_page(page_size)	((page_size) / 8)
> +#define VM_MODE_DEFAULT            vm_mode_default
> +#define MIN_PAGE_SHIFT            12U
> +#define ptes_per_page(page_size)    ((page_size) / 8)
>  
>  #elif defined(__x86_64__)
>  
> -#define VM_MODE_DEFAULT			VM_MODE_PXXV48_4K
> -#define MIN_PAGE_SHIFT			12U
> -#define ptes_per_page(page_size)	((page_size) / 8)
> +#define VM_MODE_DEFAULT            VM_MODE_PXXV48_4K
> +#define MIN_PAGE_SHIFT            12U
> +#define ptes_per_page(page_size)    ((page_size) / 8)
>  
>  #elif defined(__s390x__)
>  
> -#define VM_MODE_DEFAULT			VM_MODE_P44V64_4K
> -#define MIN_PAGE_SHIFT			12U
> -#define ptes_per_page(page_size)	((page_size) / 16)
> +#define VM_MODE_DEFAULT            VM_MODE_P44V64_4K
> +#define MIN_PAGE_SHIFT            12U
> +#define ptes_per_page(page_size)    ((page_size) / 16)
>  
>  #elif defined(__riscv)
>  
> @@ -163,9 +166,9 @@ extern enum vm_guest_mode vm_mode_default;
>  #error "RISC-V 32-bit kvm selftests not supported"
>  #endif
>  
> -#define VM_MODE_DEFAULT			VM_MODE_P40V48_4K
> -#define MIN_PAGE_SHIFT			12U
> -#define ptes_per_page(page_size)	((page_size) / 8)
> +#define VM_MODE_DEFAULT            VM_MODE_P40V48_4K
> +#define MIN_PAGE_SHIFT            12U
> +#define ptes_per_page(page_size)    ((page_size) / 8)

Looks like your editor decided to change all the above defines to use
spaces instead of tabs. You might want to double check the other
patches as well to ensure lines added use tabs vs. spaces and that
there are no other random whitespace changes.

>  
>  #endif
>  
> @@ -802,6 +805,9 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
>  
>  static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
>  {
> +	TEST_ASSERT(
> +		!vm->memcrypt.encrypted,
> +		"Encrypted guests have their page tables encrypted so gva2* conversions are not possible.");
>  	return addr_arch_gva2gpa(vm, gva);
>  }
>  
> diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
> index cb9b37282701..d8ac16a68c0a 100644
> --- a/tools/testing/selftests/kvm/include/ucall_common.h
> +++ b/tools/testing/selftests/kvm/include/ucall_common.h
> @@ -21,6 +21,10 @@ enum {
>  struct ucall {
>  	uint64_t cmd;
>  	uint64_t args[UCALL_MAX_ARGS];
> +
> +	/* For encrypted guests. */
> +	uint64_t idx;
> +	struct ucall *hva;
>  };
>  
>  void ucall_arch_init(struct kvm_vm *vm, void *arg);
> @@ -31,15 +35,9 @@ void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
>  void ucall(uint64_t cmd, int nargs, ...);
>  uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);
>  
> -static inline void ucall_init(struct kvm_vm *vm, void *arg)
> -{
> -	ucall_arch_init(vm, arg);
> -}
> +void ucall_init(struct kvm_vm *vm, void *arg);
>  
> -static inline void ucall_uninit(struct kvm_vm *vm)
> -{
> -	ucall_arch_uninit(vm);
> -}
> +void ucall_uninit(struct kvm_vm *vm);
>  
>  #define GUEST_SYNC_ARGS(stage, arg1, arg2, arg3, arg4)	\
>  				ucall(UCALL_SYNC, 6, "hello", stage, arg1, arg2, arg3, arg4)
> diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
> index c488ed23d0dd..8e660b10f9b2 100644
> --- a/tools/testing/selftests/kvm/lib/ucall_common.c
> +++ b/tools/testing/selftests/kvm/lib/ucall_common.c
> @@ -1,22 +1,123 @@
>  // SPDX-License-Identifier: GPL-2.0-only
>  #include "kvm_util.h"
> +#include "linux/types.h"
> +#include "linux/bitmap.h"
> +#include "linux/bitops.h"
> +#include "linux/atomic.h"

Do we really need bitmap.h, bitops.h, and atomic.h? I see we use
clear_bit(), which I think is from atomic.h, and
atomic_test_and_set_bit(), which I have no idea where it comes from...

> +
> +struct ucall_header {
> +	DECLARE_BITMAP(in_use, KVM_MAX_VCPUS);
> +	struct ucall ucalls[KVM_MAX_VCPUS];
> +};
> +
> +static bool encrypted_guest;
> +static struct ucall_header *ucall_hdr;
> +
> +void ucall_init(struct kvm_vm *vm, void *arg)
> +{
> +	struct ucall *uc;
> +	struct ucall_header *hdr;
> +	vm_vaddr_t vaddr;
> +	int i;
> +
> +	encrypted_guest = vm->memcrypt.enabled;
> +	sync_global_to_guest(vm, encrypted_guest);
> +	if (!encrypted_guest)
> +		goto out;
> +
> +	TEST_ASSERT(!ucall_hdr,
> +		    "Only a single encrypted guest at a time for ucalls.");
> +	vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), vm->page_size);
> +	hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
> +	memset(hdr, 0, sizeof(*hdr));
> +
> +	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
> +		uc = &hdr->ucalls[i];
> +		uc->hva = uc;
> +		uc->idx = i;
> +	}
> +
> +	ucall_hdr = (struct ucall_header *)vaddr;
> +	sync_global_to_guest(vm, ucall_hdr);
> +
> +out:
> +	ucall_arch_init(vm, arg);
> +}
> +
> +void ucall_uninit(struct kvm_vm *vm)
> +{
> +	encrypted_guest = false;
> +	ucall_hdr = NULL;
> +
> +	ucall_arch_uninit(vm);
> +}
> +
> +static struct ucall *ucall_alloc(void)
> +{
> +	struct ucall *uc = NULL;
> +	int i;
> +
> +	if (!encrypted_guest)
> +		goto out;
> +
> +	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
> +		if (atomic_test_and_set_bit(i, ucall_hdr->in_use))
> +			continue;
> +
> +		uc = &ucall_hdr->ucalls[i];

Doesn't this just mark all buffers as in-use and return the last one?
I think we want

	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
		if (!atomic_test_and_set_bit(i, ucall_hdr->in_use)) {
			uc = &ucall_hdr->ucalls[i];
			break;
		}
	}

> +	}
> +
> +out:
> +	return uc;
> +}
> +
> +static void ucall_free(struct ucall *uc)
> +{
> +	if (!encrypted_guest)
> +		return;
> +
> +	clear_bit(uc->idx, ucall_hdr->in_use);
> +}
> +
> +static vm_vaddr_t get_ucall_addr(struct ucall *uc)
> +{
> +	if (encrypted_guest)
> +		return (vm_vaddr_t)uc->hva;
> +
> +	return (vm_vaddr_t)uc;
> +}
>  
>  void ucall(uint64_t cmd, int nargs, ...)
>  {
> -	struct ucall uc = {
> -		.cmd = cmd,
> -	};
> +	struct ucall *uc;
> +	struct ucall tmp;
>  	va_list va;
>  	int i;
>  
> +	uc = ucall_alloc();
> +	if (!uc)
> +		uc = &tmp;
> +
> +	uc->cmd = cmd;
> +
>  	nargs = min(nargs, UCALL_MAX_ARGS);
>  
>  	va_start(va, nargs);
>  	for (i = 0; i < nargs; ++i)
> -		uc.args[i] = va_arg(va, uint64_t);
> +		uc->args[i] = va_arg(va, uint64_t);
>  	va_end(va);
>  
> -	ucall_arch_do_ucall((vm_vaddr_t)&uc);
> +	ucall_arch_do_ucall(get_ucall_addr(uc));
> +
> +	ucall_free(uc);
> +}
> +
> +void *get_ucall_hva(struct kvm_vm *vm, void *uc)
> +{
> +	if (encrypted_guest)
> +		return uc;
> +
> +	return addr_gva2hva(vm, (vm_vaddr_t)uc);
>  }
>  
>  uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
> @@ -27,9 +128,9 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
>  	if (!uc)
>  		uc = &ucall;
>  
> -	addr = ucall_arch_get_ucall(vcpu);
> +	addr = get_ucall_hva(vcpu->vm, ucall_arch_get_ucall(vcpu));

Hmm... so now it's expected that ucall_arch_get_ucall() returns a gva...

>  	if (addr) {
> -		memcpy(uc, addr, sizeof(*uc));
> +		memcpy(uc, addr, sizeof(struct ucall));

Why make this change?

>  		vcpu_run_complete_io(vcpu);
>  	} else {
>  		memset(uc, 0, sizeof(*uc));
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/ucall.c b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
> index ec53a406f689..ea6b2e3a8e39 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/ucall.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
> @@ -30,7 +30,7 @@ void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
>  		struct kvm_regs regs;
>  
>  		vcpu_regs_get(vcpu, &regs);
> -		return addr_gva2hva(vcpu->vm, (vm_vaddr_t)regs.rdi);
> +		return (void *)regs.rdi;

...we're only updating x86's ucall_arch_get_ucall() to return gvas.
What about the other architectures? Anyway, I'd rather we don't
change ucall_arch_get_ucall() to return gvas. They should continue
returning hvas and any trickery needed to translate a pool uc to
an hva should be put inside ucall_arch_get_ucall().
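
Something like this (completely untested, just to sketch the idea; the
uses_ucall_pool flag is made up for illustration):

	void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
	{
		struct kvm_run *run = vcpu->run;

		if (run->exit_reason == KVM_EXIT_IO && run->io.port == UCALL_PIO_PORT) {
			struct kvm_regs regs;

			vcpu_regs_get(vcpu, &regs);
			/* 'uses_ucall_pool' is hypothetical, not in this series. */
			if (vcpu->vm->uses_ucall_pool)
				return (void *)regs.rdi;	/* pool entries are passed as hvas */
			return addr_gva2hva(vcpu->vm, (vm_vaddr_t)regs.rdi);
		}
		return NULL;
	}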

>  	}
>  	return NULL;
>  }
> -- 
> 2.37.0.170.g444d1eabd0-goog
> 

I'm not a big fan of mixing the concept of encrypted guests into ucalls. I
think we should have two types of ucalls: those that have a uc pool in
memory shared with the host and those that don't. Encrypted guests pick
the pool version.
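
Roughly (names made up, just to illustrate the shape; this assumes the
existing struct ucall / vm_vaddr_t selftest types):

	struct ucall_backend {
		struct ucall *(*alloc)(void);			/* pool slot or on-stack uc */
		void (*free)(struct ucall *uc);
		vm_vaddr_t (*to_arch_addr)(struct ucall *uc);	/* gva today, hva for the pool */
	};

Tests for encrypted guests would select the pool backend at ucall_init()
time; everyone else keeps the current stack-based behaviour.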

Thanks,
drew

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC V1 08/10] KVM: selftests: Make ucall work with encrypted guests
  2022-07-19 15:43   ` Andrew Jones
@ 2022-07-27 13:38     ` Peter Gonda
  2022-07-27 13:56       ` Andrew Jones
  0 siblings, 1 reply; 17+ messages in thread
From: Peter Gonda @ 2022-07-27 13:38 UTC (permalink / raw)
  To: Andrew Jones
  Cc: kvm list, LKML, Marc Orr, Sean Christopherson, Michael Roth,
	Lendacky, Thomas, Joerg Roedel, Mingwei Zhang, Paolo Bonzini

On Tue, Jul 19, 2022 at 9:43 AM Andrew Jones <andrew.jones@linux.dev> wrote:
>
> On Fri, Jul 15, 2022 at 12:29:54PM -0700, Peter Gonda wrote:
> > Add support for encrypted, SEV, guests in the ucall framework. If
> > encryption is enabled set up a pool of ucall structs in the guests'
> > shared memory region. This was suggested in the thread on "[RFC PATCH
> > 00/10] KVM: selftests: Add support for test-selectable ucall
> > implementations". Using a listed as suggested there doesn't work well
>
> list
>
> > because the list is setup using HVAs not GVAs so use a bitmap + array
> > solution instead to get the same pool result.
> >
> > Suggested-by:Sean Christopherson <seanjc@google.com>
> > Signed-off-by: Peter Gonda <pgonda@google.com>
> >
> > ---
> >  tools/testing/selftests/kvm/Makefile          |   1 +
> >  .../selftests/kvm/include/kvm_util_base.h     |  30 +++--
> >  .../selftests/kvm/include/ucall_common.h      |  14 +--
> >  .../testing/selftests/kvm/lib/ucall_common.c  | 115 ++++++++++++++++--
> >  .../testing/selftests/kvm/lib/x86_64/ucall.c  |   2 +-
> >  5 files changed, 134 insertions(+), 28 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > index 61e85892dd9b..3d9f2a017389 100644
> > --- a/tools/testing/selftests/kvm/Makefile
> > +++ b/tools/testing/selftests/kvm/Makefile
> > @@ -47,6 +47,7 @@ LIBKVM += lib/rbtree.c
> >  LIBKVM += lib/sparsebit.c
> >  LIBKVM += lib/test_util.c
> >  LIBKVM += lib/ucall_common.c
> > +LIBKVM += $(top_srcdir)/tools/lib/find_bit.c
>
> Why is this file being added?

This is a mistake, I'll revert. I was originally trying to use
find_first_zero_bit().

>
> >
> >  LIBKVM_x86_64 += lib/x86_64/apic.c
> >  LIBKVM_x86_64 += lib/x86_64/handlers.S
> > diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > index 60b604ac9fa9..77aff2356d64 100644
> > --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> > +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > @@ -65,6 +65,7 @@ struct userspace_mem_regions {
> >  /* Memory encryption policy/configuration. */
> >  struct vm_memcrypt {
> >       bool enabled;
> > +     bool encrypted;
> >       int8_t enc_by_default;
> >       bool has_enc_bit;
> >       int8_t enc_bit;
> > @@ -99,6 +100,8 @@ struct kvm_vm {
> >       int stats_fd;
> >       struct kvm_stats_header stats_header;
> >       struct kvm_stats_desc *stats_desc;
> > +
> > +     struct list_head ucall_list;
> >  };
> >
> >
> > @@ -141,21 +144,21 @@ enum vm_guest_mode {
> >
> >  extern enum vm_guest_mode vm_mode_default;
> >
> > -#define VM_MODE_DEFAULT                      vm_mode_default
> > -#define MIN_PAGE_SHIFT                       12U
> > -#define ptes_per_page(page_size)     ((page_size) / 8)
> > +#define VM_MODE_DEFAULT            vm_mode_default
> > +#define MIN_PAGE_SHIFT            12U
> > +#define ptes_per_page(page_size)    ((page_size) / 8)
> >
> >  #elif defined(__x86_64__)
> >
> > -#define VM_MODE_DEFAULT                      VM_MODE_PXXV48_4K
> > -#define MIN_PAGE_SHIFT                       12U
> > -#define ptes_per_page(page_size)     ((page_size) / 8)
> > +#define VM_MODE_DEFAULT            VM_MODE_PXXV48_4K
> > +#define MIN_PAGE_SHIFT            12U
> > +#define ptes_per_page(page_size)    ((page_size) / 8)
> >
> >  #elif defined(__s390x__)
> >
> > -#define VM_MODE_DEFAULT                      VM_MODE_P44V64_4K
> > -#define MIN_PAGE_SHIFT                       12U
> > -#define ptes_per_page(page_size)     ((page_size) / 16)
> > +#define VM_MODE_DEFAULT            VM_MODE_P44V64_4K
> > +#define MIN_PAGE_SHIFT            12U
> > +#define ptes_per_page(page_size)    ((page_size) / 16)
> >
> >  #elif defined(__riscv)
> >
> > @@ -163,9 +166,9 @@ extern enum vm_guest_mode vm_mode_default;
> >  #error "RISC-V 32-bit kvm selftests not supported"
> >  #endif
> >
> > -#define VM_MODE_DEFAULT                      VM_MODE_P40V48_4K
> > -#define MIN_PAGE_SHIFT                       12U
> > -#define ptes_per_page(page_size)     ((page_size) / 8)
> > +#define VM_MODE_DEFAULT            VM_MODE_P40V48_4K
> > +#define MIN_PAGE_SHIFT            12U
> > +#define ptes_per_page(page_size)    ((page_size) / 8)
>
> Looks like your editor decided to change all the above defines to use
> spaces instead of tabs. You might want to double check the other
> patches as well to ensure lines added use tabs vs. spaces and that
> there are no other random whitespace changes.

Will fix.

>
> >
> >  #endif
> >
> > @@ -802,6 +805,9 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
> >
> >  static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> >  {
> > +     TEST_ASSERT(
> > +             !vm->memcrypt.encrypted,
> > +             "Encrypted guests have their page tables encrypted so gva2* conversions are not possible.");
> >       return addr_arch_gva2gpa(vm, gva);
> >  }
> >
> > diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
> > index cb9b37282701..d8ac16a68c0a 100644
> > --- a/tools/testing/selftests/kvm/include/ucall_common.h
> > +++ b/tools/testing/selftests/kvm/include/ucall_common.h
> > @@ -21,6 +21,10 @@ enum {
> >  struct ucall {
> >       uint64_t cmd;
> >       uint64_t args[UCALL_MAX_ARGS];
> > +
> > +     /* For encrypted guests. */
> > +     uint64_t idx;
> > +     struct ucall *hva;
> >  };
> >
> >  void ucall_arch_init(struct kvm_vm *vm, void *arg);
> > @@ -31,15 +35,9 @@ void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
> >  void ucall(uint64_t cmd, int nargs, ...);
> >  uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);
> >
> > -static inline void ucall_init(struct kvm_vm *vm, void *arg)
> > -{
> > -     ucall_arch_init(vm, arg);
> > -}
> > +void ucall_init(struct kvm_vm *vm, void *arg);
> >
> > -static inline void ucall_uninit(struct kvm_vm *vm)
> > -{
> > -     ucall_arch_uninit(vm);
> > -}
> > +void ucall_uninit(struct kvm_vm *vm);
> >
> >  #define GUEST_SYNC_ARGS(stage, arg1, arg2, arg3, arg4)       \
> >                               ucall(UCALL_SYNC, 6, "hello", stage, arg1, arg2, arg3, arg4)
> > diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
> > index c488ed23d0dd..8e660b10f9b2 100644
> > --- a/tools/testing/selftests/kvm/lib/ucall_common.c
> > +++ b/tools/testing/selftests/kvm/lib/ucall_common.c
> > @@ -1,22 +1,123 @@
> >  // SPDX-License-Identifier: GPL-2.0-only
> >  #include "kvm_util.h"
> > +#include "linux/types.h"
> > +#include "linux/bitmap.h"
> > +#include "linux/bitops.h"
> > +#include "linux/atomic.h"
>
> Do we really need bitmap.h, bitops.h, and atomic.h? I see we use
> clear_bit(), which I think is from atomic.h, and
> atomic_test_and_set_bit(), which I have no idea where it comes from...

I added that function to atomic.h in "RFC V1 07/10] tools: Add
atomic_test_and_set_bit()". You are right, I should revert the other
header additions.
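
For reference, the helper I have in mind is just a thin wrapper over the
compiler builtin, roughly along these lines (a sketch; the exact code is in
patch 07/10 and may differ):

	/* Atomically set bit @nr in @addr and return its previous value. */
	static inline bool atomic_test_and_set_bit(int nr, unsigned long *addr)
	{
		unsigned long mask = BIT_MASK(nr);
		unsigned long *word = addr + BIT_WORD(nr);
		unsigned long old = __sync_fetch_and_or(word, mask);

		return !!(old & mask);
	}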

>
> > +
> > +struct ucall_header {
> > +     DECLARE_BITMAP(in_use, KVM_MAX_VCPUS);
> > +     struct ucall ucalls[KVM_MAX_VCPUS];
> > +};
> > +
> > +static bool encrypted_guest;
> > +static struct ucall_header *ucall_hdr;
> > +
> > +void ucall_init(struct kvm_vm *vm, void *arg)
> > +{
> > +     struct ucall *uc;
> > +     struct ucall_header *hdr;
> > +     vm_vaddr_t vaddr;
> > +     int i;
> > +
> > +     encrypted_guest = vm->memcrypt.enabled;
> > +     sync_global_to_guest(vm, encrypted_guest);
> > +     if (!encrypted_guest)
> > +             goto out;
> > +
> > +     TEST_ASSERT(!ucall_hdr,
> > +                 "Only a single encrypted guest at a time for ucalls.");
> > +     vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), vm->page_size);
> > +     hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
> > +     memset(hdr, 0, sizeof(*hdr));
> > +
> > +     for (i = 0; i < KVM_MAX_VCPUS; ++i) {
> > +             uc = &hdr->ucalls[i];
> > +             uc->hva = uc;
> > +             uc->idx = i;
> > +     }
> > +
> > +     ucall_hdr = (struct ucall_header *)vaddr;
> > +     sync_global_to_guest(vm, ucall_hdr);
> > +
> > +out:
> > +     ucall_arch_init(vm, arg);
> > +}
> > +
> > +void ucall_uninit(struct kvm_vm *vm)
> > +{
> > +     encrypted_guest = false;
> > +     ucall_hdr = NULL;
> > +
> > +     ucall_arch_uninit(vm);
> > +}
> > +
> > +static struct ucall *ucall_alloc(void)
> > +{
> > +     struct ucall *uc = NULL;
> > +     int i;
> > +
> > +     if (!encrypted_guest)
> > +             goto out;
> > +
> > +     for (i = 0; i < KVM_MAX_VCPUS; ++i) {
> > +             if (atomic_test_and_set_bit(i, ucall_hdr->in_use))
> > +                     continue;
> > +
> > +             uc = &ucall_hdr->ucalls[i];
>
> Doesn't this just mark all buffers as in-use and return the last one?
> I think we want
>
>         for (i = 0; i < KVM_MAX_VCPUS; ++i) {
>                 if (!atomic_test_and_set_bit(i, ucall_hdr->in_use)) {
>                         uc = &ucall_hdr->ucalls[i];
>                         break;
>                 }
>         }

Yes, that looks correct. I'll update this.
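
With that folded in, the helper ends up looking something like this (a
sketch of the planned update, not the code as posted):

	static struct ucall *ucall_alloc(void)
	{
		struct ucall *uc = NULL;
		int i;

		if (!encrypted_guest)
			return NULL;

		/* Claim the first free slot in the shared pool. */
		for (i = 0; i < KVM_MAX_VCPUS; ++i) {
			if (!atomic_test_and_set_bit(i, ucall_hdr->in_use)) {
				uc = &ucall_hdr->ucalls[i];
				break;
			}
		}

		return uc;
	}
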
>
> > +     }
> > +
> > +out:
> > +     return uc;
> > +}
> > +
> > +static void ucall_free(struct ucall *uc)
> > +{
> > +     if (!encrypted_guest)
> > +             return;
> > +
> > +     clear_bit(uc->idx, ucall_hdr->in_use);
> > +}
> > +
> > +static vm_vaddr_t get_ucall_addr(struct ucall *uc)
> > +{
> > +     if (encrypted_guest)
> > +             return (vm_vaddr_t)uc->hva;
> > +
> > +     return (vm_vaddr_t)uc;
> > +}
> >
> >  void ucall(uint64_t cmd, int nargs, ...)
> >  {
> > -     struct ucall uc = {
> > -             .cmd = cmd,
> > -     };
> > +     struct ucall *uc;
> > +     struct ucall tmp;
> >       va_list va;
> >       int i;
> >
> > +     uc = ucall_alloc();
> > +     if (!uc)
> > +             uc = &tmp;
> > +
> > +     uc->cmd = cmd;
> > +
> >       nargs = min(nargs, UCALL_MAX_ARGS);
> >
> >       va_start(va, nargs);
> >       for (i = 0; i < nargs; ++i)
> > -             uc.args[i] = va_arg(va, uint64_t);
> > +             uc->args[i] = va_arg(va, uint64_t);
> >       va_end(va);
> >
> > -     ucall_arch_do_ucall((vm_vaddr_t)&uc);
> > +     ucall_arch_do_ucall(get_ucall_addr(uc));
> > +
> > +     ucall_free(uc);
> > +}
> > +
> > +void *get_ucall_hva(struct kvm_vm *vm, void *uc)
> > +{
> > +     if (encrypted_guest)
> > +             return uc;
> > +
> > +     return addr_gva2hva(vm, (vm_vaddr_t)uc);
> >  }
> >
> >  uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
> > @@ -27,9 +128,9 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
> >       if (!uc)
> >               uc = &ucall;
> >
> > -     addr = ucall_arch_get_ucall(vcpu);
> > +     addr = get_ucall_hva(vcpu->vm, ucall_arch_get_ucall(vcpu));
>
> Hmm... so now it's expected that ucall_arch_get_ucall() returns a gva...
>
> >       if (addr) {
> > -             memcpy(uc, addr, sizeof(*uc));
> > +             memcpy(uc, addr, sizeof(struct ucall));
>
> Why make this change?

I'll revert this.

>
> >               vcpu_run_complete_io(vcpu);
> >       } else {
> >               memset(uc, 0, sizeof(*uc));
> > diff --git a/tools/testing/selftests/kvm/lib/x86_64/ucall.c b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
> > index ec53a406f689..ea6b2e3a8e39 100644
> > --- a/tools/testing/selftests/kvm/lib/x86_64/ucall.c
> > +++ b/tools/testing/selftests/kvm/lib/x86_64/ucall.c
> > @@ -30,7 +30,7 @@ void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
> >               struct kvm_regs regs;
> >
> >               vcpu_regs_get(vcpu, &regs);
> > -             return addr_gva2hva(vcpu->vm, (vm_vaddr_t)regs.rdi);
> > +             return (void *)regs.rdi;
>
> ...we're only updating x86's ucall_arch_get_ucall() to return gvas.
> What about the other architectures? Anyway, I'd rather we don't
> change ucall_arch_get_ucall() to return gvas. They should continue
> returning hvas and any trickery needed to translate a pool uc to
> an hva should be put inside ucall_arch_get_ucall().

Makes sense. I'll maintain returning HVAs in both cases and let the
_arch_ calls handle the translations.
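
Something like this in the x86 version, roughly (ucall_pool_in_use() is a
made-up helper just to show the shape, and the surrounding exit-reason check
is approximated from the existing code):

	void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
	{
		struct kvm_run *run = vcpu->run;

		if (run->exit_reason == KVM_EXIT_IO && run->io.port == UCALL_PIO_PORT) {
			struct kvm_regs regs;

			vcpu_regs_get(vcpu, &regs);
			/*
			 * Pool ucalls already pass a host-shared address in
			 * rdi; stack ucalls pass a guest virtual address.
			 */
			if (ucall_pool_in_use(vcpu->vm))
				return (void *)regs.rdi;

			return addr_gva2hva(vcpu->vm, (vm_vaddr_t)regs.rdi);
		}

		return NULL;
	}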

>
> >       }
> >       return NULL;
> >  }
> > --
> > 2.37.0.170.g444d1eabd0-goog
> >
>
> I'm not a big fan of mixing the concept of encrypted guests into ucalls. I
> think we should have two types of ucalls: those that have a uc pool in memory
> shared with the host and those that don't. Encrypted guests pick the pool
> version.

Sean suggested this version where encrypted guests and normal guests
used the same ucall macros/functions. I am fine with adding a second
interface for encrypted VM ucalls; do you think macros like
ENCRYPTED_GUEST_SYNC, ENCRYPTED_GUEST_ASSERT, and
get_encrypted_ucall() would work?
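
Purely illustrative -- these names are invented for the discussion, not
anything in the series -- I'm thinking of something shaped like:

	/* Guest side: like GUEST_SYNC(), but routed through the shared pool. */
	#define ENCRYPTED_GUEST_SYNC(stage) \
		encrypted_ucall(UCALL_SYNC, 2, "hello", stage)

	/* Host side: fetch the vCPU's ucall out of the shared pool. */
	uint64_t get_encrypted_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);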


>
> Thanks,
> drew

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC V1 08/10] KVM: selftests: Make ucall work with encrypted guests
  2022-07-27 13:38     ` Peter Gonda
@ 2022-07-27 13:56       ` Andrew Jones
  2022-07-27 14:07         ` Peter Gonda
  0 siblings, 1 reply; 17+ messages in thread
From: Andrew Jones @ 2022-07-27 13:56 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm list, LKML, Marc Orr, Sean Christopherson, Michael Roth,
	Lendacky, Thomas, Joerg Roedel, Mingwei Zhang, Paolo Bonzini

On Wed, Jul 27, 2022 at 07:38:29AM -0600, Peter Gonda wrote:
> On Tue, Jul 19, 2022 at 9:43 AM Andrew Jones <andrew.jones@linux.dev> wrote:
> > I'm not a big fan of mixing the concept of encrypted guests into ucalls. I
> > think we should have two types of ucalls: those that have a uc pool in memory
> > shared with the host and those that don't. Encrypted guests pick the pool
> > version.
> 
> Sean suggested this version where encrypted guests and normal guests
> used the same ucall macros/functions. I am fine with adding a second
> interface for encrypted VM ucalls; do you think macros like
> ENCRYPTED_GUEST_SYNC, ENCRYPTED_GUEST_ASSERT, and
> get_encrypted_ucall() would work?
>

It's fine to add new functionality to ucall in order to keep the
interfaces the same, except for initializing with some sort of indication
that the "uc pool" version is needed. I just don't like all the references
to encrypted guests inside ucall. ucall should implement uc pools without
the current motivation for uc pools creeping into its implementation.

Thanks,
drew

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [RFC V1 08/10] KVM: selftests: Make ucall work with encrypted guests
  2022-07-27 13:56       ` Andrew Jones
@ 2022-07-27 14:07         ` Peter Gonda
  0 siblings, 0 replies; 17+ messages in thread
From: Peter Gonda @ 2022-07-27 14:07 UTC (permalink / raw)
  To: Andrew Jones
  Cc: kvm list, LKML, Marc Orr, Sean Christopherson, Michael Roth,
	Lendacky, Thomas, Joerg Roedel, Mingwei Zhang, Paolo Bonzini

On Wed, Jul 27, 2022 at 7:56 AM Andrew Jones <andrew.jones@linux.dev> wrote:
>
> On Wed, Jul 27, 2022 at 07:38:29AM -0600, Peter Gonda wrote:
> > On Tue, Jul 19, 2022 at 9:43 AM Andrew Jones <andrew.jones@linux.dev> wrote:
> > > I'm not a big fan of mixing the concept of encrypted guests into ucalls. I
> > > think we should have two types of ucalls: those that have a uc pool in memory
> > > shared with the host and those that don't. Encrypted guests pick the pool
> > > version.
> >
> > Sean suggested this version where encrypted guests and normal guests
> > used the same ucall macros/functions. I am fine with adding a second
> > interface for encrypted VM ucalls; do you think macros like
> > ENCRYPTED_GUEST_SYNC, ENCRYPTED_GUEST_ASSERT, and
> > get_encrypted_ucall() would work?
> >
>
> It's fine to add new functionality to ucall in order to keep the
> interfaces the same, except for initializing with some sort of indication
> that the "uc pool" version is needed. I just don't like all the references
> to encrypted guests inside ucall. ucall should implement uc pools without
> the current motivation for uc pools creeping into its implementation.

Ah, that makes sense. So maybe instead of checking 'if
(vm->memcrypt.enabled)' I should just add a new field in kvm_vm that
selects use of the uc pool? Something like kvm_vm.enable_uc_pool?
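
I.e. roughly this in ucall_init() (just sketching the idea; the field and the
pool-setup helper don't exist yet):

	void ucall_init(struct kvm_vm *vm, void *arg)
	{
		use_uc_pool = vm->enable_uc_pool;
		sync_global_to_guest(vm, use_uc_pool);
		if (use_uc_pool)
			ucall_pool_init(vm);	/* the shared-pool setup, minus any "encrypted" naming */

		ucall_arch_init(vm, arg);
	}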

Thanks Drew!

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2022-07-27 14:07 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-15 19:29 [RFC V1 00/10] *** KVM: selftests: Add simple SEV test Peter Gonda
2022-07-15 19:29 ` [PATCH V2] KVM: selftests: Add simple sev vm testing Peter Gonda
2022-07-15 19:29 ` [RFC V1 01/10] KVM: selftests: move vm_phy_pages_alloc() earlier in file Peter Gonda
2022-07-15 19:29 ` [RFC V1 02/10] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
2022-07-15 19:29 ` [RFC V1 03/10] KVM: selftests: add hooks for managing encrypted guest memory Peter Gonda
2022-07-15 19:29 ` [RFC V1 04/10] KVM: selftests: handle encryption bits in page tables Peter Gonda
2022-07-15 19:29 ` [RFC V1 05/10] KVM: selftests: add support for encrypted vm_vaddr_* allocations Peter Gonda
2022-07-15 19:29 ` [RFC V1 06/10] KVM: selftests: Consolidate boilerplate code in get_ucall() Peter Gonda
2022-07-19 14:42   ` Andrew Jones
2022-07-15 19:29 ` [RFC V1 07/10] tools: Add atomic_test_and_set_bit() Peter Gonda
2022-07-15 19:29 ` [RFC V1 08/10] KVM: selftests: Make ucall work with encrypted guests Peter Gonda
2022-07-19 15:43   ` Andrew Jones
2022-07-27 13:38     ` Peter Gonda
2022-07-27 13:56       ` Andrew Jones
2022-07-27 14:07         ` Peter Gonda
2022-07-15 19:29 ` [RFC V1 09/10] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
2022-07-15 19:29 ` [RFC V1 10/10] KVM: selftests: Add simple sev vm testing Peter Gonda
