kvm.vger.kernel.org archive mirror
* [V4 0/8] KVM: selftests: Add simple SEV test
@ 2022-08-29 17:10 Peter Gonda
  2022-08-29 17:10 ` [V4 1/8] KVM: selftests: move vm_phy_pages_alloc() earlier in file Peter Gonda
                   ` (7 more replies)
  0 siblings, 8 replies; 24+ messages in thread
From: Peter Gonda @ 2022-08-29 17:10 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, Peter Gonda

This patch series continues the work Michael Roth has done to support
SEV guests in selftests. It builds on top of the series Sean
Christopherson sent to support ucalls from SEV guests, and adds a very
simple version of the SEV selftest Michael originally proposed.

V4
 * Rebased on top of seanjc@'s latest ucall pool series:
   https://lore.kernel.org/linux-arm-kernel/20220825232522.3997340-8-seanjc@google.com/
 * Fixed up review comments from seanjc@.
 * Switched authorship on 2 patches because of significant changes; added
   Michael as Suggested-by or Originally-by.

V3
 * Addressed more of andrew.jones@'s feedback on the ucall patches.
 * Fixed the build on non-x86 architectures.

V2
 * Dropped RFC tag
 * Correctly separated Sean's ucall patches into 2 as originally
   intended.
 * Addressed andrew.jones@'s feedback on the ucall patches.
 * Fixed ucall pool usage to work on other architectures.

V1
 * https://lore.kernel.org/all/20220715192956.1873315-1-pgonda@google.com/

Michael Roth (5):
  KVM: selftests: move vm_phy_pages_alloc() earlier in file
  KVM: selftests: sparsebit: add const where appropriate
  KVM: selftests: add hooks for managing encrypted guest memory
  KVM: selftests: handle encryption bits in page tables
  KVM: selftests: add support for encrypted vm_vaddr_* allocations

Peter Gonda (3):
  KVM: selftests: add library for creating/interacting with SEV guests
  KVM: selftests: Update ucall pool to allocate from shared memory
  KVM: selftests: Add simple sev vm testing

 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   2 +
 .../selftests/kvm/include/kvm_util_base.h     |  23 ++
 .../testing/selftests/kvm/include/sparsebit.h |  36 +--
 .../selftests/kvm/include/x86_64/sev.h        |  47 +++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 267 +++++++++++++-----
 tools/testing/selftests/kvm/lib/sparsebit.c   |  48 ++--
 .../testing/selftests/kvm/lib/ucall_common.c  |   2 +-
 .../selftests/kvm/lib/x86_64/processor.c      |  15 +-
 tools/testing/selftests/kvm/lib/x86_64/sev.c  | 232 +++++++++++++++
 .../selftests/kvm/x86_64/sev_all_boot_test.c  | 127 +++++++++
 11 files changed, 674 insertions(+), 126 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c

-- 
2.37.2.672.g94769d06f0-goog



* [V4 1/8] KVM: selftests: move vm_phy_pages_alloc() earlier in file
  2022-08-29 17:10 [V4 0/8] KVM: selftests: Add simple SEV test Peter Gonda
@ 2022-08-29 17:10 ` Peter Gonda
  2022-10-06 17:35   ` Sean Christopherson
  2022-08-29 17:10 ` [V4 2/8] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: Peter Gonda @ 2022-08-29 17:10 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

Subsequent patches will break some of this code out into file-local
helper functions, which will be used by functions like vm_vaddr_alloc()
that are currently defined earlier in the file, so a forward
declaration would be needed.

Instead, move it earlier in the file, just above vm_vaddr_alloc() and
friends, which are the main users.

Reviewed-by: Mingwei Zhang <mizhang@google.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 145 ++++++++++-----------
 1 file changed, 72 insertions(+), 73 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 846f9f6c5a17..06559994711e 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1100,6 +1100,78 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
 	return vcpu;
 }
 
+/*
+ * Physical Contiguous Page Allocator
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   num - number of pages
+ *   paddr_min - Physical address minimum
+ *   memslot - Memory region to allocate page from
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   Starting physical address
+ *
+ * Within the VM specified by vm, locates a range of available physical
+ * pages at or above paddr_min. If found, the pages are marked as in use
+ * and their base address is returned. A TEST_ASSERT failure occurs if
+ * not enough pages are available at or above paddr_min.
+ */
+vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			      vm_paddr_t paddr_min, uint32_t memslot)
+{
+	struct userspace_mem_region *region;
+	sparsebit_idx_t pg, base;
+
+	TEST_ASSERT(num > 0, "Must allocate at least one page");
+
+	TEST_ASSERT((paddr_min % vm->page_size) == 0,
+		"Min physical address not divisible by page size.\n paddr_min: 0x%lx page_size: 0x%x",
+		paddr_min, vm->page_size);
+
+	region = memslot2region(vm, memslot);
+	base = pg = paddr_min >> vm->page_shift;
+
+	do {
+		for (; pg < base + num; ++pg) {
+			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
+				base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
+				break;
+			}
+		}
+	} while (pg && pg != base + num);
+
+	if (pg == 0) {
+		fprintf(stderr,
+			"No guest physical page available, paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
+			paddr_min, vm->page_size, memslot);
+		fputs("---- vm dump ----\n", stderr);
+		vm_dump(stderr, vm, 2);
+		abort();
+	}
+
+	for (pg = base; pg < base + num; ++pg)
+		sparsebit_clear(region->unused_phy_pages, pg);
+
+	return base * vm->page_size;
+}
+
+vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
+			     uint32_t memslot)
+{
+	return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
+}
+
+/* Arbitrary minimum physical address used for virtual translation tables. */
+#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
+
+vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
+{
+	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+}
+
 /*
  * VM Virtual Address Unused Gap
  *
@@ -1746,79 +1818,6 @@ const char *exit_reason_str(unsigned int exit_reason)
 	return "Unknown";
 }
 
-/*
- * Physical Contiguous Page Allocator
- *
- * Input Args:
- *   vm - Virtual Machine
- *   num - number of pages
- *   paddr_min - Physical address minimum
- *   memslot - Memory region to allocate page from
- *
- * Output Args: None
- *
- * Return:
- *   Starting physical address
- *
- * Within the VM specified by vm, locates a range of available physical
- * pages at or above paddr_min. If found, the pages are marked as in use
- * and their base address is returned. A TEST_ASSERT failure occurs if
- * not enough pages are available at or above paddr_min.
- */
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot)
-{
-	struct userspace_mem_region *region;
-	sparsebit_idx_t pg, base;
-
-	TEST_ASSERT(num > 0, "Must allocate at least one page");
-
-	TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
-		"not divisible by page size.\n"
-		"  paddr_min: 0x%lx page_size: 0x%x",
-		paddr_min, vm->page_size);
-
-	region = memslot2region(vm, memslot);
-	base = pg = paddr_min >> vm->page_shift;
-
-	do {
-		for (; pg < base + num; ++pg) {
-			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
-				base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
-				break;
-			}
-		}
-	} while (pg && pg != base + num);
-
-	if (pg == 0) {
-		fprintf(stderr, "No guest physical page available, "
-			"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
-			paddr_min, vm->page_size, memslot);
-		fputs("---- vm dump ----\n", stderr);
-		vm_dump(stderr, vm, 2);
-		abort();
-	}
-
-	for (pg = base; pg < base + num; ++pg)
-		sparsebit_clear(region->unused_phy_pages, pg);
-
-	return base * vm->page_size;
-}
-
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
-			     uint32_t memslot)
-{
-	return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
-}
-
-/* Arbitrary minimum physical address used for virtual translation tables. */
-#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
-
-vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
-{
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
-}
-
 /*
  * Address Guest Virtual to Host Virtual
  *
-- 
2.37.2.672.g94769d06f0-goog



* [V4 2/8] KVM: selftests: sparsebit: add const where appropriate
  2022-08-29 17:10 [V4 0/8] KVM: selftests: Add simple SEV test Peter Gonda
  2022-08-29 17:10 ` [V4 1/8] KVM: selftests: move vm_phy_pages_alloc() earlier in file Peter Gonda
@ 2022-08-29 17:10 ` Peter Gonda
  2022-08-29 17:10 ` [V4 3/8] KVM: selftests: add hooks for managing encrypted guest memory Peter Gonda
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 24+ messages in thread
From: Peter Gonda @ 2022-08-29 17:10 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

Subsequent patches will introduce an encryption bitmap in kvm_util that
tests will be able to access in a read-only fashion, via a
const sparsebit pointer. To avoid warnings and the need to add casts
everywhere, add const to the various sparsebit functions that are
applicable for read-only usage of a sparsebit.

Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 .../testing/selftests/kvm/include/sparsebit.h | 36 +++++++-------
 tools/testing/selftests/kvm/lib/sparsebit.c   | 48 +++++++++----------
 2 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/sparsebit.h b/tools/testing/selftests/kvm/include/sparsebit.h
index 12a9a4b9cead..fb5170d57fcb 100644
--- a/tools/testing/selftests/kvm/include/sparsebit.h
+++ b/tools/testing/selftests/kvm/include/sparsebit.h
@@ -30,26 +30,26 @@ typedef uint64_t sparsebit_num_t;
 
 struct sparsebit *sparsebit_alloc(void);
 void sparsebit_free(struct sparsebit **sbitp);
-void sparsebit_copy(struct sparsebit *dstp, struct sparsebit *src);
+void sparsebit_copy(struct sparsebit *dstp, const struct sparsebit *src);
 
-bool sparsebit_is_set(struct sparsebit *sbit, sparsebit_idx_t idx);
-bool sparsebit_is_set_num(struct sparsebit *sbit,
+bool sparsebit_is_set(const struct sparsebit *sbit, sparsebit_idx_t idx);
+bool sparsebit_is_set_num(const struct sparsebit *sbit,
 			  sparsebit_idx_t idx, sparsebit_num_t num);
-bool sparsebit_is_clear(struct sparsebit *sbit, sparsebit_idx_t idx);
-bool sparsebit_is_clear_num(struct sparsebit *sbit,
+bool sparsebit_is_clear(const struct sparsebit *sbit, sparsebit_idx_t idx);
+bool sparsebit_is_clear_num(const struct sparsebit *sbit,
 			    sparsebit_idx_t idx, sparsebit_num_t num);
-sparsebit_num_t sparsebit_num_set(struct sparsebit *sbit);
-bool sparsebit_any_set(struct sparsebit *sbit);
-bool sparsebit_any_clear(struct sparsebit *sbit);
-bool sparsebit_all_set(struct sparsebit *sbit);
-bool sparsebit_all_clear(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_first_set(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_first_clear(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_next_set(struct sparsebit *sbit, sparsebit_idx_t prev);
-sparsebit_idx_t sparsebit_next_clear(struct sparsebit *sbit, sparsebit_idx_t prev);
-sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *sbit,
+sparsebit_num_t sparsebit_num_set(const struct sparsebit *sbit);
+bool sparsebit_any_set(const struct sparsebit *sbit);
+bool sparsebit_any_clear(const struct sparsebit *sbit);
+bool sparsebit_all_set(const struct sparsebit *sbit);
+bool sparsebit_all_clear(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_first_set(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_next_set(const struct sparsebit *sbit, sparsebit_idx_t prev);
+sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *sbit, sparsebit_idx_t prev);
+sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *sbit,
 				       sparsebit_idx_t start, sparsebit_num_t num);
-sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *sbit,
+sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *sbit,
 					 sparsebit_idx_t start, sparsebit_num_t num);
 
 void sparsebit_set(struct sparsebit *sbitp, sparsebit_idx_t idx);
@@ -62,9 +62,9 @@ void sparsebit_clear_num(struct sparsebit *sbitp,
 			 sparsebit_idx_t start, sparsebit_num_t num);
 void sparsebit_clear_all(struct sparsebit *sbitp);
 
-void sparsebit_dump(FILE *stream, struct sparsebit *sbit,
+void sparsebit_dump(FILE *stream, const struct sparsebit *sbit,
 		    unsigned int indent);
-void sparsebit_validate_internal(struct sparsebit *sbit);
+void sparsebit_validate_internal(const struct sparsebit *sbit);
 
 #ifdef __cplusplus
 }
diff --git a/tools/testing/selftests/kvm/lib/sparsebit.c b/tools/testing/selftests/kvm/lib/sparsebit.c
index 50e0cf41a7dd..6777a5b1fbd2 100644
--- a/tools/testing/selftests/kvm/lib/sparsebit.c
+++ b/tools/testing/selftests/kvm/lib/sparsebit.c
@@ -202,7 +202,7 @@ static sparsebit_num_t node_num_set(struct node *nodep)
 /* Returns a pointer to the node that describes the
  * lowest bit index.
  */
-static struct node *node_first(struct sparsebit *s)
+static struct node *node_first(const struct sparsebit *s)
 {
 	struct node *nodep;
 
@@ -216,7 +216,7 @@ static struct node *node_first(struct sparsebit *s)
  * lowest bit index > the index of the node pointed to by np.
  * Returns NULL if no node with a higher index exists.
  */
-static struct node *node_next(struct sparsebit *s, struct node *np)
+static struct node *node_next(const struct sparsebit *s, struct node *np)
 {
 	struct node *nodep = np;
 
@@ -244,7 +244,7 @@ static struct node *node_next(struct sparsebit *s, struct node *np)
  * highest index < the index of the node pointed to by np.
  * Returns NULL if no node with a lower index exists.
  */
-static struct node *node_prev(struct sparsebit *s, struct node *np)
+static struct node *node_prev(const struct sparsebit *s, struct node *np)
 {
 	struct node *nodep = np;
 
@@ -273,7 +273,7 @@ static struct node *node_prev(struct sparsebit *s, struct node *np)
  * subtree and duplicates the bit settings to the newly allocated nodes.
  * Returns the newly allocated copy of subtree.
  */
-static struct node *node_copy_subtree(struct node *subtree)
+static struct node *node_copy_subtree(const struct node *subtree)
 {
 	struct node *root;
 
@@ -307,7 +307,7 @@ static struct node *node_copy_subtree(struct node *subtree)
  * index is within the bits described by the mask bits or the number of
  * contiguous bits set after the mask.  Returns NULL if there is no such node.
  */
-static struct node *node_find(struct sparsebit *s, sparsebit_idx_t idx)
+static struct node *node_find(const struct sparsebit *s, sparsebit_idx_t idx)
 {
 	struct node *nodep;
 
@@ -393,7 +393,7 @@ static struct node *node_add(struct sparsebit *s, sparsebit_idx_t idx)
 }
 
 /* Returns whether all the bits in the sparsebit array are set.  */
-bool sparsebit_all_set(struct sparsebit *s)
+bool sparsebit_all_set(const struct sparsebit *s)
 {
 	/*
 	 * If any nodes there must be at least one bit set.  Only case
@@ -776,7 +776,7 @@ static void node_reduce(struct sparsebit *s, struct node *nodep)
 /* Returns whether the bit at the index given by idx, within the
  * sparsebit array is set or not.
  */
-bool sparsebit_is_set(struct sparsebit *s, sparsebit_idx_t idx)
+bool sparsebit_is_set(const struct sparsebit *s, sparsebit_idx_t idx)
 {
 	struct node *nodep;
 
@@ -922,7 +922,7 @@ static inline sparsebit_idx_t node_first_clear(struct node *nodep, int start)
  * used by test cases after they detect an unexpected condition, as a means
  * to capture diagnostic information.
  */
-static void sparsebit_dump_internal(FILE *stream, struct sparsebit *s,
+static void sparsebit_dump_internal(FILE *stream, const struct sparsebit *s,
 	unsigned int indent)
 {
 	/* Dump the contents of s */
@@ -970,7 +970,7 @@ void sparsebit_free(struct sparsebit **sbitp)
  * sparsebit_alloc().  It can though already have bits set, which
  * if different from src will be cleared.
  */
-void sparsebit_copy(struct sparsebit *d, struct sparsebit *s)
+void sparsebit_copy(struct sparsebit *d, const struct sparsebit *s)
 {
 	/* First clear any bits already set in the destination */
 	sparsebit_clear_all(d);
@@ -982,7 +982,7 @@ void sparsebit_copy(struct sparsebit *d, struct sparsebit *s)
 }
 
 /* Returns whether num consecutive bits starting at idx are all set.  */
-bool sparsebit_is_set_num(struct sparsebit *s,
+bool sparsebit_is_set_num(const struct sparsebit *s,
 	sparsebit_idx_t idx, sparsebit_num_t num)
 {
 	sparsebit_idx_t next_cleared;
@@ -1006,14 +1006,14 @@ bool sparsebit_is_set_num(struct sparsebit *s,
 }
 
 /* Returns whether the bit at the index given by idx.  */
-bool sparsebit_is_clear(struct sparsebit *s,
+bool sparsebit_is_clear(const struct sparsebit *s,
 	sparsebit_idx_t idx)
 {
 	return !sparsebit_is_set(s, idx);
 }
 
 /* Returns whether num consecutive bits starting at idx are all cleared.  */
-bool sparsebit_is_clear_num(struct sparsebit *s,
+bool sparsebit_is_clear_num(const struct sparsebit *s,
 	sparsebit_idx_t idx, sparsebit_num_t num)
 {
 	sparsebit_idx_t next_set;
@@ -1042,13 +1042,13 @@ bool sparsebit_is_clear_num(struct sparsebit *s,
  * value.  Use sparsebit_any_set(), instead of sparsebit_num_set() > 0,
  * to determine if the sparsebit array has any bits set.
  */
-sparsebit_num_t sparsebit_num_set(struct sparsebit *s)
+sparsebit_num_t sparsebit_num_set(const struct sparsebit *s)
 {
 	return s->num_set;
 }
 
 /* Returns whether any bit is set in the sparsebit array.  */
-bool sparsebit_any_set(struct sparsebit *s)
+bool sparsebit_any_set(const struct sparsebit *s)
 {
 	/*
 	 * Nodes only describe set bits.  If any nodes then there
@@ -1071,20 +1071,20 @@ bool sparsebit_any_set(struct sparsebit *s)
 }
 
 /* Returns whether all the bits in the sparsebit array are cleared.  */
-bool sparsebit_all_clear(struct sparsebit *s)
+bool sparsebit_all_clear(const struct sparsebit *s)
 {
 	return !sparsebit_any_set(s);
 }
 
 /* Returns whether all the bits in the sparsebit array are set.  */
-bool sparsebit_any_clear(struct sparsebit *s)
+bool sparsebit_any_clear(const struct sparsebit *s)
 {
 	return !sparsebit_all_set(s);
 }
 
 /* Returns the index of the first set bit.  Abort if no bits are set.
  */
-sparsebit_idx_t sparsebit_first_set(struct sparsebit *s)
+sparsebit_idx_t sparsebit_first_set(const struct sparsebit *s)
 {
 	struct node *nodep;
 
@@ -1098,7 +1098,7 @@ sparsebit_idx_t sparsebit_first_set(struct sparsebit *s)
 /* Returns the index of the first cleared bit.  Abort if
  * no bits are cleared.
  */
-sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s)
+sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *s)
 {
 	struct node *nodep1, *nodep2;
 
@@ -1152,7 +1152,7 @@ sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s)
 /* Returns index of next bit set within s after the index given by prev.
  * Returns 0 if there are no bits after prev that are set.
  */
-sparsebit_idx_t sparsebit_next_set(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_set(const struct sparsebit *s,
 	sparsebit_idx_t prev)
 {
 	sparsebit_idx_t lowest_possible = prev + 1;
@@ -1245,7 +1245,7 @@ sparsebit_idx_t sparsebit_next_set(struct sparsebit *s,
 /* Returns index of next bit cleared within s after the index given by prev.
  * Returns 0 if there are no bits after prev that are cleared.
  */
-sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *s,
 	sparsebit_idx_t prev)
 {
 	sparsebit_idx_t lowest_possible = prev + 1;
@@ -1301,7 +1301,7 @@ sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
  * and returns the index of the first sequence of num consecutively set
  * bits.  Returns a value of 0 of no such sequence exists.
  */
-sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *s,
 	sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1336,7 +1336,7 @@ sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
  * and returns the index of the first sequence of num consecutively cleared
  * bits.  Returns a value of 0 of no such sequence exists.
  */
-sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *s,
 	sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1584,7 +1584,7 @@ static size_t display_range(FILE *stream, sparsebit_idx_t low,
  * contiguous bits.  This is done because '-' is used to specify command-line
  * options, and sometimes ranges are specified as command-line arguments.
  */
-void sparsebit_dump(FILE *stream, struct sparsebit *s,
+void sparsebit_dump(FILE *stream, const struct sparsebit *s,
 	unsigned int indent)
 {
 	size_t current_line_len = 0;
@@ -1682,7 +1682,7 @@ void sparsebit_dump(FILE *stream, struct sparsebit *s,
  * s.  On error, diagnostic information is printed to stderr and
  * abort is called.
  */
-void sparsebit_validate_internal(struct sparsebit *s)
+void sparsebit_validate_internal(const struct sparsebit *s)
 {
 	bool error_detected = false;
 	struct node *nodep, *prev = NULL;
-- 
2.37.2.672.g94769d06f0-goog



* [V4 3/8] KVM: selftests: add hooks for managing encrypted guest memory
  2022-08-29 17:10 [V4 0/8] KVM: selftests: Add simple SEV test Peter Gonda
  2022-08-29 17:10 ` [V4 1/8] KVM: selftests: move vm_phy_pages_alloc() earlier in file Peter Gonda
  2022-08-29 17:10 ` [V4 2/8] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
@ 2022-08-29 17:10 ` Peter Gonda
  2022-10-06 17:48   ` Sean Christopherson
  2022-08-29 17:10 ` [V4 4/8] KVM: selftests: handle encryption bits in page tables Peter Gonda
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: Peter Gonda @ 2022-08-29 17:10 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

VM implementations that make use of encrypted memory need a way to
configure things like the encryption/shared bit position for page
table handling, the default encryption policy for internal allocations
made by the core library, and a way to fetch the list/bitmap of
encrypted pages on which to perform the actual memory encryption. Add
an interface to configure these parameters. Also introduce a sparsebit
map to track allocations/mappings that should be treated as encrypted,
and provide a way for VM implementations to retrieve it to handle
operations related to memory encryption.

Reviewed-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h     | 17 ++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 52 +++++++++++++++++--
 2 files changed, 66 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 59d52b58a1a6..5ecde5ad4c2f 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -33,6 +33,7 @@ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
 struct userspace_mem_region {
 	struct kvm_userspace_memory_region region;
 	struct sparsebit *unused_phy_pages;
+	struct sparsebit *encrypted_phy_pages;
 	int fd;
 	off_t offset;
 	void *host_mem;
@@ -65,6 +66,14 @@ struct userspace_mem_regions {
 	DECLARE_HASHTABLE(slot_hash, 9);
 };
 
+/* Memory encryption policy/configuration. */
+struct vm_memcrypt {
+	bool enabled;
+	int8_t enc_by_default;
+	bool has_enc_bit;
+	int8_t enc_bit;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -89,6 +98,7 @@ struct kvm_vm {
 	vm_vaddr_t idt;
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
+	struct vm_memcrypt memcrypt;
 
 	/* Cache of information for binary stats interface */
 	int stats_fd;
@@ -849,4 +859,11 @@ static inline int __vm_disable_nx_huge_pages(struct kvm_vm *vm)
 	return __vm_enable_cap(vm, KVM_CAP_VM_DISABLE_NX_HUGE_PAGES, 0);
 }
 
+void vm_set_memory_encryption(struct kvm_vm *vm, bool enc_by_default, bool has_enc_bit,
+			      uint8_t enc_bit);
+
+const struct sparsebit *vm_get_encrypted_phy_pages(struct kvm_vm *vm, int slot,
+						   vm_paddr_t *gpa_start,
+						   uint64_t *size);
+
 #endif /* SELFTEST_KVM_UTIL_BASE_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 06559994711e..53b9a509c1d5 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -553,6 +553,7 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
 	vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region->region);
 
 	sparsebit_free(&region->unused_phy_pages);
+	sparsebit_free(&region->encrypted_phy_pages);
 	ret = munmap(region->mmap_start, region->mmap_size);
 	TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
 
@@ -893,6 +894,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	}
 
 	region->unused_phy_pages = sparsebit_alloc();
+	region->encrypted_phy_pages = sparsebit_alloc();
 	sparsebit_set_num(region->unused_phy_pages,
 		guest_paddr >> vm->page_shift, npages);
 	region->region.slot = slot;
@@ -1108,6 +1110,7 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
  *   num - number of pages
  *   paddr_min - Physical address minimum
  *   memslot - Memory region to allocate page from
+ *   encrypt - Whether to treat the pages as encrypted
  *
  * Output Args: None
  *
@@ -1119,8 +1122,9 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
  * and their base address is returned. A TEST_ASSERT failure occurs if
  * not enough pages are available at or above paddr_min.
  */
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot)
+static vm_paddr_t
+_vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, vm_paddr_t paddr_min,
+		    uint32_t memslot, bool encrypt)
 {
 	struct userspace_mem_region *region;
 	sparsebit_idx_t pg, base;
@@ -1152,12 +1156,22 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 		abort();
 	}
 
-	for (pg = base; pg < base + num; ++pg)
+	for (pg = base; pg < base + num; ++pg) {
 		sparsebit_clear(region->unused_phy_pages, pg);
+		if (encrypt)
+			sparsebit_set(region->encrypted_phy_pages, pg);
+	}
 
 	return base * vm->page_size;
 }
 
+vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			      vm_paddr_t paddr_min, uint32_t memslot)
+{
+	return _vm_phy_pages_alloc(vm, num, paddr_min, memslot,
+				   vm->memcrypt.enc_by_default);
+}
+
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot)
 {
@@ -1741,6 +1755,10 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 			region->host_mem);
 		fprintf(stream, "%*sunused_phy_pages: ", indent + 2, "");
 		sparsebit_dump(stream, region->unused_phy_pages, 0);
+		if (vm->memcrypt.enabled) {
+			fprintf(stream, "%*sencrypted_phy_pages: ", indent + 2, "");
+			sparsebit_dump(stream, region->encrypted_phy_pages, 0);
+		}
 	}
 	fprintf(stream, "%*sMapped Virtual Pages:\n", indent, "");
 	sparsebit_dump(stream, vm->vpages_mapped, indent + 2);
@@ -1989,3 +2007,31 @@ void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
 		break;
 	}
 }
+
+void vm_set_memory_encryption(struct kvm_vm *vm, bool enc_by_default, bool has_enc_bit,
+			      uint8_t enc_bit)
+{
+	vm->memcrypt.enabled = true;
+	vm->memcrypt.enc_by_default = enc_by_default;
+	vm->memcrypt.has_enc_bit = has_enc_bit;
+	vm->memcrypt.enc_bit = enc_bit;
+}
+
+const struct sparsebit *
+vm_get_encrypted_phy_pages(struct kvm_vm *vm, int slot, vm_paddr_t *gpa_start,
+			   uint64_t *size)
+{
+	struct userspace_mem_region *region;
+
+	if (!vm->memcrypt.enabled)
+		return NULL;
+
+	region = memslot2region(vm, slot);
+	if (!region)
+		return NULL;
+
+	*size = region->region.memory_size;
+	*gpa_start = region->region.guest_phys_addr;
+
+	return region->encrypted_phy_pages;
+}
-- 
2.37.2.672.g94769d06f0-goog



* [V4 4/8] KVM: selftests: handle encryption bits in page tables
  2022-08-29 17:10 [V4 0/8] KVM: selftests: Add simple SEV test Peter Gonda
                   ` (2 preceding siblings ...)
  2022-08-29 17:10 ` [V4 3/8] KVM: selftests: add hooks for managing encrypted guest memory Peter Gonda
@ 2022-08-29 17:10 ` Peter Gonda
  2022-10-06 17:34   ` Sean Christopherson
  2022-08-29 17:10 ` [V4 5/8] KVM: selftests: add support for encrypted vm_vaddr_* allocations Peter Gonda
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: Peter Gonda @ 2022-08-29 17:10 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

SEV guests rely on an encryption bit which resides within the range that
current code treats as address bits. Guest code will expect these bits
to be set appropriately in their page tables, whereas the rest of the
kvm_util functions will generally expect these bits to not be present.
Introduce addr_gpa2raw()/addr_raw2gpa() to add/remove these bits, and
use them where appropriate.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h     |  2 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 55 ++++++++++++++++++-
 .../selftests/kvm/lib/x86_64/processor.c      | 15 +++--
 3 files changed, 66 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 5ecde5ad4c2f..dda8467d1434 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -401,6 +401,8 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
 void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
 vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
 void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
+vm_paddr_t addr_raw2gpa(struct kvm_vm *vm, vm_vaddr_t gpa_raw);
+vm_paddr_t addr_gpa2raw(struct kvm_vm *vm, vm_vaddr_t gpa);
 
 void vcpu_run(struct kvm_vcpu *vcpu);
 int _vcpu_run(struct kvm_vcpu *vcpu);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 53b9a509c1d5..de13be62d52d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1388,6 +1388,58 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 	}
 }
 
+/*
+ * Mask off any special bits from raw GPA
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   gpa_raw - Raw VM physical address
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   GPA with special bits (e.g. shared/encrypted) masked off.
+ */
+vm_paddr_t addr_raw2gpa(struct kvm_vm *vm, vm_paddr_t gpa_raw)
+{
+	if (!vm->memcrypt.has_enc_bit)
+		return gpa_raw;
+
+	return gpa_raw & ~(1ULL << vm->memcrypt.enc_bit);
+}
+
+/*
+ * Add special/encryption bits to a GPA based on encryption bitmap.
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   gpa - VM physical address
+ *
+ * Output Args: None
+ *
+ * Return:
+ *   GPA with special bits (e.g. shared/encrypted) added in.
+ */
+vm_paddr_t addr_gpa2raw(struct kvm_vm *vm, vm_paddr_t gpa)
+{
+	struct userspace_mem_region *region;
+	sparsebit_idx_t pg;
+	vm_paddr_t gpa_raw = gpa;
+
+	TEST_ASSERT(addr_raw2gpa(vm, gpa) == gpa, "Unexpected bits in GPA: %lx",
+		    gpa);
+
+	if (!vm->memcrypt.has_enc_bit)
+		return gpa;
+
+	region = userspace_mem_region_find(vm, gpa, gpa);
+	pg = gpa >> vm->page_shift;
+	if (sparsebit_is_set(region->encrypted_phy_pages, pg))
+		gpa_raw |= (1ULL << vm->memcrypt.enc_bit);
+
+	return gpa_raw;
+}
+
 /*
  * Address VM Physical to Host Virtual
  *
@@ -1405,9 +1457,10 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
  * address providing the memory to the vm physical address is returned.
  * A TEST_ASSERT failure occurs if no region containing gpa exists.
  */
-void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
+void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa_raw)
 {
 	struct userspace_mem_region *region;
+	vm_paddr_t gpa = addr_raw2gpa(vm, gpa_raw);
 
 	region = userspace_mem_region_find(vm, gpa, gpa);
 	if (!region) {
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 2e6e61bbe81b..b2df259ce706 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -118,7 +118,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 
 	/* If needed, create page map l4 table. */
 	if (!vm->pgd_created) {
-		vm->pgd = vm_alloc_page_table(vm);
+		vm->pgd = addr_gpa2raw(vm, vm_alloc_page_table(vm));
 		vm->pgd_created = true;
 	}
 }
@@ -140,13 +140,15 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
 				       int target_level)
 {
 	uint64_t *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level);
+	uint64_t paddr_raw = addr_gpa2raw(vm, paddr);
 
 	if (!(*pte & PTE_PRESENT_MASK)) {
 		*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK;
 		if (current_level == target_level)
-			*pte |= PTE_LARGE_MASK | (paddr & PHYSICAL_PAGE_MASK);
+			*pte |= PTE_LARGE_MASK | (paddr_raw & PHYSICAL_PAGE_MASK);
 		else
-			*pte |= vm_alloc_page_table(vm) & PHYSICAL_PAGE_MASK;
+			*pte |= addr_gpa2raw(vm, vm_alloc_page_table(vm)) & PHYSICAL_PAGE_MASK;
+
 	} else {
 		/*
 		 * Entry already present.  Assert that the caller doesn't want
@@ -184,6 +186,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 		    "Physical address beyond maximum supported,\n"
 		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		    paddr, vm->max_gfn, vm->page_size);
+	TEST_ASSERT(addr_raw2gpa(vm, paddr) == paddr,
+		    "Unexpected bits in paddr: %lx", paddr);
 
 	/*
 	 * Allocate upper level page tables, if not already present.  Return
@@ -206,7 +210,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 	pte = virt_get_pte(vm, PTE_GET_PFN(*pde), vaddr, PG_LEVEL_4K);
 	TEST_ASSERT(!(*pte & PTE_PRESENT_MASK),
 		    "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
-	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
+	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK |
+	       (addr_gpa2raw(vm, paddr) & PHYSICAL_PAGE_MASK);
 }
 
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
@@ -515,7 +520,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	if (!(pte[index[0]] & PTE_PRESENT_MASK))
 		goto unmapped_gva;
 
-	return (PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & ~PAGE_MASK);
+	return addr_raw2gpa(vm, PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & ~PAGE_MASK);
 
 unmapped_gva:
 	TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
-- 
2.37.2.672.g94769d06f0-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [V4 5/8] KVM: selftests: add support for encrypted vm_vaddr_* allocations
  2022-08-29 17:10 [V4 0/8] KVM: selftests: Add simple SEV test Peter Gonda
                   ` (3 preceding siblings ...)
  2022-08-29 17:10 ` [V4 4/8] KVM: selftests: handle encryption bits in page tables Peter Gonda
@ 2022-08-29 17:10 ` Peter Gonda
  2022-08-29 17:10 ` [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 24+ messages in thread
From: Peter Gonda @ 2022-08-29 17:10 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, Peter Gonda

From: Michael Roth <michael.roth@amd.com>

The default policy for whether to handle allocations as encrypted or
shared pages is currently determined by vm_phy_pages_alloc(), which in
turn uses the policy defined by vm->memcrypt.enc_by_default.

Test programs may wish to allocate shared vaddrs for things like
sharing memory with the guest. Since enc_by_default will be true for
SEV guests (encryption is required for the initial ELF binary and page
tables to become part of the initial guest payload), an interface is
needed to explicitly request shared pages.

Implement this by splitting the common code out from vm_vaddr_alloc()
and introducing a new vm_vaddr_alloc_shared().

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h     |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 21 +++++++++++++++----
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index dda8467d1434..489e8c833e5f 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -392,6 +392,7 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index de13be62d52d..ffdf39a5b12d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1273,12 +1273,13 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 }
 
 /*
- * VM Virtual Address Allocate
+ * VM Virtual Address Allocate Shared/Encrypted
  *
  * Input Args:
  *   vm - Virtual Machine
  *   sz - Size in bytes
  *   vaddr_min - Minimum starting virtual address
+ *   encrypt - Whether the region should be handled as encrypted
  *
  * Output Args: None
  *
@@ -1291,13 +1292,15 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
  * a unique set of pages, with the minimum real allocation being at least
  * a page.
  */
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+static vm_vaddr_t
+_vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, bool encrypt)
 {
 	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
 
 	virt_pgd_alloc(vm);
-	vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages,
-					      KVM_UTIL_MIN_PFN * vm->page_size, 0);
+	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages,
+					       KVM_UTIL_MIN_PFN * vm->page_size,
+					       0, encrypt);
 
 	/*
 	 * Find an unused range of virtual page addresses of at least
@@ -1318,6 +1321,16 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
 	return vaddr_start;
 }
 
+vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+{
+	return _vm_vaddr_alloc(vm, sz, vaddr_min, vm->memcrypt.enc_by_default);
+}
+
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+{
+	return _vm_vaddr_alloc(vm, sz, vaddr_min, false);
+}
+
 /*
  * VM Virtual Address Allocate Pages
  *
-- 
2.37.2.672.g94769d06f0-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-08-29 17:10 [V4 0/8] KVM: selftests: Add simple SEV test Peter Gonda
                   ` (4 preceding siblings ...)
  2022-08-29 17:10 ` [V4 5/8] KVM: selftests: add support for encrypted vm_vaddr_* allocations Peter Gonda
@ 2022-08-29 17:10 ` Peter Gonda
  2022-10-06 18:25   ` Sean Christopherson
  2022-08-29 17:10 ` [V4 7/8] KVM: selftests: Update ucall pool to allocate from shared memory Peter Gonda
  2022-08-29 17:10 ` [V4 8/8] KVM: selftests: Add simple sev vm testing Peter Gonda
  7 siblings, 1 reply; 24+ messages in thread
From: Peter Gonda @ 2022-08-29 17:10 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, Peter Gonda

Add interfaces to allow tests to create/manage SEV guests. The
additional state associated with these guests is encapsulated in a new
struct sev_vm, which is a light wrapper around struct kvm_vm. These
VMs will use vm_set_memory_encryption() and vm_get_encrypted_phy_pages()
under the covers to configure and sync up with the core kvm_util
library on what should/shouldn't be treated as encrypted memory.

Originally-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/kvm_util_base.h     |   3 +
 .../selftests/kvm/include/x86_64/sev.h        |  47 ++++
 tools/testing/selftests/kvm/lib/x86_64/sev.c  | 232 ++++++++++++++++++
 4 files changed, 283 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 23649c5d42fc..0a70e50f0498 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -56,6 +56,7 @@ LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
 LIBKVM_x86_64 += lib/x86_64/vmx.c
+LIBKVM_x86_64 += lib/x86_64/sev.c
 
 LIBKVM_aarch64 += lib/aarch64/gic.c
 LIBKVM_aarch64 += lib/aarch64/gic_v3.c
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 489e8c833e5f..0927e262623d 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -69,6 +69,7 @@ struct userspace_mem_regions {
 /* Memory encryption policy/configuration. */
 struct vm_memcrypt {
 	bool enabled;
+	bool encrypted;
 	int8_t enc_by_default;
 	bool has_enc_bit;
 	int8_t enc_bit;
@@ -831,6 +832,8 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
 
 static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 {
+	TEST_ASSERT(!vm->memcrypt.encrypted,
+		    "Encrypted guests have their page tables encrypted so gva2gpa conversions are not possible.");
 	return addr_arch_gva2gpa(vm, gva);
 }
 
diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
new file mode 100644
index 000000000000..810d36377e8c
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Helpers used for SEV guests
+ *
+ * Copyright (C) 2021 Advanced Micro Devices
+ */
+#ifndef SELFTEST_KVM_SEV_H
+#define SELFTEST_KVM_SEV_H
+
+#include <stdint.h>
+#include <stdbool.h>
+#include "kvm_util.h"
+
+/* Makefile might set this separately for user-overrides */
+#ifndef SEV_DEV_PATH
+#define SEV_DEV_PATH		"/dev/sev"
+#endif
+
+#define SEV_FW_REQ_VER_MAJOR	0
+#define SEV_FW_REQ_VER_MINOR	17
+
+#define SEV_POLICY_NO_DBG	(1UL << 0)
+#define SEV_POLICY_ES		(1UL << 2)
+
+enum sev_guest_state {
+	SEV_GSTATE_UNINIT = 0,
+	SEV_GSTATE_LUPDATE,
+	SEV_GSTATE_LSECRET,
+	SEV_GSTATE_RUNNING,
+};
+
+struct sev_vm {
+	struct kvm_vm *vm;
+	int fd;
+	int enc_bit;
+	uint32_t sev_policy;
+};
+
+void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data);
+
+struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages);
+void sev_vm_free(struct sev_vm *sev);
+void sev_vm_launch(struct sev_vm *sev);
+void sev_vm_launch_measure(struct sev_vm *sev, uint8_t *measurement);
+void sev_vm_launch_finish(struct sev_vm *sev);
+
+#endif /* SELFTEST_KVM_SEV_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
new file mode 100644
index 000000000000..dd8d708fd01a
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -0,0 +1,232 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Helpers used for SEV guests
+ *
+ * Copyright (C) 2021 Advanced Micro Devices
+ */
+
+#include <stdint.h>
+#include <stdbool.h>
+#include "kvm_util.h"
+#include "linux/psp-sev.h"
+#include "processor.h"
+#include "sev.h"
+
+#define CPUID_MEM_ENC_LEAF 0x8000001f
+#define CPUID_EBX_CBIT_MASK 0x3f
+
+/* Common SEV helpers/accessors. */
+
+void sev_ioctl(int sev_fd, int cmd, void *data)
+{
+	int ret;
+	struct sev_issue_cmd arg;
+
+	arg.cmd = cmd;
+	arg.data = (unsigned long)data;
+	ret = ioctl(sev_fd, SEV_ISSUE_CMD, &arg);
+	TEST_ASSERT(ret == 0,
+		    "SEV ioctl %d failed, error: %d, fw_error: %d",
+		    cmd, ret, arg.error);
+}
+
+void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data)
+{
+	struct kvm_sev_cmd arg = {0};
+	int ret;
+
+	arg.id = cmd;
+	arg.sev_fd = sev->fd;
+	arg.data = (__u64)data;
+
+	ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_OP, &arg);
+	TEST_ASSERT(ret == 0,
+		    "SEV KVM ioctl %d failed, rc: %i errno: %i (%s), fw_error: %d",
+		    cmd, ret, errno, strerror(errno), arg.error);
+}
+
+/* Local helpers. */
+
+static void sev_register_user_region(struct sev_vm *sev, void *hva, uint64_t size)
+{
+	struct kvm_enc_region range = {0};
+	int ret;
+
+	pr_debug("%s: hva: %p, size: %lu\n", __func__, hva, size);
+
+	range.addr = (__u64)hva;
+	range.size = size;
+
+	ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
+	TEST_ASSERT(ret == 0, "failed to register user range, errno: %i\n", errno);
+}
+
+static void sev_encrypt_phy_range(struct sev_vm *sev, vm_paddr_t gpa, uint64_t size)
+{
+	struct kvm_sev_launch_update_data ksev_update_data = {0};
+
+	pr_debug("%s: addr: 0x%lx, size: %lu\n", __func__, gpa, size);
+
+	ksev_update_data.uaddr = (__u64)addr_gpa2hva(sev->vm, gpa);
+	ksev_update_data.len = size;
+
+	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_UPDATE_DATA, &ksev_update_data);
+}
+
+static void sev_encrypt(struct sev_vm *sev)
+{
+	const struct sparsebit *enc_phy_pages;
+	struct kvm_vm *vm = sev->vm;
+	sparsebit_idx_t pg = 0;
+	vm_paddr_t gpa_start;
+	uint64_t memory_size;
+	int ctr;
+	struct userspace_mem_region *region;
+
+	hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
+		enc_phy_pages = vm_get_encrypted_phy_pages(
+			sev->vm, region->region.slot, &gpa_start, &memory_size);
+		TEST_ASSERT(enc_phy_pages,
+			    "Unable to retrieve encrypted pages bitmap");
+		while (pg < (memory_size / vm->page_size)) {
+			sparsebit_idx_t pg_cnt;
+
+			if (sparsebit_is_clear(enc_phy_pages, pg)) {
+				pg = sparsebit_next_set(enc_phy_pages, pg);
+				if (!pg)
+					break;
+			}
+
+			pg_cnt = sparsebit_next_clear(enc_phy_pages, pg) - pg;
+			if (pg_cnt <= 0)
+				pg_cnt = 1;
+
+			sev_encrypt_phy_range(sev,
+					      gpa_start + pg * vm->page_size,
+					      pg_cnt * vm->page_size);
+			pg += pg_cnt;
+		}
+	}
+
+	sev->vm->memcrypt.encrypted = true;
+}
+
+/* SEV VM implementation. */
+
+static struct sev_vm *sev_vm_alloc(struct kvm_vm *vm)
+{
+	struct sev_user_data_status sev_status;
+	uint32_t eax, ebx, ecx, edx;
+	struct sev_vm *sev;
+	int sev_fd;
+
+	sev_fd = open(SEV_DEV_PATH, O_RDWR);
+	if (sev_fd < 0) {
+		pr_info("Failed to open SEV device, path: %s, error: %d, skipping test.\n",
+			SEV_DEV_PATH, sev_fd);
+		return NULL;
+	}
+
+	sev_ioctl(sev_fd, SEV_PLATFORM_STATUS, &sev_status);
+
+	if (!(sev_status.api_major > SEV_FW_REQ_VER_MAJOR ||
+	      (sev_status.api_major == SEV_FW_REQ_VER_MAJOR &&
+	       sev_status.api_minor >= SEV_FW_REQ_VER_MINOR))) {
+		pr_info("SEV FW version too old. Have API %d.%d (build: %d), need %d.%d, skipping test.\n",
+			sev_status.api_major, sev_status.api_minor, sev_status.build,
+			SEV_FW_REQ_VER_MAJOR, SEV_FW_REQ_VER_MINOR);
+		return NULL;
+	}
+
+	sev = calloc(1, sizeof(*sev));
+	sev->fd = sev_fd;
+	sev->vm = vm;
+
+	/* Get encryption bit via CPUID. */
+	cpuid(CPUID_MEM_ENC_LEAF, &eax, &ebx, &ecx, &edx);
+	sev->enc_bit = ebx & CPUID_EBX_CBIT_MASK;
+
+	return sev;
+}
+
+void sev_vm_free(struct sev_vm *sev)
+{
+	kvm_vm_free(sev->vm);
+	close(sev->fd);
+	free(sev);
+}
+
+struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
+{
+	struct sev_vm *sev;
+	struct kvm_vm *vm;
+
+	/* Need to handle memslots after init, and after setting memcrypt. */
+	vm = vm_create_barebones();
+	sev = sev_vm_alloc(vm);
+	if (!sev)
+		return NULL;
+	sev->sev_policy = policy;
+
+	kvm_sev_ioctl(sev, KVM_SEV_INIT, NULL);
+
+	vm->vpages_mapped = sparsebit_alloc();
+	vm_set_memory_encryption(vm, true, true, sev->enc_bit);
+	pr_info("SEV cbit: %d\n", sev->enc_bit);
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages, 0);
+	sev_register_user_region(sev, addr_gpa2hva(vm, 0),
+				 npages * vm->page_size);
+
+	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n",
+		sev->sev_policy, npages * vm->page_size / 1024);
+
+	return sev;
+}
+
+void sev_vm_launch(struct sev_vm *sev)
+{
+	struct kvm_sev_launch_start ksev_launch_start = {0};
+	struct kvm_sev_guest_status ksev_status;
+
+	ksev_launch_start.policy = sev->sev_policy;
+	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_START, &ksev_launch_start);
+	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.policy == sev->sev_policy, "Incorrect guest policy.");
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE,
+		    "Unexpected guest state: %d", ksev_status.state);
+
+	ucall_init(sev->vm, 0);
+
+	sev_encrypt(sev);
+}
+
+void sev_vm_launch_measure(struct sev_vm *sev, uint8_t *measurement)
+{
+	struct kvm_sev_launch_measure ksev_launch_measure;
+	struct kvm_sev_guest_status ksev_guest_status;
+
+	ksev_launch_measure.len = 256;
+	ksev_launch_measure.uaddr = (__u64)measurement;
+	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_MEASURE, &ksev_launch_measure);
+
+	/* Measurement causes a state transition, check that. */
+	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_guest_status);
+	TEST_ASSERT(ksev_guest_status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", ksev_guest_status.state);
+}
+
+void sev_vm_launch_finish(struct sev_vm *sev)
+{
+	struct kvm_sev_guest_status ksev_status;
+
+	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE ||
+		    ksev_status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", ksev_status.state);
+
+	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_FINISH, NULL);
+
+	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_RUNNING,
+		    "Unexpected guest state: %d", ksev_status.state);
+}
-- 
2.37.2.672.g94769d06f0-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [V4 7/8] KVM: selftests: Update ucall pool to allocate from shared memory
  2022-08-29 17:10 [V4 0/8] KVM: selftests: Add simple SEV test Peter Gonda
                   ` (5 preceding siblings ...)
  2022-08-29 17:10 ` [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
@ 2022-08-29 17:10 ` Peter Gonda
  2022-08-29 17:10 ` [V4 8/8] KVM: selftests: Add simple sev vm testing Peter Gonda
  7 siblings, 0 replies; 24+ messages in thread
From: Peter Gonda @ 2022-08-29 17:10 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, Peter Gonda

Update the per-VM ucall_header allocation from vm_vaddr_alloc() to
vm_vaddr_alloc_shared(). This allows encrypted guests to use ucall pools
by placing their shared ucall structures in unencrypted (shared) memory.
No behavior change for non-encrypted guests.

Signed-off-by: Peter Gonda <pgonda@google.com>
---
 tools/testing/selftests/kvm/lib/ucall_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index cc79d497b6b4..8e164c47e2fb 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -22,7 +22,7 @@ void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
 	vm_vaddr_t vaddr;
 	int i;
 
-	vaddr = vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR);
+	vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR);
 	hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
 	memset(hdr, 0, sizeof(*hdr));
 
-- 
2.37.2.672.g94769d06f0-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [V4 8/8] KVM: selftests: Add simple sev vm testing
  2022-08-29 17:10 [V4 0/8] KVM: selftests: Add simple SEV test Peter Gonda
                   ` (6 preceding siblings ...)
  2022-08-29 17:10 ` [V4 7/8] KVM: selftests: Update ucall pool to allocate from shared memory Peter Gonda
@ 2022-08-29 17:10 ` Peter Gonda
  2022-10-06 18:31   ` Sean Christopherson
  7 siblings, 1 reply; 24+ messages in thread
From: Peter Gonda @ 2022-08-29 17:10 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, Peter Gonda

A very simple test of booting SEV guests that checks related CPUID
bits. This is a stripped-down version of "[PATCH v2 08/13] KVM:
selftests: add SEV boot tests" from Michael.

Suggested-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/sev_all_boot_test.c  | 127 ++++++++++++++++++
 3 files changed, 129 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index d625a3f83780..ca57969a0923 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -33,6 +33,7 @@
 /x86_64/pmu_event_filter_test
 /x86_64/set_boot_cpu_id
 /x86_64/set_sregs_test
+/x86_64/sev_all_boot_test
 /x86_64/sev_migrate_tests
 /x86_64/smm_test
 /x86_64/state_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 0a70e50f0498..10a7ea09ea98 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -123,6 +123,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_caps_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_all_boot_test
 TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
 TEST_GEN_PROGS_x86_64 += x86_64/amx_test
 TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
new file mode 100644
index 000000000000..be8ed3720767
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Basic SEV boot tests.
+ *
+ * Copyright (C) 2021 Advanced Micro Devices
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "linux/psp-sev.h"
+#include "sev.h"
+
+#define PAGE_STRIDE		32
+
+#define SHARED_PAGES		8192
+#define SHARED_VADDR_MIN	0x1000000
+
+#define PRIVATE_PAGES		2048
+#define PRIVATE_VADDR_MIN	(SHARED_VADDR_MIN + SHARED_PAGES * PAGE_SIZE)
+
+#define TOTAL_PAGES		(512 + SHARED_PAGES + PRIVATE_PAGES)
+
+#define NR_SYNCS 1
+
+static void guest_run_loop(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+	int i;
+
+	for (i = 0; i <= NR_SYNCS; ++i) {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_SYNC:
+			continue;
+		case UCALL_DONE:
+			return;
+		case UCALL_ABORT:
+			TEST_ASSERT(false, "%s at %s:%ld\n\tvalues: %#lx, %#lx",
+				    (const char *)uc.args[0], __FILE__,
+				    uc.args[1], uc.args[2], uc.args[3]);
+		default:
+			TEST_ASSERT(
+				false, "Unexpected exit: %s",
+				exit_reason_str(vcpu->run->exit_reason));
+		}
+	}
+}
+
+static void guest_sev_code(void)
+{
+	uint32_t eax, ebx, ecx, edx;
+	uint64_t sev_status;
+
+	GUEST_SYNC(1);
+
+	GUEST_ASSERT(this_cpu_has(X86_FEATURE_SEV));
+
+	sev_status = rdmsr(MSR_AMD64_SEV);
+	GUEST_ASSERT((sev_status & 0x1) == 1);
+
+	GUEST_DONE();
+}
+
+static struct sev_vm *setup_test_common(void *guest_code, uint64_t policy,
+					struct kvm_vcpu **vcpu)
+{
+	uint8_t measurement[512];
+	struct sev_vm *sev;
+	struct kvm_vm *vm;
+	int i;
+
+	sev = sev_vm_create(policy, TOTAL_PAGES);
+	TEST_ASSERT(sev, "sev_vm_create() failed to create VM\n");
+
+	vm = sev->vm;
+
+	/* Set up VCPU and initial guest kernel. */
+	*vcpu = vm_vcpu_add(vm, 0, guest_code);
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	/* Allocations/setup done. Encrypt initial guest payload. */
+	sev_vm_launch(sev);
+
+	/* Dump the initial measurement. A test to actually verify it would be nice. */
+	sev_vm_launch_measure(sev, measurement);
+	pr_info("guest measurement: ");
+	for (i = 0; i < 32; ++i)
+		pr_info("%02x", measurement[i]);
+	pr_info("\n");
+
+	sev_vm_launch_finish(sev);
+
+	return sev;
+}
+
+static void test_sev(void *guest_code, uint64_t policy)
+{
+	struct sev_vm *sev;
+	struct kvm_vcpu *vcpu;
+
+	sev = setup_test_common(guest_code, policy, &vcpu);
+
+	/* Guest is ready to run. Do the tests. */
+	guest_run_loop(vcpu);
+
+	pr_info("guest ran successfully\n");
+
+	sev_vm_free(sev);
+}
+
+int main(int argc, char *argv[])
+{
+	/* SEV tests */
+	test_sev(guest_sev_code, SEV_POLICY_NO_DBG);
+	test_sev(guest_sev_code, 0);
+
+	return 0;
+}
-- 
2.37.2.672.g94769d06f0-goog


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [V4 4/8] KVM: selftests: handle encryption bits in page tables
  2022-08-29 17:10 ` [V4 4/8] KVM: selftests: handle encryption bits in page tables Peter Gonda
@ 2022-10-06 17:34   ` Sean Christopherson
  0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2022-10-06 17:34 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Mon, Aug 29, 2022, Peter Gonda wrote:
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 53b9a509c1d5..de13be62d52d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1388,6 +1388,58 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
>  	}
>  }
>  
> +/*
> + * Mask off any special bits from raw GPA
> + *
> + * Input Args:
> + *   vm - Virtual Machine
> + *   gpa_raw - Raw VM physical address
> + *
> + * Output Args: None
> + *
> + * Return:
> + *   GPA with special bits (e.g. shared/encrypted) masked off.
> + */
> +vm_paddr_t addr_raw2gpa(struct kvm_vm *vm, vm_paddr_t gpa_raw)
> +{
> +	if (!vm->memcrypt.has_enc_bit)
> +		return gpa_raw;
> +
> +	return gpa_raw & ~(1ULL << vm->memcrypt.enc_bit);

1. The notion of stealing GPA bits to tag the page should not be tied to memory
   encryption.

2. Assuming that the shared vs. private bit is active-high is wrong, e.g. TDX
   has active-low behavior.

3. "raw" is not super intuitive.

4. addr_gpa2raw() should not exist.  It assumes the "raw" address always wants
   the encryption bit set.

5. enc_by_default is an SEV-centric flag that should not exist outside of x86.

I think the easiest and most familiar solution for #1 will be to borrow the
kernel's tag/untag terminology for handling virtual addresses that can be tagged
for ASAN, e.g. ARM's top-byte-ignore and x86's linear address masking (LAM).

So instead of addr_raw2gpa(), just do:

  static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_vaddr_t gpa)
  {
	return gpa & ~vm->gpa_tag_mask;
  }

That way zero-allocating the VM will Just Work.

> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 2e6e61bbe81b..b2df259ce706 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -118,7 +118,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
>  
>  	/* If needed, create page map l4 table. */
>  	if (!vm->pgd_created) {
> -		vm->pgd = vm_alloc_page_table(vm);
> +		vm->pgd = addr_gpa2raw(vm, vm_alloc_page_table(vm));

Rather than add "struct vm_memcrypt", I think it makes sense to introduce
"struct kvm_vm_arch" and then the x86 implementation can add pte_me_mask, similar
to what KVM does for SPTEs.  Then this code becomes

		vm->pgd = vm_alloc_page_table(vm) | vm->arch.pte_me_mask;

That will Just Work for TDX, because its pte_me_mask will be '0', even though it
effectively has an encryption bit (active-low).

>  		vm->pgd_created = true;
>  	}
>  }
> @@ -140,13 +140,15 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
>  				       int target_level)
>  {
>  	uint64_t *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level);
> +	uint64_t paddr_raw = addr_gpa2raw(vm, paddr);
>  
>  	if (!(*pte & PTE_PRESENT_MASK)) {
>  		*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK;
>  		if (current_level == target_level)
> -			*pte |= PTE_LARGE_MASK | (paddr & PHYSICAL_PAGE_MASK);
> +			*pte |= PTE_LARGE_MASK | (paddr_raw & PHYSICAL_PAGE_MASK);
>  		else
> -			*pte |= vm_alloc_page_table(vm) & PHYSICAL_PAGE_MASK;
> +			*pte |= addr_gpa2raw(vm, vm_alloc_page_table(vm)) & PHYSICAL_PAGE_MASK;

Prefer to write this as:

			*pte |= vm_alloc_page_table(vm) & PHYSICAL_PAGE_MASK;
			*pte |= vm->arch.pte_me_mask;

so that selftests don't assume the encryption bit is stolen from the GPA.

> +
>  	} else {
>  		/*
>  		 * Entry already present.  Assert that the caller doesn't want
> @@ -184,6 +186,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
>  		    "Physical address beyond maximum supported,\n"
>  		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
>  		    paddr, vm->max_gfn, vm->page_size);
> +	TEST_ASSERT(addr_raw2gpa(vm, paddr) == paddr,
> +		    "Unexpected bits in paddr: %lx", paddr);
>  
>  	/*
>  	 * Allocate upper level page tables, if not already present.  Return
> @@ -206,7 +210,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
>  	pte = virt_get_pte(vm, PTE_GET_PFN(*pde), vaddr, PG_LEVEL_4K);
>  	TEST_ASSERT(!(*pte & PTE_PRESENT_MASK),
>  		    "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
> -	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
> +	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK |
> +	       (addr_gpa2raw(vm, paddr) & PHYSICAL_PAGE_MASK);

And with the above suggestions, this can become something like:

	if (vm_is_gpa_encrypted(vm, paddr))
		*pte |= vm->arch.c_bit;
	else
		*pte |= vm->arch.s_bit;

where SEV sets c_bit and TDX sets s_bit.

>  void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
> @@ -515,7 +520,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
>  	if (!(pte[index[0]] & PTE_PRESENT_MASK))
>  		goto unmapped_gva;
>  
> -	return (PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & ~PAGE_MASK);
> +	return addr_raw2gpa(vm, PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & ~PAGE_MASK);

Aha!  A use for my rework[*] of this mess!  I don't think you need to take a
dependency on that work, at a glance the conflicts should be minor, i.e. doesn't
matter that much which lands first.

[*] https://lore.kernel.org/all/20221006004512.666529-1-seanjc@google.com

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [V4 1/8] KVM: selftests: move vm_phy_pages_alloc() earlier in file
  2022-08-29 17:10 ` [V4 1/8] KVM: selftests: move vm_phy_pages_alloc() earlier in file Peter Gonda
@ 2022-10-06 17:35   ` Sean Christopherson
  0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2022-10-06 17:35 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Mon, Aug 29, 2022, Peter Gonda wrote:
> From: Michael Roth <michael.roth@amd.com>
> 
> Subsequent patches will break some of this code out into file-local
> helper functions, which will be used by functions like vm_vaddr_alloc(),
> which currently are defined earlier in the file, so a forward
> declaration would be needed.
> 
> Instead, move it earlier in the file, just above vm_vaddr_alloc() and
> friends, which are the main users.

Meh, the wrappers are one-liners, why not keep this where it is, keep it exposed,
and then make the wrappers "static inline".

* Re: [V4 3/8] KVM: selftests: add hooks for managing encrypted guest memory
  2022-08-29 17:10 ` [V4 3/8] KVM: selftests: add hooks for managing encrypted guest memory Peter Gonda
@ 2022-10-06 17:48   ` Sean Christopherson
  2022-10-11 17:38     ` Peter Gonda
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-10-06 17:48 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Mon, Aug 29, 2022, Peter Gonda wrote:
> +static vm_paddr_t
> +_vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, vm_paddr_t paddr_min,

Do not wrap before the function name.  Linus has a nice explanation/rant on this[*].
Note to self, add a Vim macro for this...

[*] https://lore.kernel.org/all/CAHk-=wjoLAYG446ZNHfg=GhjSY6nFmuB_wA8fYd5iLBNXjo9Bw@mail.gmail.com

> +		    uint32_t memslot, bool encrypt)
>  {
>  	struct userspace_mem_region *region;
>  	sparsebit_idx_t pg, base;
> @@ -1152,12 +1156,22 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
>  		abort();
>  	}
>  
> -	for (pg = base; pg < base + num; ++pg)
> +	for (pg = base; pg < base + num; ++pg) {
>  		sparsebit_clear(region->unused_phy_pages, pg);
> +		if (encrypt)

prefer s/encrypt/private, and s/encrypted_phy_pages/private_phy_pages.  pKVM
doesn't rely on encryption, and it's not impossible that x86 will someday gain
similar functionality.  And "encrypted" is also technically wrong for SEV and TDX,
as shared memory can also be encrypted with a common key.

> +			sparsebit_set(region->encrypted_phy_pages, pg);
> +	}
>  
>  	return base * vm->page_size;
>  }
>  
> +vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> +			      vm_paddr_t paddr_min, uint32_t memslot)
> +{
> +	return _vm_phy_pages_alloc(vm, num, paddr_min, memslot,
> +				   vm->memcrypt.enc_by_default);

enc_by_default yields a bizarre API.  The behavior depends on whether or not the
VM is protected, and whether or not the VM wants to protect memory by default.

For simplicity, IMO vm_phy_pages_alloc() should allocate memory as private if the
VM supports protected memory, i.e. just have vm->protected or whatever and use
that here.

> +}
> +
>  vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
>  			     uint32_t memslot)
>  {
> @@ -1741,6 +1755,10 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
>  			region->host_mem);
>  		fprintf(stream, "%*sunused_phy_pages: ", indent + 2, "");
>  		sparsebit_dump(stream, region->unused_phy_pages, 0);
> +		if (vm->memcrypt.enabled) {

vm->protected

> +			fprintf(stream, "%*sencrypted_phy_pages: ", indent + 2, "");
> +			sparsebit_dump(stream, region->encrypted_phy_pages, 0);
> +		}
>  	}
>  	fprintf(stream, "%*sMapped Virtual Pages:\n", indent, "");
>  	sparsebit_dump(stream, vm->vpages_mapped, indent + 2);
> @@ -1989,3 +2007,31 @@ void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
>  		break;
>  	}
>  }
> +
> +void vm_set_memory_encryption(struct kvm_vm *vm, bool enc_by_default, bool has_enc_bit,
> +			      uint8_t enc_bit)
> +{
> +	vm->memcrypt.enabled = true;
> +	vm->memcrypt.enc_by_default = enc_by_default;
> +	vm->memcrypt.has_enc_bit = has_enc_bit;
> +	vm->memcrypt.enc_bit = enc_bit;
> +}
> +
> +const struct sparsebit *
> +vm_get_encrypted_phy_pages(struct kvm_vm *vm, int slot, vm_paddr_t *gpa_start,
> +			   uint64_t *size)

Bad wrap.

> +{
> +	struct userspace_mem_region *region;
> +
> +	if (!vm->memcrypt.enabled)

This seems rather silly, why not TEST_ASSERT()?

> +		return NULL;
> +
> +	region = memslot2region(vm, slot);
> +	if (!region)

Same here, TEST_ASSERT() seems more appropriate.

Actually, I can't envision a use outside of SEV.  AFAIK, no other architecture
does the whole "launch update" thing.  I.e. just open code this in sev_encrypt().
The more generic API that will be useful for other VM types will be to query if a
specific GPA is private vs. shared.

> +		return NULL;
> +
> +	*size = region->region.memory_size;
> +	*gpa_start = region->region.guest_phys_addr;
> +
> +	return region->encrypted_phy_pages;
> +}
> -- 
> 2.37.2.672.g94769d06f0-goog
> 

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-08-29 17:10 ` [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
@ 2022-10-06 18:25   ` Sean Christopherson
  2022-10-17 16:32     ` Peter Gonda
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-10-06 18:25 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Mon, Aug 29, 2022, Peter Gonda wrote:
> Add interfaces to allow tests to create/manage SEV guests. The
> additional state associated with these guests is encapsulated in a new
> struct sev_vm, which is a light wrapper around struct kvm_vm. These
> VMs will use vm_set_memory_encryption() and vm_get_encrypted_phy_pages()
> under the covers to configure and sync up with the core kvm_util
> library on what should/shouldn't be treated as encrypted memory.
> 
> Originally-by: Michael Roth <michael.roth@amd.com>
> Signed-off-by: Peter Gonda <pgonda@google.com>
> ---
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../selftests/kvm/include/kvm_util_base.h     |   3 +
>  .../selftests/kvm/include/x86_64/sev.h        |  47 ++++
>  tools/testing/selftests/kvm/lib/x86_64/sev.c  | 232 ++++++++++++++++++
>  4 files changed, 283 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
>  create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 23649c5d42fc..0a70e50f0498 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -56,6 +56,7 @@ LIBKVM_x86_64 += lib/x86_64/processor.c
>  LIBKVM_x86_64 += lib/x86_64/svm.c
>  LIBKVM_x86_64 += lib/x86_64/ucall.c
>  LIBKVM_x86_64 += lib/x86_64/vmx.c
> +LIBKVM_x86_64 += lib/x86_64/sev.c
>  
>  LIBKVM_aarch64 += lib/aarch64/gic.c
>  LIBKVM_aarch64 += lib/aarch64/gic_v3.c
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index 489e8c833e5f..0927e262623d 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -69,6 +69,7 @@ struct userspace_mem_regions {
>  /* Memory encryption policy/configuration. */
>  struct vm_memcrypt {
>  	bool enabled;
> +	bool encrypted;
>  	int8_t enc_by_default;
>  	bool has_enc_bit;
>  	int8_t enc_bit;
> @@ -831,6 +832,8 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
>  
>  static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
>  {
> +	TEST_ASSERT(!vm->memcrypt.encrypted,

vm->protected

> +		    "Encrypted guests have their page tables encrypted so gva2gpa conversions are not possible.");

Unnecessarily verbose, e.g.

		    "Protected VMs have private, inaccessible page tables");

> +#define CPUID_MEM_ENC_LEAF 0x8000001f
> +#define CPUID_EBX_CBIT_MASK 0x3f
> +
> +/* Common SEV helpers/accessors. */

Please drop this comment and the "Local helpers" and "SEV VM implementation" comments
below.  There's 0% chance these comments will stay fresh as code is added and moved
around.  They also add no value IMO, e.g. "static" makes it quite obvious it's a
local function, and sev_* vs. sev_es_*. vs. sev_snp_* namespacing takes care of the
rest.

> +void sev_ioctl(int sev_fd, int cmd, void *data)
> +{
> +	int ret;
> +	struct sev_issue_cmd arg;
> +
> +	arg.cmd = cmd;
> +	arg.data = (unsigned long)data;
> +	ret = ioctl(sev_fd, SEV_ISSUE_CMD, &arg);
> +	TEST_ASSERT(ret == 0,
> +		    "SEV ioctl %d failed, error: %d, fw_error: %d",
> +		    cmd, ret, arg.error);
> +}
> +
> +void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data)
> +{
> +	struct kvm_sev_cmd arg = {0};
> +	int ret;
> +
> +	arg.id = cmd;
> +	arg.sev_fd = sev->fd;
> +	arg.data = (__u64)data;
> +
> +	ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_OP, &arg);
> +	TEST_ASSERT(ret == 0,
> +		    "SEV KVM ioctl %d failed, rc: %i errno: %i (%s), fw_error: %d",
> +		    cmd, ret, errno, strerror(errno), arg.error);
> +}
> +
> +/* Local helpers. */
> +
> +static void sev_register_user_region(struct sev_vm *sev, void *hva, uint64_t size)
> +{
> +	struct kvm_enc_region range = {0};
> +	int ret;
> +
> +	pr_debug("%s: hva: %p, size: %lu\n", __func__, hva, size);
> +
> +	range.addr = (__u64)hva;
> +	range.size = size;
> +
> +	ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
> +	TEST_ASSERT(ret == 0, "failed to register user range, errno: %i\n", errno);
> +}
> +
> +static void sev_encrypt_phy_range(struct sev_vm *sev, vm_paddr_t gpa, uint64_t size)
> +{
> +	struct kvm_sev_launch_update_data ksev_update_data = {0};
> +
> +	pr_debug("%s: addr: 0x%lx, size: %lu\n", __func__, gpa, size);
> +
> +	ksev_update_data.uaddr = (__u64)addr_gpa2hva(sev->vm, gpa);
> +	ksev_update_data.len = size;
> +
> +	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_UPDATE_DATA, &ksev_update_data);
> +}
> +
> +static void sev_encrypt(struct sev_vm *sev)
> +{
> +	const struct sparsebit *enc_phy_pages;
> +	struct kvm_vm *vm = sev->vm;
> +	sparsebit_idx_t pg = 0;
> +	vm_paddr_t gpa_start;
> +	uint64_t memory_size;
> +	int ctr;
> +	struct userspace_mem_region *region;
> +
> +	hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
> +		enc_phy_pages = vm_get_encrypted_phy_pages(

Please don't wrap after the opening parenthesis unless it's really, really necessary.
More for future reference since I think vm_get_encrypted_phy_pages() should be open
coded here.  E.g. in this case, the "enc_phy_" prefix doesn't add much value, and
dropping that makes the code easier to read overall.

		pages = vm_get_encrypted_phy_pages(sev->vm, region->region.slot,
						   &gpa_start, &memory_size);

> +			sev->vm, region->region.slot, &gpa_start, &memory_size);
> +		TEST_ASSERT(enc_phy_pages,
> +			    "Unable to retrieve encrypted pages bitmap");
> +		while (pg < (memory_size / vm->page_size)) {
> +			sparsebit_idx_t pg_cnt;

s/pg_cnt/nr_pages

> +
> +			if (sparsebit_is_clear(enc_phy_pages, pg)) {
> +				pg = sparsebit_next_set(enc_phy_pages, pg);
> +				if (!pg)
> +					break;
> +			}
> +
> +			pg_cnt = sparsebit_next_clear(enc_phy_pages, pg) - pg;
> +			if (pg_cnt <= 0)
> +				pg_cnt = 1;
> +
> +			sev_encrypt_phy_range(sev,
> +					      gpa_start + pg * vm->page_size,
> +					      pg_cnt * vm->page_size);
> +			pg += pg_cnt;
> +		}
> +	}
> +
> +	sev->vm->memcrypt.encrypted = true;
> +}
> +
> +/* SEV VM implementation. */
> +
> +static struct sev_vm *sev_vm_alloc(struct kvm_vm *vm)
> +{
> +	struct sev_user_data_status sev_status;
> +	uint32_t eax, ebx, ecx, edx;
> +	struct sev_vm *sev;
> +	int sev_fd;
> +
> +	sev_fd = open(SEV_DEV_PATH, O_RDWR);
> +	if (sev_fd < 0) {
> +		pr_info("Failed to open SEV device, path: %s, error: %d, skipping test.\n",
> +			SEV_DEV_PATH, sev_fd);
> +		return NULL;

Printing "skipping test" is wrong as there's no guarantee the caller is going to
skip the test.  E.g. the sole user in this series asserts, i.e. fails the test.

I also think that waiting until VM allocation to perform these sanity checks is
flawed.  Rather do these checks every time, add helpers to query SEV and SEV-ES
support, and then use TEST_REQUIRE() to actually skip tests that require support,
e.g.

	TEST_REQUIRE(kvm_is_sev_supported());

or

	TEST_REQUIRE(kvm_is_sev_es_supported());

Then this helper can simply assert that opening SEV_DEV_PATH succeeds.

> +	}
> +
> +	sev_ioctl(sev_fd, SEV_PLATFORM_STATUS, &sev_status);
> +
> +	if (!(sev_status.api_major > SEV_FW_REQ_VER_MAJOR ||
> +	      (sev_status.api_major == SEV_FW_REQ_VER_MAJOR &&
> +	       sev_status.api_minor >= SEV_FW_REQ_VER_MINOR))) {
> +		pr_info("SEV FW version too old. Have API %d.%d (build: %d), need %d.%d, skipping test.\n",
> +			sev_status.api_major, sev_status.api_minor, sev_status.build,
> +			SEV_FW_REQ_VER_MAJOR, SEV_FW_REQ_VER_MINOR);
> +		return NULL;
> +	}
> +
> +	sev = calloc(1, sizeof(*sev));

TEST_ASSERT(sev, ...)

> +	sev->fd = sev_fd;
> +	sev->vm = vm;
> +
> +	/* Get encryption bit via CPUID. */
> +	cpuid(CPUID_MEM_ENC_LEAF, &eax, &ebx, &ecx, &edx);
> +	sev->enc_bit = ebx & CPUID_EBX_CBIT_MASK;

Oh hey, another series of mine[*] that you can leverage :-)

[*] https://lore.kernel.org/all/20221006005125.680782-1-seanjc@google.com

> +
> +	return sev;
> +}
> +
> +void sev_vm_free(struct sev_vm *sev)
> +{
> +	kvm_vm_free(sev->vm);
> +	close(sev->fd);
> +	free(sev);
> +}
> +
> +struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)

The migration test already defines sev_vm_create().  That conflict needs to be
resolved.

> +{
> +	struct sev_vm *sev;
> +	struct kvm_vm *vm;
> +
> +	/* Need to handle memslots after init, and after setting memcrypt. */
> +	vm = vm_create_barebones();

Do not use vm_create_barebones().  That API is only to be used for tests that do
not intend to run vCPUs.



> +	sev = sev_vm_alloc(vm);
> +	if (!sev)
> +		return NULL;
> +	sev->sev_policy = policy;
> +
> +	kvm_sev_ioctl(sev, KVM_SEV_INIT, NULL);
> +
> +	vm->vpages_mapped = sparsebit_alloc();

This is unnecessary and leaks memory, vm->vpages_mapped is allocated by
____vm_create().

> +	vm_set_memory_encryption(vm, true, true, sev->enc_bit);
> +	pr_info("SEV cbit: %d\n", sev->enc_bit);
> +	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages, 0);
> +	sev_register_user_region(sev, addr_gpa2hva(vm, 0),
> +				 npages * vm->page_size);

Burying sev_register_user_region() in here is not going to be maintainble.  I
think the best away to handle this is to add an arch hook in vm_userspace_mem_region_add()
and automatically register regions when they're created.

And with that, I believe sev_vm_create() can go away entirely and the SEV encryption
stuff can be handled via a new vm_guest_mode.  ____vm_create() already has a gross
__x86_64__ hook that we can tweak, e.g.

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 54b8d8825f5d..2d6cbca2c01a 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -238,9 +238,10 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
        case VM_MODE_P36V47_16K:
                vm->pgtable_levels = 3;
                break;
+       case VM_MODE_PXXV48_4K_SEV:
        case VM_MODE_PXXV48_4K:
 #ifdef __x86_64__
-               kvm_get_cpu_address_width(&vm->pa_bits, &vm->va_bits);
+               kvm_init_vm_address_properties(vm);
                /*
                 * Ignore KVM support for 5-level paging (vm->va_bits == 57),
                 * it doesn't take effect unless a CR4.LA57 is set, which it

Then kvm_init_vm_address_properties() can pivot on vm->mode to deal with SEV
specific stuff.

> +
> +	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n",
> +		sev->sev_policy, npages * vm->page_size / 1024);
> +
> +	return sev;
> +}
> +
> +void sev_vm_launch(struct sev_vm *sev)
> +{
> +	struct kvm_sev_launch_start ksev_launch_start = {0};
> +	struct kvm_sev_guest_status ksev_status;
> +
> +	ksev_launch_start.policy = sev->sev_policy;
> +	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_START, &ksev_launch_start);
> +	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
> +	TEST_ASSERT(ksev_status.policy == sev->sev_policy, "Incorrect guest policy.");
> +	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE,
> +		    "Unexpected guest state: %d", ksev_status.state);
> +
> +	ucall_init(sev->vm, 0);
> +
> +	sev_encrypt(sev);
> +}
> +
> +void sev_vm_launch_measure(struct sev_vm *sev, uint8_t *measurement)
> +{
> +	struct kvm_sev_launch_measure ksev_launch_measure;
> +	struct kvm_sev_guest_status ksev_guest_status;
> +
> +	ksev_launch_measure.len = 256;
> +	ksev_launch_measure.uaddr = (__u64)measurement;
> +	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_MEASURE, &ksev_launch_measure);
> +
> +	/* Measurement causes a state transition, check that. */
> +	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_guest_status);
> +	TEST_ASSERT(ksev_guest_status.state == SEV_GSTATE_LSECRET,
> +		    "Unexpected guest state: %d", ksev_guest_status.state);
> +}
> +
> +void sev_vm_launch_finish(struct sev_vm *sev)
> +{
> +	struct kvm_sev_guest_status ksev_status;
> +
> +	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
> +	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE ||
> +		    ksev_status.state == SEV_GSTATE_LSECRET,
> +		    "Unexpected guest state: %d", ksev_status.state);
> +
> +	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_FINISH, NULL);
> +
> +	kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
> +	TEST_ASSERT(ksev_status.state == SEV_GSTATE_RUNNING,
> +		    "Unexpected guest state: %d", ksev_status.state);
> +}

Rather than force each test to invoke these via something like setup_test_common(),
add the same kvm_arch_vm_post_create() hook that Vishal is likely going to add,
and then automatically do all of the launch+measure+finish stuff for non-barebones
VMs.  That will let SEV/SEV-ES tests use __vm_create_with_vcpus() and
__vm_create().

And it'd be a little gross, but I think it'd be worthwhile to add another layer
to the "one_vcpu" helpers to make things even more convenient, e.g.

struct kvm_vm *____vm_create_with_one_vcpu(enum vm_guest_mode mode,
					   struct kvm_vcpu **vcpu,
					   uint64_t extra_mem_pages,
					   void *guest_code)
{
	struct kvm_vcpu *vcpus[1];
	struct kvm_vm *vm;

	vm = __vm_create_with_vcpus(mode, 1, extra_mem_pages, guest_code, vcpus);

	*vcpu = vcpus[0];
	return vm;
}

static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
						       uint64_t extra_mem_pages,
						       void *guest_code)
{
	return ____vm_create_with_one_vcpu(VM_MODE_DEFAULT, vcpu,
					   extra_mem_pages, guest_code);
}



diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index dafe4471a6c7..593dfadb662e 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -298,9 +298,8 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
 
        kvm_vm_elf_load(vm, program_invocation_name);
 
-#ifdef __x86_64__
-       vm_create_irqchip(vm);
-#endif
+       kvm_arch_vm_post_create(vm);
+
        return vm;
 }
 

[*] https://lore.kernel.org/all/YzsC4ibDqGh5qaP9@google.com

* Re: [V4 8/8] KVM: selftests: Add simple sev vm testing
  2022-08-29 17:10 ` [V4 8/8] KVM: selftests: Add simple sev vm testing Peter Gonda
@ 2022-10-06 18:31   ` Sean Christopherson
  0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2022-10-06 18:31 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Mon, Aug 29, 2022, Peter Gonda wrote:
> +	for (i = 0; i <= NR_SYNCS; ++i) {
> +		vcpu_run(vcpu);
> +		switch (get_ucall(vcpu, &uc)) {
> +		case UCALL_SYNC:
> +			continue;
> +		case UCALL_DONE:
> +			return;
> +		case UCALL_ABORT:
> +			TEST_ASSERT(false, "%s at %s:%ld\n\tvalues: %#lx, %#lx",
> +				    (const char *)uc.args[0], __FILE__,
> +				    uc.args[1], uc.args[2], uc.args[3]);

REPORT_GUEST_ASSERT

> +		default:
> +			TEST_ASSERT(
> +				false, "Unexpected exit: %s",
> +				exit_reason_str(vcpu->run->exit_reason));

TEST_FAIL

> +		}
> +	}
> +}
> +
> +static void guest_sev_code(void)
> +{
> +	uint32_t eax, ebx, ecx, edx;
> +	uint64_t sev_status;
> +
> +	GUEST_SYNC(1);
> +
> +	GUEST_ASSERT(this_cpu_has(X86_FEATURE_SEV));
> +
> +	sev_status = rdmsr(MSR_AMD64_SEV);
> +	GUEST_ASSERT((sev_status & 0x1) == 1);

Add a #define for the magic 0x1.  And the "== 1" is unnecessary.  Hmm, and the
querying of SEV/SEV-ES/etc status in MSR_AMD64_SEV should be done in helpers.

> +
> +	GUEST_DONE();
> +}
> +
> +static struct sev_vm *setup_test_common(void *guest_code, uint64_t policy,
> +					struct kvm_vcpu **vcpu)
> +{
> +	uint8_t measurement[512];
> +	struct sev_vm *sev;
> +	struct kvm_vm *vm;
> +	int i;
> +
> +	sev = sev_vm_create(policy, TOTAL_PAGES);
> +	TEST_ASSERT(sev, "sev_vm_create() failed to create VM\n");

See earlier comments on using TEST_REQUIRE.

> +
> +	vm = sev->vm;
> +
> +	/* Set up VCPU and initial guest kernel. */
> +	*vcpu = vm_vcpu_add(vm, 0, guest_code);
> +	kvm_vm_elf_load(vm, program_invocation_name);
> +
> +	/* Allocations/setup done. Encrypt initial guest payload. */
> +	sev_vm_launch(sev);
> +
> +	/* Dump the initial measurement. A test to actually verify it would be nice. */
> +	sev_vm_launch_measure(sev, measurement);
> +	pr_info("guest measurement: ");
> +	for (i = 0; i < 32; ++i)
> +		pr_info("%02x", measurement[i]);
> +	pr_info("\n");
> +
> +	sev_vm_launch_finish(sev);

I believe this entire function gets replaced with

	vm = ____vm_create_with_one_vcpu(VM_MODE_PXXV48_4K_SEV, &vcpu, 0,
					 guest_code);

> +
> +	return sev;
> +}
> +
> +static void test_sev(void *guest_code, uint64_t policy)
> +{
> +	struct sev_vm *sev;
> +	struct kvm_vcpu *vcpu;
> +
> +	sev = setup_test_common(guest_code, policy, &vcpu);
> +
> +	/* Guest is ready to run. Do the tests. */

Comment doesn't add much value.

> +	guest_run_loop(vcpu);
> +
> +	pr_info("guest ran successfully\n");

Meh, the test passing is a good indication the test ran successfully.

> +
> +	sev_vm_free(sev);
> +}
> +
> +int main(int argc, char *argv[])
> +{
> +	/* SEV tests */

Again, stating the obvious.

> +	test_sev(guest_sev_code, SEV_POLICY_NO_DBG);
> +	test_sev(guest_sev_code, 0);
> +
> +	return 0;
> +}
> -- 
> 2.37.2.672.g94769d06f0-goog
> 

* Re: [V4 3/8] KVM: selftests: add hooks for managing encrypted guest memory
  2022-10-06 17:48   ` Sean Christopherson
@ 2022-10-11 17:38     ` Peter Gonda
  0 siblings, 0 replies; 24+ messages in thread
From: Peter Gonda @ 2022-10-11 17:38 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Thu, Oct 6, 2022 at 11:48 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Aug 29, 2022, Peter Gonda wrote:
> > +static vm_paddr_t
> > +_vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, vm_paddr_t paddr_min,
>
> Do not wrap before the function name.  Linus has a nice explanation/rant on this[*].
> Note to self, add a Vim macro for this...
>
> [*] https://lore.kernel.org/all/CAHk-=wjoLAYG446ZNHfg=GhjSY6nFmuB_wA8fYd5iLBNXjo9Bw@mail.gmail.com
>

Fixed. Thanks. I'll work on a fix to my VIM setup.

> > +                 uint32_t memslot, bool encrypt)
> >  {
> >       struct userspace_mem_region *region;
> >       sparsebit_idx_t pg, base;
> > @@ -1152,12 +1156,22 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> >               abort();
> >       }
> >
> > -     for (pg = base; pg < base + num; ++pg)
> > +     for (pg = base; pg < base + num; ++pg) {
> >               sparsebit_clear(region->unused_phy_pages, pg);
> > +             if (encrypt)
>
> prefer s/encrypt/private, and s/encrypted_phy_pages/private_phy_pages.  pKVM
> doesn't rely on encryption, and it's not impossible that x86 will someday gain
> similar functionality.  And "encrypted" is also technically wrong for SEV and TDX,
> as shared memory can also be encrypted with a common key.

Makes sense. Private or protected sound better.

>
> > +                     sparsebit_set(region->encrypted_phy_pages, pg);
> > +     }
> >
> >       return base * vm->page_size;
> >  }
> >
> > +vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> > +                           vm_paddr_t paddr_min, uint32_t memslot)
> > +{
> > +     return _vm_phy_pages_alloc(vm, num, paddr_min, memslot,
> > +                                vm->memcrypt.enc_by_default);
>
> enc_by_default yields a bizarre API.  The behavior depends on whether or not the
> VM is protected, and whether or not the VM wants to protect memory by default.
>
> For simplicity, IMO vm_phy_pages_alloc() should allocate memory as private if the
> VM supports protected memory, i.e. just have vm->protected or whatever and use
> that here.

Removed "enc_by_default" concept.

>
> > +}
> > +
> >  vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
> >                            uint32_t memslot)
> >  {
> > @@ -1741,6 +1755,10 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
> >                       region->host_mem);
> >               fprintf(stream, "%*sunused_phy_pages: ", indent + 2, "");
> >               sparsebit_dump(stream, region->unused_phy_pages, 0);
> > +             if (vm->memcrypt.enabled) {
>
> vm->protected

renamed memcrypt -> protected.

>
> > +                     fprintf(stream, "%*sencrypted_phy_pages: ", indent + 2, "");
> > +                     sparsebit_dump(stream, region->encrypted_phy_pages, 0);
> > +             }
> >       }
> >       fprintf(stream, "%*sMapped Virtual Pages:\n", indent, "");
> >       sparsebit_dump(stream, vm->vpages_mapped, indent + 2);
> > @@ -1989,3 +2007,31 @@ void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
> >               break;
> >       }
> >  }
> > +
> > +void vm_set_memory_encryption(struct kvm_vm *vm, bool enc_by_default, bool has_enc_bit,
> > +                           uint8_t enc_bit)
> > +{
> > +     vm->memcrypt.enabled = true;
> > +     vm->memcrypt.enc_by_default = enc_by_default;
> > +     vm->memcrypt.has_enc_bit = has_enc_bit;
> > +     vm->memcrypt.enc_bit = enc_bit;
> > +}
> > +
> > +const struct sparsebit *
> > +vm_get_encrypted_phy_pages(struct kvm_vm *vm, int slot, vm_paddr_t *gpa_start,
> > +                        uint64_t *size)
>
> Bad wrap.

Fixed.

>
> > +{
> > +     struct userspace_mem_region *region;
> > +
> > +     if (!vm->memcrypt.enabled)
>
> This seems rather silly, why not TEST_ASSERT()?
>
> > +             return NULL;
> > +
> > +     region = memslot2region(vm, slot);
> > +     if (!region)
>
> Same here, TEST_ASSERT() seems more appropriate.
>
> Actually, I can't envision a use outside of SEV.  AFAIK, no other architecture
> does the whole "launch update" thing.  I.e. just open code this in sev_encrypt().
> The more generic API that will be useful for other VM types will be to query if a
> specific GPA is private vs. shared.

Good point. I'll move this code into that sev_encrypt() flow and
remove this function completely.

>
> > +             return NULL;
> > +
> > +     *size = region->region.memory_size;
> > +     *gpa_start = region->region.guest_phys_addr;
> > +
> > +     return region->encrypted_phy_pages;
> > +}
> > --
> > 2.37.2.672.g94769d06f0-goog
> >

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-06 18:25   ` Sean Christopherson
@ 2022-10-17 16:32     ` Peter Gonda
  2022-10-17 18:04       ` Sean Christopherson
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Gonda @ 2022-10-17 16:32 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Thu, Oct 6, 2022 at 12:25 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Aug 29, 2022, Peter Gonda wrote:
> > Add interfaces to allow tests to create/manage SEV guests. The
> > additional state associated with these guests is encapsulated in a new
> > struct sev_vm, which is a light wrapper around struct kvm_vm. These
> > VMs will use vm_set_memory_encryption() and vm_get_encrypted_phy_pages()
> > under the covers to configure and sync up with the core kvm_util
> > library on what should/shouldn't be treated as encrypted memory.
> >
> > Originally-by: Michael Roth <michael.roth@amd.com>
> > Signed-off-by: Peter Gonda <pgonda@google.com>
> > ---
> >  tools/testing/selftests/kvm/Makefile          |   1 +
> >  .../selftests/kvm/include/kvm_util_base.h     |   3 +
> >  .../selftests/kvm/include/x86_64/sev.h        |  47 ++++
> >  tools/testing/selftests/kvm/lib/x86_64/sev.c  | 232 ++++++++++++++++++
> >  4 files changed, 283 insertions(+)
> >  create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
> >  create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c
> >
> > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > index 23649c5d42fc..0a70e50f0498 100644
> > --- a/tools/testing/selftests/kvm/Makefile
> > +++ b/tools/testing/selftests/kvm/Makefile
> > @@ -56,6 +56,7 @@ LIBKVM_x86_64 += lib/x86_64/processor.c
> >  LIBKVM_x86_64 += lib/x86_64/svm.c
> >  LIBKVM_x86_64 += lib/x86_64/ucall.c
> >  LIBKVM_x86_64 += lib/x86_64/vmx.c
> > +LIBKVM_x86_64 += lib/x86_64/sev.c
> >
> >  LIBKVM_aarch64 += lib/aarch64/gic.c
> >  LIBKVM_aarch64 += lib/aarch64/gic_v3.c
> > diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > index 489e8c833e5f..0927e262623d 100644
> > --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> > +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> > @@ -69,6 +69,7 @@ struct userspace_mem_regions {
> >  /* Memory encryption policy/configuration. */
> >  struct vm_memcrypt {
> >       bool enabled;
> > +     bool encrypted;
> >       int8_t enc_by_default;
> >       bool has_enc_bit;
> >       int8_t enc_bit;
> > @@ -831,6 +832,8 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
> >
> >  static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> >  {
> > +     TEST_ASSERT(!vm->memcrypt.encrypted,
>
> vm->protected
>
> > +                 "Encrypted guests have their page tables encrypted so gva2gpa conversions are not possible.");
>
> Unnecessarily verbose, e.g.
>
>                     "Protected VMs have private, inaccessible page tables");
>
> > +#define CPUID_MEM_ENC_LEAF 0x8000001f
> > +#define CPUID_EBX_CBIT_MASK 0x3f
> > +
> > +/* Common SEV helpers/accessors. */
>
> Please drop this comment and the "Local helpers" and "SEV VM implementation" comments
> below.  There's 0% chance these comments will stay fresh as code is added and moved
> around.  They also add no value IMO, e.g. "static" makes it quite obvious it's a
> local function, and sev_* vs. sev_es_*. vs. sev_snp_* namespacing takes care of the
> rest.
>
> > +void sev_ioctl(int sev_fd, int cmd, void *data)
> > +{
> > +     int ret;
> > +     struct sev_issue_cmd arg;
> > +
> > +     arg.cmd = cmd;
> > +     arg.data = (unsigned long)data;
> > +     ret = ioctl(sev_fd, SEV_ISSUE_CMD, &arg);
> > +     TEST_ASSERT(ret == 0,
> > +                 "SEV ioctl %d failed, error: %d, fw_error: %d",
> > +                 cmd, ret, arg.error);
> > +}
> > +
> > +void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data)
> > +{
> > +     struct kvm_sev_cmd arg = {0};
> > +     int ret;
> > +
> > +     arg.id = cmd;
> > +     arg.sev_fd = sev->fd;
> > +     arg.data = (__u64)data;
> > +
> > +     ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_OP, &arg);
> > +     TEST_ASSERT(ret == 0,
> > +                 "SEV KVM ioctl %d failed, rc: %i errno: %i (%s), fw_error: %d",
> > +                 cmd, ret, errno, strerror(errno), arg.error);
> > +}
> > +
> > +/* Local helpers. */
> > +
> > +static void sev_register_user_region(struct sev_vm *sev, void *hva, uint64_t size)
> > +{
> > +     struct kvm_enc_region range = {0};
> > +     int ret;
> > +
> > +     pr_debug("%s: hva: %p, size: %lu\n", __func__, hva, size);
> > +
> > +     range.addr = (__u64)hva;
> > +     range.size = size;
> > +
> > +     ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
> > +     TEST_ASSERT(ret == 0, "failed to register user range, errno: %i\n", errno);
> > +}
> > +
> > +static void sev_encrypt_phy_range(struct sev_vm *sev, vm_paddr_t gpa, uint64_t size)
> > +{
> > +     struct kvm_sev_launch_update_data ksev_update_data = {0};
> > +
> > +     pr_debug("%s: addr: 0x%lx, size: %lu\n", __func__, gpa, size);
> > +
> > +     ksev_update_data.uaddr = (__u64)addr_gpa2hva(sev->vm, gpa);
> > +     ksev_update_data.len = size;
> > +
> > +     kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_UPDATE_DATA, &ksev_update_data);
> > +}
> > +
> > +static void sev_encrypt(struct sev_vm *sev)
> > +{
> > +     const struct sparsebit *enc_phy_pages;
> > +     struct kvm_vm *vm = sev->vm;
> > +     sparsebit_idx_t pg = 0;
> > +     vm_paddr_t gpa_start;
> > +     uint64_t memory_size;
> > +     int ctr;
> > +     struct userspace_mem_region *region;
> > +
> > +     hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
> > +             enc_phy_pages = vm_get_encrypted_phy_pages(
>
> Please don't wrap after the opening parenthesis unless it's really, really necessary.
> More for future reference since I think vm_get_encrypted_phy_pages() should be open
> coded here.  E.g. in this case, the "enc_phy_" prefix doesn't add much value, and
> dropping that makes the code easier to read overall.
>
>                 pages = vm_get_encrypted_phy_pages(sev->vm, region->region.slot,
>                                                    &gpa_start, &memory_size);
>
> > +                     sev->vm, region->region.slot, &gpa_start, &memory_size);
> > +             TEST_ASSERT(enc_phy_pages,
> > +                         "Unable to retrieve encrypted pages bitmap");
> > +             while (pg < (memory_size / vm->page_size)) {
> > +                     sparsebit_idx_t pg_cnt;
>
> s/pg_cnt/nr_pages
>
> > +
> > +                     if (sparsebit_is_clear(enc_phy_pages, pg)) {
> > +                             pg = sparsebit_next_set(enc_phy_pages, pg);
> > +                             if (!pg)
> > +                                     break;
> > +                     }
> > +
> > +                     pg_cnt = sparsebit_next_clear(enc_phy_pages, pg) - pg;
> > +                     if (pg_cnt <= 0)
> > +                             pg_cnt = 1;
> > +
> > +                     sev_encrypt_phy_range(sev,
> > +                                           gpa_start + pg * vm->page_size,
> > +                                           pg_cnt * vm->page_size);
> > +                     pg += pg_cnt;
> > +             }
> > +     }
> > +
> > +     sev->vm->memcrypt.encrypted = true;
> > +}
> > +
> > +/* SEV VM implementation. */
> > +
> > +static struct sev_vm *sev_vm_alloc(struct kvm_vm *vm)
> > +{
> > +     struct sev_user_data_status sev_status;
> > +     uint32_t eax, ebx, ecx, edx;
> > +     struct sev_vm *sev;
> > +     int sev_fd;
> > +
> > +     sev_fd = open(SEV_DEV_PATH, O_RDWR);
> > +     if (sev_fd < 0) {
> > +             pr_info("Failed to open SEV device, path: %s, error: %d, skipping test.\n",
> > +                     SEV_DEV_PATH, sev_fd);
> > +             return NULL;
>
> Printing "skipping test" is wrong as there's no guarantee the caller is going to
> skip the test.  E.g. the sole user in this series asserts, i.e. fails the test.
>
> I also think that waiting until VM allocation to perform these sanity checks is
> flawed.  Rather do these checks every time, add helpers to query SEV and SEV-ES
> support, and then use TEST_REQUIRE() to actually skip tests that require support,
> e.g.
>
>         TEST_REQUIRE(kvm_is_sev_supported());
>
> or
>
>         TEST_REQUIRE(kvm_is_sev_es_supported());
>
> Then this helper can simply assert that opening SEV_DEV_PATH succeeds.
>
> > +     }
> > +
> > +     sev_ioctl(sev_fd, SEV_PLATFORM_STATUS, &sev_status);
> > +
> > +     if (!(sev_status.api_major > SEV_FW_REQ_VER_MAJOR ||
> > +           (sev_status.api_major == SEV_FW_REQ_VER_MAJOR &&
> > +            sev_status.api_minor >= SEV_FW_REQ_VER_MINOR))) {
> > +             pr_info("SEV FW version too old. Have API %d.%d (build: %d), need %d.%d, skipping test.\n",
> > +                     sev_status.api_major, sev_status.api_minor, sev_status.build,
> > +                     SEV_FW_REQ_VER_MAJOR, SEV_FW_REQ_VER_MINOR);
> > +             return NULL;
> > +     }
> > +
> > +     sev = calloc(1, sizeof(*sev));
>
> TEST_ASSERT(sev, ...)
>
> > +     sev->fd = sev_fd;
> > +     sev->vm = vm;
> > +
> > +     /* Get encryption bit via CPUID. */
> > +     cpuid(CPUID_MEM_ENC_LEAF, &eax, &ebx, &ecx, &edx);
> > +     sev->enc_bit = ebx & CPUID_EBX_CBIT_MASK;
>
> Oh hey, another series of mine[*] that you can leverage :-)
>
> [*] https://lore.kernel.org/all/20221006005125.680782-1-seanjc@google.com
>
> > +
> > +     return sev;
> > +}
> > +
> > +void sev_vm_free(struct sev_vm *sev)
> > +{
> > +     kvm_vm_free(sev->vm);
> > +     close(sev->fd);
> > +     free(sev);
> > +}
> > +
> > +struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
>
> The migration test already defines sev_vm_create().  That conflict needs to be
> resolved.
>
> > +{
> > +     struct sev_vm *sev;
> > +     struct kvm_vm *vm;
> > +
> > +     /* Need to handle memslots after init, and after setting memcrypt. */
> > +     vm = vm_create_barebones();
>
> Do not use vm_create_barebones().  That API is only to be used for tests that do
> not intend to run vCPUs.
>
>
>
> > +     sev = sev_vm_alloc(vm);
> > +     if (!sev)
> > +             return NULL;
> > +     sev->sev_policy = policy;
> > +
> > +     kvm_sev_ioctl(sev, KVM_SEV_INIT, NULL);
> > +
> > +     vm->vpages_mapped = sparsebit_alloc();
>
> This is unnecessary and leaks memory, vm->vpages_mapped is allocated by
> ____vm_create().
>
> > +     vm_set_memory_encryption(vm, true, true, sev->enc_bit);
> > +     pr_info("SEV cbit: %d\n", sev->enc_bit);
> > +     vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages, 0);
> > +     sev_register_user_region(sev, addr_gpa2hva(vm, 0),
> > +                              npages * vm->page_size);
>
> Burying sev_register_user_region() in here is not going to be maintainable.  I
> think the best way to handle this is to add an arch hook in vm_userspace_mem_region_add()
> and automatically register regions when they're created.
>
> And with that, I believe sev_vm_create() can go away entirely and the SEV encryption
> stuff can be handled via a new vm_guest_mode.  ____vm_create() already has a gross
> __x86_64__ hook that we can tweak, e.g.
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 54b8d8825f5d..2d6cbca2c01a 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -238,9 +238,10 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
>         case VM_MODE_P36V47_16K:
>                 vm->pgtable_levels = 3;
>                 break;
> +       case VM_MODE_PXXV48_4K_SEV:
>         case VM_MODE_PXXV48_4K:
>  #ifdef __x86_64__
> -               kvm_get_cpu_address_width(&vm->pa_bits, &vm->va_bits);
> +               kvm_init_vm_address_properties(vm);
>                 /*
>                  * Ignore KVM support for 5-level paging (vm->va_bits == 57),
>                  * it doesn't take effect unless a CR4.LA57 is set, which it
>
> Then kvm_init_vm_address_properties() can pivot on vm->mode to deal with SEV
> specific stuff.
>
> > +
> > +     pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n",
> > +             sev->sev_policy, npages * vm->page_size / 1024);
> > +
> > +     return sev;
> > +}
> > +
> > +void sev_vm_launch(struct sev_vm *sev)
> > +{
> > +     struct kvm_sev_launch_start ksev_launch_start = {0};
> > +     struct kvm_sev_guest_status ksev_status;
> > +
> > +     ksev_launch_start.policy = sev->sev_policy;
> > +     kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_START, &ksev_launch_start);
> > +     kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
> > +     TEST_ASSERT(ksev_status.policy == sev->sev_policy, "Incorrect guest policy.");
> > +     TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE,
> > +                 "Unexpected guest state: %d", ksev_status.state);
> > +
> > +     ucall_init(sev->vm, 0);
> > +
> > +     sev_encrypt(sev);
> > +}
> > +
> > +void sev_vm_launch_measure(struct sev_vm *sev, uint8_t *measurement)
> > +{
> > +     struct kvm_sev_launch_measure ksev_launch_measure;
> > +     struct kvm_sev_guest_status ksev_guest_status;
> > +
> > +     ksev_launch_measure.len = 256;
> > +     ksev_launch_measure.uaddr = (__u64)measurement;
> > +     kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_MEASURE, &ksev_launch_measure);
> > +
> > +     /* Measurement causes a state transition, check that. */
> > +     kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_guest_status);
> > +     TEST_ASSERT(ksev_guest_status.state == SEV_GSTATE_LSECRET,
> > +                 "Unexpected guest state: %d", ksev_guest_status.state);
> > +}
> > +
> > +void sev_vm_launch_finish(struct sev_vm *sev)
> > +{
> > +     struct kvm_sev_guest_status ksev_status;
> > +
> > +     kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
> > +     TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE ||
> > +                 ksev_status.state == SEV_GSTATE_LSECRET,
> > +                 "Unexpected guest state: %d", ksev_status.state);
> > +
> > +     kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_FINISH, NULL);
> > +
> > +     kvm_sev_ioctl(sev, KVM_SEV_GUEST_STATUS, &ksev_status);
> > +     TEST_ASSERT(ksev_status.state == SEV_GSTATE_RUNNING,
> > +                 "Unexpected guest state: %d", ksev_status.state);
> > +}
>
> Rather than force each test to invoke these via something like setup_test_common(),
> add the same kvm_arch_vm_post_create() hook that Vishal is likely going to add,
> and then automatically do all of the launch+measure+finish stuff for non-barebones
> VMs.  That will let SEV/SEV-ES tests use __vm_create_with_vcpus() and
> __vm_create().
>
> And it'd be a little gross, but I think it'd be worthwhile to add another layer
> to the "one_vcpu" helpers to make things even more convenient, e.g.
>
> struct kvm_vm *____vm_create_with_one_vcpu(enum vm_guest_mode mode,
>                                            struct kvm_vcpu **vcpu,
>                                            uint64_t extra_mem_pages,
>                                            void *guest_code)
> {
>         struct kvm_vcpu *vcpus[1];
>         struct kvm_vm *vm;
>
>         vm = __vm_create_with_vcpus(mode, 1, extra_mem_pages, guest_code, vcpus);
>
>         *vcpu = vcpus[0];
>         return vm;
> }
>
> static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
>                                                        uint64_t extra_mem_pages,
>                                                        void *guest_code)
> {
>         return ____vm_create_with_one_vcpu(VM_MODE_DEFAULT, vcpu,
>                                            extra_mem_pages, guest_code);
> }
>
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index dafe4471a6c7..593dfadb662e 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -298,9 +298,8 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
>
>         kvm_vm_elf_load(vm, program_invocation_name);
>
> -#ifdef __x86_64__
> -       vm_create_irqchip(vm);
> -#endif
> +       kvm_arch_vm_post_create(vm);
> +
>         return vm;
>  }
>
>
> [*] https://lore.kernel.org/all/YzsC4ibDqGh5qaP9@google.com

This refactor sounds good, working on this with a few changes.

Instead of kvm_init_vm_address_properties() as you suggested I've added this:

@@ -272,6 +275,8 @@ struct kvm_vm *____vm_create(enum vm_guest_mode
 mode, uint64_t nr_pages)
                vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
 #endif

+       kvm_init_vm_arch(vm);
+
        vm_open(vm);

        /* Limit to VA-bit canonical virtual addresses. */

And I need to put kvm_arch_vm_post_create() after the vCPUs are
created because the ordering we need is: KVM_SEV_INIT -> Create vCPUs
-> KVM_SEV_LAUNCH_FINISH.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-17 16:32     ` Peter Gonda
@ 2022-10-17 18:04       ` Sean Christopherson
  2022-10-17 18:25         ` Peter Gonda
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-10-17 18:04 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Mon, Oct 17, 2022, Peter Gonda wrote:
> On Thu, Oct 6, 2022 at 12:25 PM Sean Christopherson <seanjc@google.com> wrote:
> > And with that, I believe sev_vm_create() can go away entirely and the SEV encryption
> > stuff can be handled via a new vm_guest_mode.  ____vm_create() already has a gross
> > __x86_64__ hook that we can tweak, e.g.
> >
> > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > index 54b8d8825f5d..2d6cbca2c01a 100644
> > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > @@ -238,9 +238,10 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
> >         case VM_MODE_P36V47_16K:
> >                 vm->pgtable_levels = 3;
> >                 break;
> > +       case VM_MODE_PXXV48_4K_SEV:
> >         case VM_MODE_PXXV48_4K:
> >  #ifdef __x86_64__
> > -               kvm_get_cpu_address_width(&vm->pa_bits, &vm->va_bits);
> > +               kvm_init_vm_address_properties(vm);
> >                 /*
> >                  * Ignore KVM support for 5-level paging (vm->va_bits == 57),
> >                  * it doesn't take effect unless a CR4.LA57 is set, which it
> >
> > Then kvm_init_vm_address_properties() can pivot on vm->mode to deal with SEV
> > specific stuff.

...

> This refactor sounds good, working on this with a few changes.
> 
> Instead of kvm_init_vm_address_properties() as you suggested I've added this:
> 
> @@ -272,6 +275,8 @@ struct kvm_vm *____vm_create(enum vm_guest_mode
>  mode, uint64_t nr_pages)
>                 vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
>  #endif
> 
> +       kvm_init_vm_arch(vm);

Why?  I'm not necessarily opposed to adding kvm_init_vm_arch(), but since x86
"needs" a dedicated hook to unpack the mode, why not piggyback that one?

> +
>         vm_open(vm);
> 
>         /* Limit to VA-bit canonical virtual addresses. */
> 
> And I need to put kvm_arch_vm_post_create() after the vCPUs are
> created because the ordering we need is: KVM_SEV_INIT -> Create vCPUS
> -> KVM_SEV_LAUNCH_FINISH.

Hrm, that's annoying.  Please don't use kvm_arch_vm_post_create() as the name,
that's a better fit for what Vishal is doing since the "vm_post_create()" implies
that it's called for "all" VM creation paths, where "all" means "everything
except barebones VMs".  E.g. in Vishal's series, kvm_arch_vm_post_create() can
be used to drop the vm_create_irqchip() call in common code.  In your case, IIUC
the hook will be invoked from __vm_create_with_vcpus().

I'm a little hesitant to have an arch hook for this case since it can't be
all-or-nothing (again, ignoring barebones VMs).  If a "finalize" arch hook is added,
then arguably tests that do __vm_create() and manually add vCPUs should call the
arch hook, i.e. we'd be adding maintenance burden to tests that in all likelihood
don't care about SEV and never will.

It's somewhat unfortunate, but dedicated vm_sev_create_with_one_vcpu() and
vm_sev_create_with_vcpus() wrappers are probably the least awful solution.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-17 18:04       ` Sean Christopherson
@ 2022-10-17 18:25         ` Peter Gonda
  2022-10-17 20:34           ` Sean Christopherson
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Gonda @ 2022-10-17 18:25 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Mon, Oct 17, 2022 at 12:04 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Oct 17, 2022, Peter Gonda wrote:
> > On Thu, Oct 6, 2022 at 12:25 PM Sean Christopherson <seanjc@google.com> wrote:
> > > And with that, I believe sev_vm_create() can go away entirely and the SEV encryption
> > > stuff can be handled via a new vm_guest_mode.  ____vm_create() already has a gross
> > > __x86_64__ hook that we can tweak, e.g.
> > >
> > > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > > index 54b8d8825f5d..2d6cbca2c01a 100644
> > > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > > @@ -238,9 +238,10 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
> > >         case VM_MODE_P36V47_16K:
> > >                 vm->pgtable_levels = 3;
> > >                 break;
> > > +       case VM_MODE_PXXV48_4K_SEV:
> > >         case VM_MODE_PXXV48_4K:
> > >  #ifdef __x86_64__
> > > -               kvm_get_cpu_address_width(&vm->pa_bits, &vm->va_bits);
> > > +               kvm_init_vm_address_properties(vm);
> > >                 /*
> > >                  * Ignore KVM support for 5-level paging (vm->va_bits == 57),
> > >                  * it doesn't take effect unless a CR4.LA57 is set, which it
> > >
> > > Then kvm_init_vm_address_properties() can pivot on vm->mode to deal with SEV
> > > specific stuff.
>
> ...
>
> > This refactor sounds good, working on this with a few changes.
> >
> > Instead of kvm_init_vm_address_properties() as you suggested I've added this:
> >
> > @@ -272,6 +275,8 @@ struct kvm_vm *____vm_create(enum vm_guest_mode
> >  mode, uint64_t nr_pages)
> >                 vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
> >  #endif
> >
> > +       kvm_init_vm_arch(vm);
>
> Why?  I'm not necessarily opposed to adding kvm_init_vm_arch(), but since x86
> "needs" a dedicated hook to unpack the mode, why not piggyback that one?
>

Well, since I need to do more than just
kvm_init_vm_address_properties(), I thought the more generic name would
be better. We need to allocate kvm_vm_arch, find the C-bit, and call
KVM_SEV_INIT. I can put it back in that switch case if that's better,
thoughts?

> > +
> >         vm_open(vm);
> >
> >         /* Limit to VA-bit canonical virtual addresses. */
> >
> > And I need to put kvm_arch_vm_post_create() after the vCPUs are
> > created because the ordering we need is: KVM_SEV_INIT -> Create vCPUS
> > -> KVM_SEV_LAUNCH_FINISH.
>
> Hrm, that's annoying.  Please don't use kvm_arch_vm_post_create() as the name,
> that's a better fit for what Vishal is doing since the "vm_post_create()" implies
> that it's called for "all" VM creation paths, where "all" means "everything
> except barebones VMs".  E.g. in Vishal's series, kvm_arch_vm_post_create() can
> be used to drop the vm_create_irqchip() call in common code.  In your case, IIUC
> the hook will be invoked from __vm_create_with_vcpus().
>
> I'm a little hesitant to have an arch hook for this case since it can't be
> all-or-nothing (again, ignoring barebones VMs).  If a "finalize" arch hook is added,
> then arguably tests that do __vm_create() and manually add vCPUs should call the
> arch hook, i.e. we'd be adding maintenance burden to tests that in all likelihood
> don't care about SEV and never will.
>
> It's somewhat unfortunate, but dedicated vm_sev_create_with_one_vcpu() and
> vm_sev_create_with_vcpus() wrappers are probably the least awful solution.

Makes sense. I think we can go back to your suggestion of
kvm_init_vm_address_properties() above since we can now do all the
KVM_SEV_* stuff. I think this means we don't need to add
VM_MODE_PXXV48_4K_SEV since we can set up the C-bit from inside of
vm_sev_create_*(), thoughts?

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-17 18:25         ` Peter Gonda
@ 2022-10-17 20:34           ` Sean Christopherson
  2022-10-18 14:59             ` Peter Gonda
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-10-17 20:34 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Mon, Oct 17, 2022, Peter Gonda wrote:
> On Mon, Oct 17, 2022 at 12:04 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Mon, Oct 17, 2022, Peter Gonda wrote:
> > > This refactor sounds good, working on this with a few changes.
> > >
> > > Instead of kvm_init_vm_address_properties() as you suggested I've added this:
> > >
> > > @@ -272,6 +275,8 @@ struct kvm_vm *____vm_create(enum vm_guest_mode
> > >  mode, uint64_t nr_pages)
> > >                 vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
> > >  #endif
> > >
> > > +       kvm_init_vm_arch(vm);
> >
> > Why?  I'm not necessarily opposed to adding kvm_init_vm_arch(), but since x86
> > "needs" a dedicated hook to unpack the mode, why not piggyback that one?
> >
> 
> Well, since I need to do more than just
> kvm_init_vm_address_properties(), I thought the more generic name would
> be better. We need to allocate kvm_vm_arch, find the C-bit, and call
> KVM_SEV_INIT. I can put it back in that switch case if that's better,
> thoughts?
> 
> > > +
> > >         vm_open(vm);
> > >
> > >         /* Limit to VA-bit canonical virtual addresses. */
> > >
> > > And I need to put kvm_arch_vm_post_create() after the vCPUs are
> > > created because the ordering we need is: KVM_SEV_INIT -> Create vCPUS
> > > -> KVM_SEV_LAUNCH_FINISH.
> >
> > Hrm, that's annoying.  Please don't use kvm_arch_vm_post_create() as the name,
> > that's a better fit for what Vishal is doing since the "vm_post_create()" implies
> > that it's called for "all" VM creation paths, where "all" means "everything
> > except barebones VMs".  E.g. in Vishal's series, kvm_arch_vm_post_create() can
> > be used to drop the vm_create_irqchip() call in common code.  In your case, IIUC
> > the hook will be invoked from __vm_create_with_vcpus().
> >
> > I'm a little hesitant to have an arch hook for this case since it can't be
> > all-or-nothing (again, ignoring barebones VMs).  If a "finalize" arch hook is added,
> > then arguably tests that do __vm_create() and manually add vCPUs should call the
> > arch hook, i.e. we'd be adding maintenance burden to tests that in all likelihood
> > don't care about SEV and never will.
> >
> > It's somewhat unfortunate, but dedicated vm_sev_create_with_one_vcpu() and
> > vm_sev_create_with_vcpus() wrappers are probably the least awful solution.
> 
> Makes sense. I think we can go back to your suggestion of
> kvm_init_vm_address_properties() above since we can now do all the
> KVM_SEV_* stuff. I think this means we don't need to add
> VM_MODE_PXXV48_4K_SEV since we can set up the C-bit from inside of
> vm_sev_create_*(), thoughts?

Configuring the C-bit inside vm_sev_create_*() won't work (at least not well).
The C-bit needs to be known before kvm_vm_elf_load(), i.e. can't be handled after
__vm_create(), and needs to be tracked inside the VM, i.e. can't be handled before
__vm_create().

The proposed kvm_init_vm_address_properties() seems like the best fit since the
C-bit (and TDX's S-bit) is stolen from GPA space, i.e. directly affects the other
values computed in that path.

As for the kvm_vm_arch allocation ugliness, when we talked off-list I didn't
consider the need to allocate in kvm_init_vm_address_properties().  That's quite
gross, especially since the pointer will be larger than the thing being allocated.

With that in mind, adding .../include/<arch>/kvm_util.h so that "struct kvm_vm_arch"
can be defined and referenced directly doesn't seem so bad.  Having to stub in the
struct for the other architectures is annoying, but not the end of the world.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-17 20:34           ` Sean Christopherson
@ 2022-10-18 14:59             ` Peter Gonda
  2022-10-19 16:34               ` Sean Christopherson
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Gonda @ 2022-10-18 14:59 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Mon, Oct 17, 2022 at 2:34 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Oct 17, 2022, Peter Gonda wrote:
> > On Mon, Oct 17, 2022 at 12:04 PM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > On Mon, Oct 17, 2022, Peter Gonda wrote:
> > > > This refactor sounds good, working on this with a few changes.
> > > >
> > > > Instead of kvm_init_vm_address_properties() as you suggested I've added this:
> > > >
> > > > @@ -272,6 +275,8 @@ struct kvm_vm *____vm_create(enum vm_guest_mode
> > > >  mode, uint64_t nr_pages)
> > > >                 vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
> > > >  #endif
> > > >
> > > > +       kvm_init_vm_arch(vm);
> > >
> > > Why?  I'm not necessarily opposed to adding kvm_init_vm_arch(), but since x86
> > > "needs" a dedicated hook to unpack the mode, why not piggyback that one?
> > >
> >
> > Well, since I need to do more than just
> > kvm_init_vm_address_properties(), I thought the more generic name would
> > be better. We need to allocate kvm_vm_arch, find the C-bit, and call
> > KVM_SEV_INIT. I can put it back in that switch case if that's better,
> > thoughts?
> >
> > > > +
> > > >         vm_open(vm);
> > > >
> > > >         /* Limit to VA-bit canonical virtual addresses. */
> > > >
> > > > And I need to put kvm_arch_vm_post_create() after the vCPUs are
> > > > created because the ordering we need is: KVM_SEV_INIT -> Create vCPUS
> > > > -> KVM_SEV_LAUNCH_FINISH.
> > >
> > > Hrm, that's annoying.  Please don't use kvm_arch_vm_post_create() as the name,
> > > that's a better fit for what Vishal is doing since the "vm_post_create()" implies
> > > that it's called for "all" VM creation paths, where "all" means "everything
> > > except barebones VMs".  E.g. in Vishal's series, kvm_arch_vm_post_create() can
> > > be used to drop the vm_create_irqchip() call in common code.  In your case, IIUC
> > > the hook will be invoked from __vm_create_with_vcpus().
> > >
> > > I'm a little hesitant to have an arch hook for this case since it can't be
> > > all-or-nothing (again, ignoring barebones VMs).  If a "finalize" arch hook is added,
> > > then arguably tests that do __vm_create() and manually add vCPUs should call the
> > > arch hook, i.e. we'd be adding maintenance burden to tests that in all likelihood
> > > don't care about SEV and never will.
> > >
> > > It's somewhat unfortunate, but dedicated vm_sev_create_with_one_vcpu() and
> > > vm_sev_create_with_vcpus() wrappers are probably the least awful solution.
> >
> > Makes sense. I think we can go back to your suggestion of
> > kvm_init_vm_address_properties() above since we can now do all the
> > KVM_SEV_* stuff. I think this means we don't need to add
> > VM_MODE_PXXV48_4K_SEV since we can set up the C-bit from inside of
> > vm_sev_create_*(), thoughts?
>
> Configuring the C-bit inside vm_sev_create_*() won't work (at least not well).
> The C-bit needs to be known before kvm_vm_elf_load(), i.e. can't be handled after
> __vm_create(), and needs to be tracked inside the VM, i.e. can't be handled before
> __vm_create().
>
> The proposed kvm_init_vm_address_properties() seems like the best fit since the
> C-bit (and TDX's S-bit) is stolen from GPA space, i.e. directly affects the other
> values computed in that path.
>
> As for the kvm_vm_arch allocation ugliness, when we talked off-list I didn't
> consider the need to allocate in kvm_init_vm_address_properties().  That's quite
> gross, especially since the pointer will be larger than the thing being allocated.
>
> With that in mind, adding .../include/<arch>/kvm_util.h so that "struct kvm_vm_arch"
> can be defined and referenced directly doesn't seem so bad.  Having to stub in the
> struct for the other architectures is annoying, but not the end of the world.

I'll make "struct kvm_vm_arch" a non-pointer member, which means adding
/include/<arch>/kvm_util.h files.

But I think we do not need VM_MODE_PXXV48_4K_SEV, see:

struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
                                           struct kvm_vcpu **cpu)
{
        enum vm_guest_mode mode = VM_MODE_PXXV48_4K;
        uint64_t nr_pages = vm_nr_pages_required(mode, 1, 0);
        struct kvm_vm *vm;
        uint8_t measurement[512];
        int i;

        vm = ____vm_create(mode, nr_pages);

        kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);

        configure_sev_pte_masks(vm);

        *cpu = vm_vcpu_add(vm, 0, guest_code);
        kvm_vm_elf_load(vm, program_invocation_name);

        sev_vm_launch(vm, policy);

        /*
         * Dump the initial measurement.  A test to actually verify it
         * would be nice.
         */
        sev_vm_launch_measure(vm, measurement);
        pr_info("guest measurement: ");
        for (i = 0; i < 32; ++i)
                pr_info("%02x", measurement[i]);
        pr_info("\n");

        sev_vm_launch_finish(vm);

        return vm;
}

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-18 14:59             ` Peter Gonda
@ 2022-10-19 16:34               ` Sean Christopherson
  2022-10-27 16:24                 ` Peter Gonda
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-10-19 16:34 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Tue, Oct 18, 2022, Peter Gonda wrote:
> On Mon, Oct 17, 2022 at 2:34 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Mon, Oct 17, 2022, Peter Gonda wrote:
> > > I think this means we don't need to add VM_MODE_PXXV48_4K_SEV since we
> > > can set up the c-bit from inside of vm_sev_create_*(), thoughts?
> >
> > Configuring the C-bit inside vm_sev_create_*() won't work (at least not well).
> > The C-bit needs to be known before kvm_vm_elf_load(), i.e. can't be handled after
> > __vm_create(), and needs to be tracked inside the VM, i.e. can't be handled before
> > __vm_create().
> >
> > The proposed kvm_init_vm_address_properties() seems like the best fit since the
> > C-bit (and TDX's S-bit) is stolen from GPA space, i.e. directly affects the other
> > values computed in that path.
> >
> > As for the kvm_vm_arch allocation ugliness, when we talked off-list I didn't
> > consider the need to allocate in kvm_init_vm_address_properties().  That's quite
> > gross, especially since the pointer will be larger than the thing being allocated.
> >
> > With that in mind, adding .../include/<arch>/kvm_util.h so that "struct kvm_vm_arch"
> > can be defined and referenced directly doesn't seem so bad.  Having to stub in the
> > struct for the other architectures is annoying, but not the end of the world.
> 
> I'll make "struct kvm_vm_arch" a non-pointer member, so I'll add the
> /include/<arch>/kvm_util.h files.
> 
> But I think we do not need VM_MODE_PXXV48_4K_SEV, see:

I really don't want to open code __vm_create() with a slight tweak.  E.g. the
below code will be broken by Ricardo's series, which moves memslot0 out of
____vm_create()[1], and kinda sorta be broken again by Vishal's series, which
adds an arch hook to __vm_create()[2].

AFAICT, there is no requirement that KVM_SEV_INIT be called before computing the
C-bit; the only requirement is that KVM_SEV_INIT is called before adding vCPUs.

[1] https://lore.kernel.org/all/20221017195834.2295901-8-ricarkol@google.com
[2] https://lore.kernel.org/all/YzsC4ibDqGh5qaP9@google.com

> struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
>                                            struct kvm_vcpu **cpu)
> {
>         enum vm_guest_mode mode = VM_MODE_PXXV48_4K;
>         uint64_t nr_pages = vm_nr_pages_required(mode, 1, 0);
>         struct kvm_vm *vm;
>         uint8_t measurement[512];
>         int i;
> 
>         vm = ____vm_create(mode, nr_pages);
> 
>         kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
> 
>         configure_sev_pte_masks(vm);
> 
>         *cpu = vm_vcpu_add(vm, 0, guest_code);
>         kvm_vm_elf_load(vm, program_invocation_name);
> 
>         sev_vm_launch(vm, policy);
> 
>         /*
>          * Dump the initial measurement.  A test to actually verify it
>          * would be nice.
>          */
>         sev_vm_launch_measure(vm, measurement);
>         pr_info("guest measurement: ");
>         for (i = 0; i < 32; ++i)
>                 pr_info("%02x", measurement[i]);
>         pr_info("\n");
> 
>         sev_vm_launch_finish(vm);
> 
>         return vm;
> }

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-19 16:34               ` Sean Christopherson
@ 2022-10-27 16:24                 ` Peter Gonda
  2022-10-27 17:59                   ` Sean Christopherson
  0 siblings, 1 reply; 24+ messages in thread
From: Peter Gonda @ 2022-10-27 16:24 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Wed, Oct 19, 2022 at 10:34 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Oct 18, 2022, Peter Gonda wrote:
> > On Mon, Oct 17, 2022 at 2:34 PM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > On Mon, Oct 17, 2022, Peter Gonda wrote:
> > > > I think this means we don't need to add VM_MODE_PXXV48_4K_SEV since we
> > > > can set up the c-bit from inside of vm_sev_create_*(), thoughts?
> > >
> > > Configuring the C-bit inside vm_sev_create_*() won't work (at least not well).
> > > The C-bit needs to be known before kvm_vm_elf_load(), i.e. can't be handled after
> > > __vm_create(), and needs to be tracked inside the VM, i.e. can't be handled before
> > > __vm_create().
> > >
> > > The proposed kvm_init_vm_address_properties() seems like the best fit since the
> > > C-bit (and TDX's S-bit) is stolen from GPA space, i.e. directly affects the other
> > > values computed in that path.
> > >
> > > As for the kvm_vm_arch allocation ugliness, when we talked off-list I didn't
> > > consider the need to allocate in kvm_init_vm_address_properties().  That's quite
> > > gross, especially since the pointer will be larger than the thing being allocated.
> > >
> > > With that in mind, adding .../include/<arch>/kvm_util.h so that "struct kvm_vm_arch"
> > > can be defined and referenced directly doesn't seem so bad.  Having to stub in the
> > > struct for the other architectures is annoying, but not the end of the world.
> >
> > I'll make "struct kvm_vm_arch" a non-pointer member, so I'll add the
> > /include/<arch>/kvm_util.h files.
> >
> > But I think we do not need VM_MODE_PXXV48_4K_SEV, see:
>
> I really don't want to open code __vm_create() with a slight tweak.  E.g. the
> below code will be broken by Ricardo's series, which moves memslot0 out of
> ____vm_create()[1], and kinda sorta be broken again by Vishal's series, which
> adds an arch hook to __vm_create()[2].
>
> AFAICT, there is no requirement that KVM_SEV_INIT be called before computing the
> C-Bit, the only requirement is that KVM_SEV_INIT is called before adding vCPUs.
>
> [1] https://lore.kernel.org/all/20221017195834.2295901-8-ricarkol@google.com
> [2] https://lore.kernel.org/all/YzsC4ibDqGh5qaP9@google.com

Oh I misunderstood your suggestion above.

I should make KVM_SEV_INIT happen from kvm_arch_vm_post_create(), and
add VM_MODE_PXXV48_4K_SEV so the C-bit is set up inside
kvm_init_vm_address_properties().

Inside of vm_sev_create_with_one_vcpu() I use
__vm_create_with_vcpus(), then call KVM_SEV_LAUNCH_FINISH.

Is that correct?



>
> > struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
> >                                            struct kvm_vcpu **cpu)
> > {
> >         enum vm_guest_mode mode = VM_MODE_PXXV48_4K;
> >         uint64_t nr_pages = vm_nr_pages_required(mode, 1, 0);
> >         struct kvm_vm *vm;
> >         uint8_t measurement[512];
> >         int i;
> >
> >         vm = ____vm_create(mode, nr_pages);
> >
> >         kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
> >
> >         configure_sev_pte_masks(vm);
> >
> >         *cpu = vm_vcpu_add(vm, 0, guest_code);
> >         kvm_vm_elf_load(vm, program_invocation_name);
> >
> >         sev_vm_launch(vm, policy);
> >
> >         /*
> >          * Dump the initial measurement.  A test to actually verify it
> >          * would be nice.
> >          */
> >         sev_vm_launch_measure(vm, measurement);
> >         pr_info("guest measurement: ");
> >         for (i = 0; i < 32; ++i)
> >                 pr_info("%02x", measurement[i]);
> >         pr_info("\n");
> >
> >         sev_vm_launch_finish(vm);
> >
> >         return vm;
> > }

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-27 16:24                 ` Peter Gonda
@ 2022-10-27 17:59                   ` Sean Christopherson
  2022-10-27 18:34                     ` Peter Gonda
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-10-27 17:59 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Thu, Oct 27, 2022, Peter Gonda wrote:
> On Wed, Oct 19, 2022 at 10:34 AM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Tue, Oct 18, 2022, Peter Gonda wrote:
> > > On Mon, Oct 17, 2022 at 2:34 PM Sean Christopherson <seanjc@google.com> wrote:
> > > >
> > > > On Mon, Oct 17, 2022, Peter Gonda wrote:
> > > > > I think this means we don't need to add VM_MODE_PXXV48_4K_SEV since we
> > > > > can set up the c-bit from inside of vm_sev_create_*(), thoughts?
> > > >
> > > > Configuring the C-bit inside vm_sev_create_*() won't work (at least not well).
> > > > The C-bit needs to be known before kvm_vm_elf_load(), i.e. can't be handled after
> > > > __vm_create(), and needs to be tracked inside the VM, i.e. can't be handled before
> > > > __vm_create().
> > > >
> > > > The proposed kvm_init_vm_address_properties() seems like the best fit since the
> > > > C-bit (and TDX's S-bit) is stolen from GPA space, i.e. directly affects the other
> > > > values computed in that path.
> > > >
> > > > As for the kvm_vm_arch allocation ugliness, when we talked off-list I didn't
> > > > consider the need to allocate in kvm_init_vm_address_properties().  That's quite
> > > > gross, especially since the pointer will be larger than the thing being allocated.
> > > >
> > > > With that in mind, adding .../include/<arch>/kvm_util.h so that "struct kvm_vm_arch"
> > > > can be defined and referenced directly doesn't seem so bad.  Having to stub in the
> > > > struct for the other architectures is annoying, but not the end of the world.
> > >
> > > I'll make "struct kvm_vm_arch" a non-pointer member, so I'll add the
> > > /include/<arch>/kvm_util.h files.
> > >
> > > But I think we do not need VM_MODE_PXXV48_4K_SEV, see:
> >
> > I really don't want to open code __vm_create() with a slight tweak.  E.g. the
> > below code will be broken by Ricardo's series, which moves memslot0 out of
> > ____vm_create()[1], and kinda sorta be broken again by Vishal's series, which
> > adds an arch hook to __vm_create()[2].
> >
> > AFAICT, there is no requirement that KVM_SEV_INIT be called before computing the
> > C-Bit, the only requirement is that KVM_SEV_INIT is called before adding vCPUs.
> >
> > [1] https://lore.kernel.org/all/20221017195834.2295901-8-ricarkol@google.com
> > [2] https://lore.kernel.org/all/YzsC4ibDqGh5qaP9@google.com
> 
> Oh I misunderstood your suggestion above.
> 
> I should make KVM_SEV_INIT happen from kvm_arch_vm_post_create(), and
> add VM_MODE_PXXV48_4K_SEV so the C-bit is set up inside
> kvm_init_vm_address_properties().
> 
> Inside of vm_sev_create_with_one_vcpu() I use
> __vm_create_with_vcpus(), then call KVM_SEV_LAUNCH_FINISH.
> 
> Is that correct?

Yep, I'm pretty sure that was what I was thinking.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-27 17:59                   ` Sean Christopherson
@ 2022-10-27 18:34                     ` Peter Gonda
  0 siblings, 0 replies; 24+ messages in thread
From: Peter Gonda @ 2022-10-27 18:34 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, marcorr, michael.roth, thomas.lendacky, joro,
	mizhang, pbonzini, andrew.jones

On Thu, Oct 27, 2022 at 11:59 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Oct 27, 2022, Peter Gonda wrote:
> > On Wed, Oct 19, 2022 at 10:34 AM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > On Tue, Oct 18, 2022, Peter Gonda wrote:
> > > > On Mon, Oct 17, 2022 at 2:34 PM Sean Christopherson <seanjc@google.com> wrote:
> > > > >
> > > > > On Mon, Oct 17, 2022, Peter Gonda wrote:
> > > > > > I think this means we don't need to add VM_MODE_PXXV48_4K_SEV since we
> > > > > > can set up the c-bit from inside of vm_sev_create_*(), thoughts?
> > > > >
> > > > > Configuring the C-bit inside vm_sev_create_*() won't work (at least not well).
> > > > > The C-bit needs to be known before kvm_vm_elf_load(), i.e. can't be handled after
> > > > > __vm_create(), and needs to be tracked inside the VM, i.e. can't be handled before
> > > > > __vm_create().
> > > > >
> > > > > The proposed kvm_init_vm_address_properties() seems like the best fit since the
> > > > > C-bit (and TDX's S-bit) is stolen from GPA space, i.e. directly affects the other
> > > > > values computed in that path.
> > > > >
> > > > > As for the kvm_vm_arch allocation ugliness, when we talked off-list I didn't
> > > > > consider the need to allocate in kvm_init_vm_address_properties().  That's quite
> > > > > gross, especially since the pointer will be larger than the thing being allocated.
> > > > >
> > > > > With that in mind, adding .../include/<arch>/kvm_util.h so that "struct kvm_vm_arch"
> > > > > can be defined and referenced directly doesn't seem so bad.  Having to stub in the
> > > > > struct for the other architectures is annoying, but not the end of the world.
> > > >
> > > > I'll make "struct kvm_vm_arch" a non-pointer member, so I'll add the
> > > > /include/<arch>/kvm_util.h files.
> > > >
> > > > But I think we do not need VM_MODE_PXXV48_4K_SEV, see:
> > >
> > > I really don't want to open code __vm_create() with a slight tweak.  E.g. the
> > > below code will be broken by Ricardo's series, which moves memslot0 out of
> > > ____vm_create()[1], and kinda sorta be broken again by Vishal's series, which
> > > adds an arch hook to __vm_create()[2].
> > >
> > > AFAICT, there is no requirement that KVM_SEV_INIT be called before computing the
> > > C-Bit, the only requirement is that KVM_SEV_INIT is called before adding vCPUs.
> > >
> > > [1] https://lore.kernel.org/all/20221017195834.2295901-8-ricarkol@google.com
> > > [2] https://lore.kernel.org/all/YzsC4ibDqGh5qaP9@google.com
> >
> > Oh I misunderstood your suggestion above.
> >
> > I should make KVM_SEV_INIT happen from kvm_arch_vm_post_create(), and
> > add VM_MODE_PXXV48_4K_SEV so the C-bit is set up inside
> > kvm_init_vm_address_properties().
> >
> > Inside of vm_sev_create_with_one_vcpu() I use
> > __vm_create_with_vcpus(), then call KVM_SEV_LAUNCH_FINISH.
> >
> > Is that correct?
>
> Yep, I'm pretty sure that was what I was thinking.

Duh! Should have realized that at first. You said you were going to
rebase your selftest series. I'll follow up with a V6 on top of that
rebase.

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2022-10-27 18:34 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-29 17:10 [V4 0/8] KVM: selftests: Add simple SEV test Peter Gonda
2022-08-29 17:10 ` [V4 1/8] KVM: selftests: move vm_phy_pages_alloc() earlier in file Peter Gonda
2022-10-06 17:35   ` Sean Christopherson
2022-08-29 17:10 ` [V4 2/8] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
2022-08-29 17:10 ` [V4 3/8] KVM: selftests: add hooks for managing encrypted guest memory Peter Gonda
2022-10-06 17:48   ` Sean Christopherson
2022-10-11 17:38     ` Peter Gonda
2022-08-29 17:10 ` [V4 4/8] KVM: selftests: handle encryption bits in page tables Peter Gonda
2022-10-06 17:34   ` Sean Christopherson
2022-08-29 17:10 ` [V4 5/8] KVM: selftests: add support for encrypted vm_vaddr_* allocations Peter Gonda
2022-08-29 17:10 ` [V4 6/8] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
2022-10-06 18:25   ` Sean Christopherson
2022-10-17 16:32     ` Peter Gonda
2022-10-17 18:04       ` Sean Christopherson
2022-10-17 18:25         ` Peter Gonda
2022-10-17 20:34           ` Sean Christopherson
2022-10-18 14:59             ` Peter Gonda
2022-10-19 16:34               ` Sean Christopherson
2022-10-27 16:24                 ` Peter Gonda
2022-10-27 17:59                   ` Sean Christopherson
2022-10-27 18:34                     ` Peter Gonda
2022-08-29 17:10 ` [V4 7/8] KVM: selftests: Update ucall pool to allocate from shared memory Peter Gonda
2022-08-29 17:10 ` [V4 8/8] KVM: selftests: Add simple sev vm testing Peter Gonda
2022-10-06 18:31   ` Sean Christopherson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).