linux-kernel.vger.kernel.org archive mirror
* [PATCH V5 0/7] KVM: selftests: Add simple SEV test
@ 2022-10-18 20:58 Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 1/7] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: Peter Gonda @ 2022-10-18 20:58 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, pgonda, vannapurve

This patch series continues the work Michael Roth has done to support
SEV guests in selftests. It builds on top of the work Sean
Christopherson has sent to support ucalls from SEV guests, along with a
very simple version of the SEV selftests Michael originally proposed.

V5
 * Rebase onto seanjc@'s latest ucall pool series.
 * More review changes based on seanjc@'s feedback:
 ** Use "protected" instead of "encrypted" outside of SEV-specific files.
 ** Swap the memcrypt struct for an arch-specific kvm_vm_arch struct.
 ** Make the protected page table data agnostic of SEV's address bit
    stealing specifics.
 ** Further clean up the SEV library down to just
    vm_sev_create_with_one_vcpu().
 * Due to the large changes, moved more authorship from mroth@ to pgonda@
   and gave Originally-by tags to mroth@, as suggested by seanjc@.

V4
 * Rebase on top of seanjc@'s latest ucall pool series:
   https://lore.kernel.org/linux-arm-kernel/20220825232522.3997340-8-seanjc@google.com/
 * Fixed up review comments from seanjc@.
 * Switched authorship on 2 patches because of significant changes and
   added Michael as Suggested-by or Originally-by.

V3
 * Addressed more of andrew.jones@'s comments on the ucall patches.
 * Fixed the build on non-x86 arches.

V2
 * Dropped the RFC tag.
 * Correctly separated Sean's ucall patches into 2 as originally
   intended.
 * Addressed andrew.jones@'s comments on the ucall patches.
 * Fixed ucall pool usage to work on other arches.

V1
 * https://lore.kernel.org/all/20220715192956.1873315-1-pgonda@google.com/

Michael Roth (2):
  KVM: selftests: sparsebit: add const where appropriate
  KVM: selftests: add support for protected vm_vaddr_* allocations

Peter Gonda (5):
  KVM: selftests: add hooks for managing protected guest memory
  KVM: selftests: handle protected bits in page tables
  KVM: selftests: add library for creating/interacting with SEV guests
  KVM: selftests: Update ucall pool to allocate from shared memory
  KVM: selftests: Add simple sev vm testing

 tools/arch/arm64/include/asm/kvm_host.h       |   7 +
 tools/arch/riscv/include/asm/kvm_host.h       |   7 +
 tools/arch/s390/include/asm/kvm_host.h        |   7 +
 tools/arch/x86/include/asm/kvm_host.h         |  15 ++
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   2 +
 .../selftests/kvm/include/kvm_util_base.h     |  49 +++-
 .../testing/selftests/kvm/include/sparsebit.h |  36 +--
 .../selftests/kvm/include/x86_64/sev.h        |  22 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    |  63 ++++-
 tools/testing/selftests/kvm/lib/sparsebit.c   |  48 ++--
 .../testing/selftests/kvm/lib/ucall_common.c  |   2 +-
 .../selftests/kvm/lib/x86_64/processor.c      |  23 +-
 tools/testing/selftests/kvm/lib/x86_64/sev.c  | 243 ++++++++++++++++++
 .../selftests/kvm/x86_64/sev_all_boot_test.c  |  84 ++++++
 15 files changed, 549 insertions(+), 60 deletions(-)
 create mode 100644 tools/arch/arm64/include/asm/kvm_host.h
 create mode 100644 tools/arch/riscv/include/asm/kvm_host.h
 create mode 100644 tools/arch/s390/include/asm/kvm_host.h
 create mode 100644 tools/arch/x86/include/asm/kvm_host.h
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c

-- 
2.38.0.413.g74048e4d9e-goog



* [PATCH V5 1/7] KVM: selftests: sparsebit: add const where appropriate
  2022-10-18 20:58 [PATCH V5 0/7] KVM: selftests: Add simple SEV test Peter Gonda
@ 2022-10-18 20:58 ` Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 2/7] KVM: selftests: add hooks for managing protected guest memory Peter Gonda
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Peter Gonda @ 2022-10-18 20:58 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, pgonda, vannapurve

From: Michael Roth <michael.roth@amd.com>

Subsequent patches will introduce an encryption bitmap in kvm_util that
tests will want to access in a read-only fashion, via a const
sparsebit *. To avoid warnings or the need to add casts everywhere, add
const to the various sparsebit functions that are applicable for
read-only usage of sparsebit.
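
For illustration only (not part of this patch), a hypothetical
read-only consumer would then compile cleanly against the
const-qualified prototypes, no casts needed:

  /* Hypothetical helper: query a read-only encryption bitmap. */
  static bool range_is_encrypted(const struct sparsebit *enc_bitmap,
                                 sparsebit_idx_t first_pg,
                                 sparsebit_num_t nr_pgs)
  {
          return sparsebit_is_set_num(enc_bitmap, first_pg, nr_pgs);
  }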

Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 .../testing/selftests/kvm/include/sparsebit.h | 36 +++++++-------
 tools/testing/selftests/kvm/lib/sparsebit.c   | 48 +++++++++----------
 2 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/sparsebit.h b/tools/testing/selftests/kvm/include/sparsebit.h
index 12a9a4b9cead..fb5170d57fcb 100644
--- a/tools/testing/selftests/kvm/include/sparsebit.h
+++ b/tools/testing/selftests/kvm/include/sparsebit.h
@@ -30,26 +30,26 @@ typedef uint64_t sparsebit_num_t;
 
 struct sparsebit *sparsebit_alloc(void);
 void sparsebit_free(struct sparsebit **sbitp);
-void sparsebit_copy(struct sparsebit *dstp, struct sparsebit *src);
+void sparsebit_copy(struct sparsebit *dstp, const struct sparsebit *src);
 
-bool sparsebit_is_set(struct sparsebit *sbit, sparsebit_idx_t idx);
-bool sparsebit_is_set_num(struct sparsebit *sbit,
+bool sparsebit_is_set(const struct sparsebit *sbit, sparsebit_idx_t idx);
+bool sparsebit_is_set_num(const struct sparsebit *sbit,
 			  sparsebit_idx_t idx, sparsebit_num_t num);
-bool sparsebit_is_clear(struct sparsebit *sbit, sparsebit_idx_t idx);
-bool sparsebit_is_clear_num(struct sparsebit *sbit,
+bool sparsebit_is_clear(const struct sparsebit *sbit, sparsebit_idx_t idx);
+bool sparsebit_is_clear_num(const struct sparsebit *sbit,
 			    sparsebit_idx_t idx, sparsebit_num_t num);
-sparsebit_num_t sparsebit_num_set(struct sparsebit *sbit);
-bool sparsebit_any_set(struct sparsebit *sbit);
-bool sparsebit_any_clear(struct sparsebit *sbit);
-bool sparsebit_all_set(struct sparsebit *sbit);
-bool sparsebit_all_clear(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_first_set(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_first_clear(struct sparsebit *sbit);
-sparsebit_idx_t sparsebit_next_set(struct sparsebit *sbit, sparsebit_idx_t prev);
-sparsebit_idx_t sparsebit_next_clear(struct sparsebit *sbit, sparsebit_idx_t prev);
-sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *sbit,
+sparsebit_num_t sparsebit_num_set(const struct sparsebit *sbit);
+bool sparsebit_any_set(const struct sparsebit *sbit);
+bool sparsebit_any_clear(const struct sparsebit *sbit);
+bool sparsebit_all_set(const struct sparsebit *sbit);
+bool sparsebit_all_clear(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_first_set(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *sbit);
+sparsebit_idx_t sparsebit_next_set(const struct sparsebit *sbit, sparsebit_idx_t prev);
+sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *sbit, sparsebit_idx_t prev);
+sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *sbit,
 				       sparsebit_idx_t start, sparsebit_num_t num);
-sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *sbit,
+sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *sbit,
 					 sparsebit_idx_t start, sparsebit_num_t num);
 
 void sparsebit_set(struct sparsebit *sbitp, sparsebit_idx_t idx);
@@ -62,9 +62,9 @@ void sparsebit_clear_num(struct sparsebit *sbitp,
 			 sparsebit_idx_t start, sparsebit_num_t num);
 void sparsebit_clear_all(struct sparsebit *sbitp);
 
-void sparsebit_dump(FILE *stream, struct sparsebit *sbit,
+void sparsebit_dump(FILE *stream, const struct sparsebit *sbit,
 		    unsigned int indent);
-void sparsebit_validate_internal(struct sparsebit *sbit);
+void sparsebit_validate_internal(const struct sparsebit *sbit);
 
 #ifdef __cplusplus
 }
diff --git a/tools/testing/selftests/kvm/lib/sparsebit.c b/tools/testing/selftests/kvm/lib/sparsebit.c
index 50e0cf41a7dd..6777a5b1fbd2 100644
--- a/tools/testing/selftests/kvm/lib/sparsebit.c
+++ b/tools/testing/selftests/kvm/lib/sparsebit.c
@@ -202,7 +202,7 @@ static sparsebit_num_t node_num_set(struct node *nodep)
 /* Returns a pointer to the node that describes the
  * lowest bit index.
  */
-static struct node *node_first(struct sparsebit *s)
+static struct node *node_first(const struct sparsebit *s)
 {
 	struct node *nodep;
 
@@ -216,7 +216,7 @@ static struct node *node_first(struct sparsebit *s)
  * lowest bit index > the index of the node pointed to by np.
  * Returns NULL if no node with a higher index exists.
  */
-static struct node *node_next(struct sparsebit *s, struct node *np)
+static struct node *node_next(const struct sparsebit *s, struct node *np)
 {
 	struct node *nodep = np;
 
@@ -244,7 +244,7 @@ static struct node *node_next(struct sparsebit *s, struct node *np)
  * highest index < the index of the node pointed to by np.
  * Returns NULL if no node with a lower index exists.
  */
-static struct node *node_prev(struct sparsebit *s, struct node *np)
+static struct node *node_prev(const struct sparsebit *s, struct node *np)
 {
 	struct node *nodep = np;
 
@@ -273,7 +273,7 @@ static struct node *node_prev(struct sparsebit *s, struct node *np)
  * subtree and duplicates the bit settings to the newly allocated nodes.
  * Returns the newly allocated copy of subtree.
  */
-static struct node *node_copy_subtree(struct node *subtree)
+static struct node *node_copy_subtree(const struct node *subtree)
 {
 	struct node *root;
 
@@ -307,7 +307,7 @@ static struct node *node_copy_subtree(struct node *subtree)
  * index is within the bits described by the mask bits or the number of
  * contiguous bits set after the mask.  Returns NULL if there is no such node.
  */
-static struct node *node_find(struct sparsebit *s, sparsebit_idx_t idx)
+static struct node *node_find(const struct sparsebit *s, sparsebit_idx_t idx)
 {
 	struct node *nodep;
 
@@ -393,7 +393,7 @@ static struct node *node_add(struct sparsebit *s, sparsebit_idx_t idx)
 }
 
 /* Returns whether all the bits in the sparsebit array are set.  */
-bool sparsebit_all_set(struct sparsebit *s)
+bool sparsebit_all_set(const struct sparsebit *s)
 {
 	/*
 	 * If any nodes there must be at least one bit set.  Only case
@@ -776,7 +776,7 @@ static void node_reduce(struct sparsebit *s, struct node *nodep)
 /* Returns whether the bit at the index given by idx, within the
  * sparsebit array is set or not.
  */
-bool sparsebit_is_set(struct sparsebit *s, sparsebit_idx_t idx)
+bool sparsebit_is_set(const struct sparsebit *s, sparsebit_idx_t idx)
 {
 	struct node *nodep;
 
@@ -922,7 +922,7 @@ static inline sparsebit_idx_t node_first_clear(struct node *nodep, int start)
  * used by test cases after they detect an unexpected condition, as a means
  * to capture diagnostic information.
  */
-static void sparsebit_dump_internal(FILE *stream, struct sparsebit *s,
+static void sparsebit_dump_internal(FILE *stream, const struct sparsebit *s,
 	unsigned int indent)
 {
 	/* Dump the contents of s */
@@ -970,7 +970,7 @@ void sparsebit_free(struct sparsebit **sbitp)
  * sparsebit_alloc().  It can though already have bits set, which
  * if different from src will be cleared.
  */
-void sparsebit_copy(struct sparsebit *d, struct sparsebit *s)
+void sparsebit_copy(struct sparsebit *d, const struct sparsebit *s)
 {
 	/* First clear any bits already set in the destination */
 	sparsebit_clear_all(d);
@@ -982,7 +982,7 @@ void sparsebit_copy(struct sparsebit *d, struct sparsebit *s)
 }
 
 /* Returns whether num consecutive bits starting at idx are all set.  */
-bool sparsebit_is_set_num(struct sparsebit *s,
+bool sparsebit_is_set_num(const struct sparsebit *s,
 	sparsebit_idx_t idx, sparsebit_num_t num)
 {
 	sparsebit_idx_t next_cleared;
@@ -1006,14 +1006,14 @@ bool sparsebit_is_set_num(struct sparsebit *s,
 }
 
 /* Returns whether the bit at the index given by idx.  */
-bool sparsebit_is_clear(struct sparsebit *s,
+bool sparsebit_is_clear(const struct sparsebit *s,
 	sparsebit_idx_t idx)
 {
 	return !sparsebit_is_set(s, idx);
 }
 
 /* Returns whether num consecutive bits starting at idx are all cleared.  */
-bool sparsebit_is_clear_num(struct sparsebit *s,
+bool sparsebit_is_clear_num(const struct sparsebit *s,
 	sparsebit_idx_t idx, sparsebit_num_t num)
 {
 	sparsebit_idx_t next_set;
@@ -1042,13 +1042,13 @@ bool sparsebit_is_clear_num(struct sparsebit *s,
  * value.  Use sparsebit_any_set(), instead of sparsebit_num_set() > 0,
  * to determine if the sparsebit array has any bits set.
  */
-sparsebit_num_t sparsebit_num_set(struct sparsebit *s)
+sparsebit_num_t sparsebit_num_set(const struct sparsebit *s)
 {
 	return s->num_set;
 }
 
 /* Returns whether any bit is set in the sparsebit array.  */
-bool sparsebit_any_set(struct sparsebit *s)
+bool sparsebit_any_set(const struct sparsebit *s)
 {
 	/*
 	 * Nodes only describe set bits.  If any nodes then there
@@ -1071,20 +1071,20 @@ bool sparsebit_any_set(struct sparsebit *s)
 }
 
 /* Returns whether all the bits in the sparsebit array are cleared.  */
-bool sparsebit_all_clear(struct sparsebit *s)
+bool sparsebit_all_clear(const struct sparsebit *s)
 {
 	return !sparsebit_any_set(s);
 }
 
 /* Returns whether all the bits in the sparsebit array are set.  */
-bool sparsebit_any_clear(struct sparsebit *s)
+bool sparsebit_any_clear(const struct sparsebit *s)
 {
 	return !sparsebit_all_set(s);
 }
 
 /* Returns the index of the first set bit.  Abort if no bits are set.
  */
-sparsebit_idx_t sparsebit_first_set(struct sparsebit *s)
+sparsebit_idx_t sparsebit_first_set(const struct sparsebit *s)
 {
 	struct node *nodep;
 
@@ -1098,7 +1098,7 @@ sparsebit_idx_t sparsebit_first_set(struct sparsebit *s)
 /* Returns the index of the first cleared bit.  Abort if
  * no bits are cleared.
  */
-sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s)
+sparsebit_idx_t sparsebit_first_clear(const struct sparsebit *s)
 {
 	struct node *nodep1, *nodep2;
 
@@ -1152,7 +1152,7 @@ sparsebit_idx_t sparsebit_first_clear(struct sparsebit *s)
 /* Returns index of next bit set within s after the index given by prev.
  * Returns 0 if there are no bits after prev that are set.
  */
-sparsebit_idx_t sparsebit_next_set(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_set(const struct sparsebit *s,
 	sparsebit_idx_t prev)
 {
 	sparsebit_idx_t lowest_possible = prev + 1;
@@ -1245,7 +1245,7 @@ sparsebit_idx_t sparsebit_next_set(struct sparsebit *s,
 /* Returns index of next bit cleared within s after the index given by prev.
  * Returns 0 if there are no bits after prev that are cleared.
  */
-sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear(const struct sparsebit *s,
 	sparsebit_idx_t prev)
 {
 	sparsebit_idx_t lowest_possible = prev + 1;
@@ -1301,7 +1301,7 @@ sparsebit_idx_t sparsebit_next_clear(struct sparsebit *s,
  * and returns the index of the first sequence of num consecutively set
  * bits.  Returns a value of 0 of no such sequence exists.
  */
-sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_set_num(const struct sparsebit *s,
 	sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1336,7 +1336,7 @@ sparsebit_idx_t sparsebit_next_set_num(struct sparsebit *s,
  * and returns the index of the first sequence of num consecutively cleared
  * bits.  Returns a value of 0 of no such sequence exists.
  */
-sparsebit_idx_t sparsebit_next_clear_num(struct sparsebit *s,
+sparsebit_idx_t sparsebit_next_clear_num(const struct sparsebit *s,
 	sparsebit_idx_t start, sparsebit_num_t num)
 {
 	sparsebit_idx_t idx;
@@ -1584,7 +1584,7 @@ static size_t display_range(FILE *stream, sparsebit_idx_t low,
  * contiguous bits.  This is done because '-' is used to specify command-line
  * options, and sometimes ranges are specified as command-line arguments.
  */
-void sparsebit_dump(FILE *stream, struct sparsebit *s,
+void sparsebit_dump(FILE *stream, const struct sparsebit *s,
 	unsigned int indent)
 {
 	size_t current_line_len = 0;
@@ -1682,7 +1682,7 @@ void sparsebit_dump(FILE *stream, struct sparsebit *s,
  * s.  On error, diagnostic information is printed to stderr and
  * abort is called.
  */
-void sparsebit_validate_internal(struct sparsebit *s)
+void sparsebit_validate_internal(const struct sparsebit *s)
 {
 	bool error_detected = false;
 	struct node *nodep, *prev = NULL;
-- 
2.38.0.413.g74048e4d9e-goog



* [PATCH V5 2/7] KVM: selftests: add hooks for managing protected guest memory
  2022-10-18 20:58 [PATCH V5 0/7] KVM: selftests: Add simple SEV test Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 1/7] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
@ 2022-10-18 20:58 ` Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 3/7] KVM: selftests: handle protected bits in page tables Peter Gonda
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Peter Gonda @ 2022-10-18 20:58 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, pgonda, vannapurve

Add kvm_vm.protected metadata. A protected VM's memory, and potentially
its registers and other state, may not be accessible to KVM. This,
combined with a new protected_phy_pages bitmap, will allow the selftests
to check whether a given page is accessible.
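
As a rough usage sketch (the helper below is hypothetical; later
patches add the real query), a test could consult the bitmap before
touching guest memory from the host:

  /* Hypothetical sketch: only read pages the host can actually see. */
  static bool page_is_host_accessible(struct kvm_vm *vm,
                                      struct userspace_mem_region *region,
                                      vm_paddr_t paddr)
  {
          if (!vm->protected)
                  return true;

          return !sparsebit_is_set(region->protected_phy_pages,
                                   paddr >> vm->page_shift);
  }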

Originally-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h        | 14 ++++++++++++--
 tools/testing/selftests/kvm/lib/kvm_util.c       | 16 +++++++++++++---
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index c14d531a942a..625f13cf3b58 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -33,6 +33,7 @@ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
 struct userspace_mem_region {
 	struct kvm_userspace_memory_region region;
 	struct sparsebit *unused_phy_pages;
+	struct sparsebit *protected_phy_pages;
 	int fd;
 	off_t offset;
 	void *host_mem;
@@ -90,6 +91,9 @@ struct kvm_vm {
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
 
+	/* VM protection enabled: SEV, etc*/
+	bool protected;
+
 	/* Cache of information for binary stats interface */
 	int stats_fd;
 	struct kvm_stats_header stats_header;
@@ -638,10 +642,16 @@ const char *exit_reason_str(unsigned int exit_reason);
 
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot);
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot);
+vm_paddr_t _vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			      vm_paddr_t paddr_min, uint32_t memslot, bool protected);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
+static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+					    vm_paddr_t paddr_min, uint32_t memslot)
+{
+	return _vm_phy_pages_alloc(vm, num, paddr_min, memslot, vm->protected);
+}
+
 /*
  * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
  * loads the test binary into guest memory and creates an IRQ chip (x86 only).
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f12ebd27f6e5..0ce5cdb52f0c 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -595,6 +595,7 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
 	vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region->region);
 
 	sparsebit_free(&region->unused_phy_pages);
+	sparsebit_free(&region->protected_phy_pages);
 	ret = munmap(region->mmap_start, region->mmap_size);
 	TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
 
@@ -935,6 +936,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	}
 
 	region->unused_phy_pages = sparsebit_alloc();
+	region->protected_phy_pages = sparsebit_alloc();
 	sparsebit_set_num(region->unused_phy_pages,
 		guest_paddr >> vm->page_shift, npages);
 	region->region.slot = slot;
@@ -1711,6 +1713,10 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 			region->host_mem);
 		fprintf(stream, "%*sunused_phy_pages: ", indent + 2, "");
 		sparsebit_dump(stream, region->unused_phy_pages, 0);
+		if (vm->protected) {
+			fprintf(stream, "%*sprotected_phy_pages: ", indent + 2, "");
+			sparsebit_dump(stream, region->protected_phy_pages, 0);
+		}
 	}
 	fprintf(stream, "%*sMapped Virtual Pages:\n", indent, "");
 	sparsebit_dump(stream, vm->vpages_mapped, indent + 2);
@@ -1807,8 +1813,9 @@ const char *exit_reason_str(unsigned int exit_reason)
  * and their base address is returned. A TEST_ASSERT failure occurs if
  * not enough pages are available at or above paddr_min.
  */
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot)
+vm_paddr_t _vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+			       vm_paddr_t paddr_min, uint32_t memslot,
+			       bool protected)
 {
 	struct userspace_mem_region *region;
 	sparsebit_idx_t pg, base;
@@ -1841,8 +1848,11 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 		abort();
 	}
 
-	for (pg = base; pg < base + num; ++pg)
+	for (pg = base; pg < base + num; ++pg) {
 		sparsebit_clear(region->unused_phy_pages, pg);
+		if (protected)
+			sparsebit_set(region->protected_phy_pages, pg);
+	}
 
 	return base * vm->page_size;
 }
-- 
2.38.0.413.g74048e4d9e-goog



* [PATCH V5 3/7] KVM: selftests: handle protected bits in page tables
  2022-10-18 20:58 [PATCH V5 0/7] KVM: selftests: Add simple SEV test Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 1/7] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 2/7] KVM: selftests: add hooks for managing protected guest memory Peter Gonda
@ 2022-10-18 20:58 ` Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 4/7] KVM: selftests: add support for protected vm_vaddr_* allocations Peter Gonda
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Peter Gonda @ 2022-10-18 20:58 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, pgonda, vannapurve

SEV guests rely on an encryption bit which resides within the range that
current code treats as address bits. Guest code will expect these bits
to be set appropriately in their page tables, whereas the rest of the
kvm_util functions will generally expect these bits to not be present.
Introduce pte_me_mask and struct kvm_vm_arch to allow for arch-specific
address tagging. Currently this just adds x86 c_bit and s_bit support
for SEV and TDX.
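
For SEV, the encryption bit position is reported architecturally in
CPUID 0x8000001F[EBX] bits 5:0. A rough sketch of deriving the masks
(the actual setup helper, and the CPUID_* defines used here, land later
in this series):

  uint32_t eax, ebx, ecx, edx;

  cpuid(CPUID_MEM_ENC_LEAF, &eax, &ebx, &ecx, &edx);
  /* The C-bit is typically bit 47, so shift as a 64-bit value. */
  vm->arch.c_bit = 1ULL << (ebx & CPUID_EBX_CBIT_MASK);
  /* ORed into upper-level PTEs so nested page tables are encrypted. */
  vm->arch.pte_me_mask = vm->arch.c_bit | vm->arch.s_bit;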

Originally-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 tools/arch/arm64/include/asm/kvm_host.h       |  7 ++++++
 tools/arch/riscv/include/asm/kvm_host.h       |  7 ++++++
 tools/arch/s390/include/asm/kvm_host.h        |  7 ++++++
 tools/arch/x86/include/asm/kvm_host.h         | 14 ++++++++++++
 .../selftests/kvm/include/kvm_util_base.h     | 19 ++++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 22 ++++++++++++++++++-
 .../selftests/kvm/lib/x86_64/processor.c      | 19 +++++++++++++---
 7 files changed, 91 insertions(+), 4 deletions(-)
 create mode 100644 tools/arch/arm64/include/asm/kvm_host.h
 create mode 100644 tools/arch/riscv/include/asm/kvm_host.h
 create mode 100644 tools/arch/s390/include/asm/kvm_host.h
 create mode 100644 tools/arch/x86/include/asm/kvm_host.h

diff --git a/tools/arch/arm64/include/asm/kvm_host.h b/tools/arch/arm64/include/asm/kvm_host.h
new file mode 100644
index 000000000000..218f5cdf0d86
--- /dev/null
+++ b/tools/arch/arm64/include/asm/kvm_host.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_ARM64_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_ARM64_KVM_HOST_H
+
+struct kvm_vm_arch {};
+
+#endif  // _TOOLS_LINUX_ASM_ARM64_KVM_HOST_H
diff --git a/tools/arch/riscv/include/asm/kvm_host.h b/tools/arch/riscv/include/asm/kvm_host.h
new file mode 100644
index 000000000000..c8280d5659ce
--- /dev/null
+++ b/tools/arch/riscv/include/asm/kvm_host.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_RISCV_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_RISCV_KVM_HOST_H
+
+struct kvm_vm_arch {};
+
+#endif  // _TOOLS_LINUX_ASM_RISCV_KVM_HOST_H
diff --git a/tools/arch/s390/include/asm/kvm_host.h b/tools/arch/s390/include/asm/kvm_host.h
new file mode 100644
index 000000000000..4c4c1c1e4bf8
--- /dev/null
+++ b/tools/arch/s390/include/asm/kvm_host.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_S390_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_S390_KVM_HOST_H
+
+struct kvm_vm_arch {};
+
+#endif  // _TOOLS_LINUX_ASM_S390_KVM_HOST_H
diff --git a/tools/arch/x86/include/asm/kvm_host.h b/tools/arch/x86/include/asm/kvm_host.h
new file mode 100644
index 000000000000..03153c18c747
--- /dev/null
+++ b/tools/arch/x86/include/asm/kvm_host.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _TOOLS_LINUX_ASM_X86_KVM_HOST_H
+#define _TOOLS_LINUX_ASM_X86_KVM_HOST_H
+
+#include <stdbool.h>
+#include <stdint.h>
+
+struct kvm_vm_arch {
+	uint64_t pte_me_mask;
+	uint64_t c_bit;
+	uint64_t s_bit;
+};
+
+#endif  // _TOOLS_LINUX_ASM_X86_KVM_HOST_H
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 625f13cf3b58..9aacc6110d09 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -17,6 +17,8 @@
 #include "linux/rbtree.h"
 
 #include <asm/atomic.h>
+#include <asm/kvm.h>
+#include <asm/kvm_host.h>
 
 #include <sys/ioctl.h>
 
@@ -90,6 +92,9 @@ struct kvm_vm {
 	vm_vaddr_t idt;
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
+	uint64_t gpa_protected_mask;
+
+	struct kvm_vm_arch arch;
 
 	/* VM protection enabled: SEV, etc*/
 	bool protected;
@@ -127,6 +132,7 @@ enum vm_guest_mode {
 	VM_MODE_P40V48_16K,
 	VM_MODE_P40V48_64K,
 	VM_MODE_PXXV48_4K,	/* For 48bits VA but ANY bits PA */
+	VM_MODE_PXXV48_4K_SEV,	/* For 48bits VA but ANY bits PA */
 	VM_MODE_P47V64_4K,
 	VM_MODE_P44V64_4K,
 	VM_MODE_P36V48_4K,
@@ -400,6 +406,17 @@ void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
 vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
 void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
 
+
+static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
+{
+	return gpa & ~vm->gpa_protected_mask;
+}
+
+static inline vm_paddr_t vm_tag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
+{
+	return gpa | vm->gpa_protected_mask;
+}
+
 void vcpu_run(struct kvm_vcpu *vcpu);
 int _vcpu_run(struct kvm_vcpu *vcpu);
 
@@ -863,4 +880,6 @@ static inline int __vm_disable_nx_huge_pages(struct kvm_vm *vm)
 	return __vm_enable_cap(vm, KVM_CAP_VM_DISABLE_NX_HUGE_PAGES, 0);
 }
 
+bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr);
+
 #endif /* SELFTEST_KVM_UTIL_BASE_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 0ce5cdb52f0c..f5f18a802434 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1363,9 +1363,10 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
  * address providing the memory to the vm physical address is returned.
  * A TEST_ASSERT failure occurs if no region containing gpa exists.
  */
-void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
+void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa_tagged)
 {
 	struct userspace_mem_region *region;
+	vm_paddr_t gpa = vm_untag_gpa(vm, gpa_tagged);
 
 	region = userspace_mem_region_find(vm, gpa, gpa);
 	if (!region) {
@@ -2042,3 +2043,22 @@ void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
 		break;
 	}
 }
+
+bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr)
+{
+	sparsebit_idx_t pg = 0;
+	struct userspace_mem_region *region;
+
+	if (!vm->protected)
+		return false;
+
+	region = userspace_mem_region_find(vm, paddr, paddr);
+	if (!region) {
+		TEST_FAIL("No vm physical memory at 0x%lx", paddr);
+		return false;
+	}
+
+	pg = paddr >> vm->page_shift;
+	return sparsebit_is_set(region->protected_phy_pages, pg);
+
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 39c4409ef56a..377e342ecff7 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -127,6 +127,8 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
 	/* If needed, create page map l4 table. */
 	if (!vm->pgd_created) {
 		vm->pgd = vm_alloc_page_table(vm);
+		vm->pgd |= vm->arch.pte_me_mask;
+
 		vm->pgd_created = true;
 	}
 }
@@ -148,13 +150,17 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
 				       int target_level)
 {
 	uint64_t *pte = virt_get_pte(vm, pt_pfn, vaddr, current_level);
+	uint64_t paddr_raw = vm_untag_gpa(vm, paddr);
 
 	if (!(*pte & PTE_PRESENT_MASK)) {
 		*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK;
 		if (current_level == target_level)
-			*pte |= PTE_LARGE_MASK | (paddr & PHYSICAL_PAGE_MASK);
-		else
+			*pte |= PTE_LARGE_MASK | (paddr_raw & PHYSICAL_PAGE_MASK);
+		else {
 			*pte |= vm_alloc_page_table(vm) & PHYSICAL_PAGE_MASK;
+			*pte |= vm->arch.pte_me_mask;
+		}
+
 	} else {
 		/*
 		 * Entry already present.  Assert that the caller doesn't want
@@ -192,6 +198,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 		    "Physical address beyond maximum supported,\n"
 		    "  paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
 		    paddr, vm->max_gfn, vm->page_size);
+	TEST_ASSERT(vm_untag_gpa(vm, paddr) == paddr,
+		    "Unexpected bits in paddr: %lx", paddr);
 
 	/*
 	 * Allocate upper level page tables, if not already present.  Return
@@ -215,6 +223,11 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
 	TEST_ASSERT(!(*pte & PTE_PRESENT_MASK),
 		    "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
 	*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
+
+	if (vm_is_gpa_protected(vm, paddr))
+		*pte |= vm->arch.c_bit;
+	else
+		*pte |= vm->arch.s_bit;
 }
 
 void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
@@ -542,7 +555,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	if (!(pte[index[0]] & PTE_PRESENT_MASK))
 		goto unmapped_gva;
 
-	return (PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & ~PAGE_MASK);
+	return vm_untag_gpa(vm, PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & ~PAGE_MASK);
 
 unmapped_gva:
 	TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
-- 
2.38.0.413.g74048e4d9e-goog



* [PATCH V5 4/7] KVM: selftests: add support for protected vm_vaddr_* allocations
  2022-10-18 20:58 [PATCH V5 0/7] KVM: selftests: Add simple SEV test Peter Gonda
                   ` (2 preceding siblings ...)
  2022-10-18 20:58 ` [PATCH V5 3/7] KVM: selftests: handle protected bits in page tables Peter Gonda
@ 2022-10-18 20:58 ` Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Peter Gonda @ 2022-10-18 20:58 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, pgonda, vannapurve

From: Michael Roth <michael.roth@amd.com>

Test programs may wish to allocate shared vaddrs for things like
sharing memory with the guest. Since protected VMs will have their
memory encrypted by default, an interface is needed to explicitly
request shared pages.

Implement this by splitting the common code out of vm_vaddr_alloc()
and introducing a new vm_vaddr_alloc_shared().
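
Usage sketch (hypothetical test snippet; allocation must happen while
the host can still write the guest's memory, i.e. before launch):

  /* A buffer both the host and an encrypted guest can access. */
  vm_vaddr_t gva = vm_vaddr_alloc_shared(vm, vm->page_size,
                                         KVM_UTIL_MIN_VADDR);
  char *hva = addr_gva2hva(vm, gva);

  memset(hva, 0, vm->page_size);  /* plaintext, visible to the guest */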

Signed-off-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h     |  1 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 21 +++++++++++++++----
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 9aacc6110d09..4224026fbe25 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -396,6 +396,7 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f5f18a802434..d753345993d6 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1231,12 +1231,13 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 }
 
 /*
- * VM Virtual Address Allocate
+ * VM Virtual Address Allocate Shared/Encrypted
  *
  * Input Args:
  *   vm - Virtual Machine
  *   sz - Size in bytes
  *   vaddr_min - Minimum starting virtual address
+ *   encrypt - Whether the region should be handled as encrypted
  *
  * Output Args: None
  *
@@ -1249,13 +1250,15 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
  * a unique set of pages, with the minimum real allocation being at least
  * a page.
  */
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+static vm_vaddr_t
+_vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, bool encrypt)
 {
 	uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
 
 	virt_pgd_alloc(vm);
-	vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages,
-					      KVM_UTIL_MIN_PFN * vm->page_size, 0);
+	vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages,
+					       KVM_UTIL_MIN_PFN * vm->page_size,
+					       0, encrypt);
 
 	/*
 	 * Find an unused range of virtual page addresses of at least
@@ -1276,6 +1279,16 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
 	return vaddr_start;
 }
 
+vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+{
+	return _vm_vaddr_alloc(vm, sz, vaddr_min, vm->protected);
+}
+
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+{
+	return _vm_vaddr_alloc(vm, sz, vaddr_min, false);
+}
+
 /*
  * VM Virtual Address Allocate Pages
  *
-- 
2.38.0.413.g74048e4d9e-goog



* [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-18 20:58 [PATCH V5 0/7] KVM: selftests: Add simple SEV test Peter Gonda
                   ` (3 preceding siblings ...)
  2022-10-18 20:58 ` [PATCH V5 4/7] KVM: selftests: add support for protected vm_vaddr_* allocations Peter Gonda
@ 2022-10-18 20:58 ` Peter Gonda
  2022-12-21 21:13   ` Ackerley Tng
  2022-12-22 22:19   ` Vishal Annapurve
  2022-10-18 20:58 ` [PATCH V5 6/7] KVM: selftests: Update ucall pool to allocate from shared memory Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 7/7] KVM: selftests: Add simple sev vm testing Peter Gonda
  6 siblings, 2 replies; 12+ messages in thread
From: Peter Gonda @ 2022-10-18 20:58 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, pgonda, vannapurve

Add interfaces to allow tests to create SEV guests. The additional
requirements for SEV guests' page tables and other state are
encapsulated by the new vm_sev_create_with_one_vcpu() function. This can
later be generalized for more vCPUs, but the first set of SEV selftests
in this series only uses a single vCPU.
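
For reference, the intended call pattern (a sketch mirroring the test
added at the end of this series):

  struct kvm_vcpu *vcpu;
  struct kvm_vm *vm;
  struct ucall uc;

  vm = vm_sev_create_with_one_vcpu(SEV_POLICY_NO_DBG, guest_code, &vcpu);

  /* The VM is already launched and measured; just run the vCPU. */
  vcpu_run(vcpu);
  TEST_ASSERT(get_ucall(vcpu, &uc) == UCALL_DONE, "guest did not finish");

  kvm_vm_free(vm);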

Originally-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 tools/arch/x86/include/asm/kvm_host.h         |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/kvm_util_base.h     |  15 +-
 .../selftests/kvm/include/x86_64/sev.h        |  22 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    |   4 +-
 .../selftests/kvm/lib/x86_64/processor.c      |   4 +
 tools/testing/selftests/kvm/lib/x86_64/sev.c  | 243 ++++++++++++++++++
 7 files changed, 286 insertions(+), 4 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/sev.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/sev.c

diff --git a/tools/arch/x86/include/asm/kvm_host.h b/tools/arch/x86/include/asm/kvm_host.h
index 03153c18c747..0357a7135835 100644
--- a/tools/arch/x86/include/asm/kvm_host.h
+++ b/tools/arch/x86/include/asm/kvm_host.h
@@ -9,6 +9,7 @@ struct kvm_vm_arch {
 	uint64_t pte_me_mask;
 	uint64_t c_bit;
 	uint64_t s_bit;
+	bool is_pt_protected;
 };
 
 #endif  // _TOOLS_LINUX_ASM_X86_KVM_HOST_H
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 65eb45ff1bff..4f27ef70cf2b 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -58,6 +58,7 @@ LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
 LIBKVM_x86_64 += lib/x86_64/vmx.c
+LIBKVM_x86_64 += lib/x86_64/sev.c
 
 LIBKVM_aarch64 += lib/aarch64/gic.c
 LIBKVM_aarch64 += lib/aarch64/gic_v3.c
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 4224026fbe25..8e4ded757a40 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -68,6 +68,13 @@ struct userspace_mem_regions {
 	DECLARE_HASHTABLE(slot_hash, 9);
 };
 
+/* VM protection policy/configuration. */
+struct protected_vm {
+	bool enabled;
+	bool has_protected_bit;
+	int8_t protected_bit;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -670,6 +677,10 @@ static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 	return _vm_phy_pages_alloc(vm, num, paddr_min, memslot, vm->protected);
 }
 
+uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
+			      uint32_t nr_runnable_vcpus,
+			      uint64_t extra_mem_pages);
+
 /*
  * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
  * loads the test binary into guest memory and creates an IRQ chip (x86 only).
@@ -722,8 +733,8 @@ unsigned long vm_compute_max_gfn(struct kvm_vm *vm);
 unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
 unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages);
 unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_pages);
-static inline unsigned int
-vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
+static inline unsigned int vm_adjust_num_guest_pages(enum vm_guest_mode mode,
+						     unsigned int num_guest_pages)
 {
 	unsigned int n;
 	n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
new file mode 100644
index 000000000000..1148db928d0b
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Helpers used for SEV guests
+ *
+ */
+#ifndef SELFTEST_KVM_SEV_H
+#define SELFTEST_KVM_SEV_H
+
+#include <stdint.h>
+#include <stdbool.h>
+
+#include "kvm_util.h"
+
+#define SEV_POLICY_NO_DBG	(1UL << 0)
+#define SEV_POLICY_ES		(1UL << 2)
+
+bool is_kvm_sev_supported(void);
+
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+					   struct kvm_vcpu **cpu);
+
+#endif /* SELFTEST_KVM_SEV_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index d753345993d6..753b8991eff3 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -254,7 +254,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 		vm->pgtable_levels = 4;
 		vm->va_bits = 48;
 #else
-		TEST_FAIL("VM_MODE_PXXV48_4K not supported on non-x86 platforms");
+		TEST_FAIL("VM_MODE_PXXV48_4K* not supported on non-x86 platforms");
 #endif
 		break;
 	case VM_MODE_P47V64_4K:
@@ -294,7 +294,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 	return vm;
 }
 
-static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
+uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
 				     uint32_t nr_runnable_vcpus,
 				     uint64_t extra_mem_pages)
 {
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 377e342ecff7..04a5434ba3dd 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -529,6 +529,10 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
 	uint64_t *pml4e, *pdpe, *pde;
 	uint64_t *pte;
 
+	TEST_ASSERT(
+		!vm->arch.is_pt_protected,
+		"Protected guests have their page tables protected so gva2gpa conversions are not possible.");
+
 	TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
 		"unknown or unsupported guest mode, mode: 0x%x", vm->mode);
 
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
new file mode 100644
index 000000000000..faed2ebe63ac
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -0,0 +1,243 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Helpers used for SEV guests
+ *
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <stdint.h>
+#include <stdbool.h>
+
+#include "kvm_util.h"
+#include "svm_util.h"
+#include "linux/psp-sev.h"
+#include "processor.h"
+#include "sev.h"
+
+#define CPUID_MEM_ENC_LEAF 0x8000001f
+#define CPUID_EBX_CBIT_MASK 0x3f
+
+#define SEV_FW_REQ_VER_MAJOR 0
+#define SEV_FW_REQ_VER_MINOR 17
+
+enum sev_guest_state {
+	SEV_GSTATE_UNINIT = 0,
+	SEV_GSTATE_LUPDATE,
+	SEV_GSTATE_LSECRET,
+	SEV_GSTATE_RUNNING,
+};
+
+static void sev_ioctl(int cmd, void *data)
+{
+	int ret;
+	struct sev_issue_cmd arg;
+
+	arg.cmd = cmd;
+	arg.data = (unsigned long)data;
+	ret = ioctl(open_sev_dev_path_or_exit(), SEV_ISSUE_CMD, &arg);
+	TEST_ASSERT(ret == 0, "SEV ioctl %d failed, error: %d, fw_error: %d",
+		    cmd, ret, arg.error);
+}
+
+static void kvm_sev_ioctl(struct kvm_vm *vm, int cmd, void *data)
+{
+	struct kvm_sev_cmd arg = {0};
+	int ret;
+
+	arg.id = cmd;
+	arg.sev_fd = open_sev_dev_path_or_exit();
+	arg.data = (__u64)data;
+
+	ret = ioctl(vm->fd, KVM_MEMORY_ENCRYPT_OP, &arg);
+	TEST_ASSERT(
+		ret == 0,
+		"SEV KVM ioctl %d failed, rc: %i errno: %i (%s), fw_error: %d",
+		cmd, ret, errno, strerror(errno), arg.error);
+}
+
+static void sev_register_user_region(struct kvm_vm *vm, struct userspace_mem_region *region)
+{
+	struct kvm_enc_region range = {0};
+	int ret;
+
+	range.addr = (__u64)region->region.userspace_addr;
+	range.size = region->region.memory_size;
+
+	ret = ioctl(vm->fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
+	TEST_ASSERT(ret == 0, "failed to register user range, errno: %i\n",
+		    errno);
+}
+
+static void sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa, uint64_t size)
+{
+	struct kvm_sev_launch_update_data ksev_update_data = {0};
+
+	pr_debug("%s: addr: 0x%lx, size: %lu\n", __func__, gpa, size);
+
+	ksev_update_data.uaddr = (__u64)addr_gpa2hva(vm, gpa);
+	ksev_update_data.len = size;
+
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_DATA, &ksev_update_data);
+}
+
+static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
+{
+	const struct sparsebit *protected_phy_pages =
+		region->protected_phy_pages;
+	const uint64_t memory_size = region->region.memory_size;
+	const vm_paddr_t gpa_start = region->region.guest_phys_addr;
+	sparsebit_idx_t pg = 0;
+
+	sev_register_user_region(vm, region);
+
+	while (pg < (memory_size / vm->page_size)) {
+		sparsebit_idx_t nr_pages;
+
+		if (sparsebit_is_clear(protected_phy_pages, pg)) {
+			pg = sparsebit_next_set(protected_phy_pages, pg);
+			if (!pg)
+				break;
+		}
+
+		nr_pages = sparsebit_next_clear(protected_phy_pages, pg) - pg;
+		if (nr_pages <= 0)
+			nr_pages = 1;
+
+		sev_launch_update_data(vm, gpa_start + pg * vm->page_size,
+				       nr_pages * vm->page_size);
+		pg += nr_pages;
+	}
+}
+
+static void sev_encrypt(struct kvm_vm *vm)
+{
+	int ctr;
+	struct userspace_mem_region *region;
+
+	hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
+		encrypt_region(vm, region);
+	}
+
+	vm->arch.is_pt_protected = true;
+}
+
+bool is_kvm_sev_supported(void)
+{
+	struct sev_user_data_status sev_status;
+
+	sev_ioctl(SEV_PLATFORM_STATUS, &sev_status);
+
+	if (!(sev_status.api_major > SEV_FW_REQ_VER_MAJOR ||
+	      (sev_status.api_major == SEV_FW_REQ_VER_MAJOR &&
+	       sev_status.api_minor >= SEV_FW_REQ_VER_MINOR))) {
+		pr_info("SEV FW version too old. Have API %d.%d (build: %d), need %d.%d, skipping test.\n",
+			sev_status.api_major, sev_status.api_minor,
+			sev_status.build, SEV_FW_REQ_VER_MAJOR,
+			SEV_FW_REQ_VER_MINOR);
+		return false;
+	}
+
+	return true;
+}
+
+static void sev_vm_launch(struct kvm_vm *vm, uint32_t policy)
+{
+	struct kvm_sev_launch_start ksev_launch_start = {0};
+	struct kvm_sev_guest_status ksev_status;
+
+	ksev_launch_start.policy = policy;
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_START, &ksev_launch_start);
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.policy == policy, "Incorrect guest policy.");
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE,
+		    "Unexpected guest state: %d", ksev_status.state);
+
+	ucall_init(vm, 0);
+
+	sev_encrypt(vm);
+}
+
+static void sev_vm_launch_measure(struct kvm_vm *vm, uint8_t *measurement)
+{
+	struct kvm_sev_launch_measure ksev_launch_measure;
+	struct kvm_sev_guest_status ksev_guest_status;
+
+	ksev_launch_measure.len = 256;
+	ksev_launch_measure.uaddr = (__u64)measurement;
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_MEASURE, &ksev_launch_measure);
+
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &ksev_guest_status);
+	TEST_ASSERT(ksev_guest_status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", ksev_guest_status.state);
+}
+
+static void sev_vm_launch_finish(struct kvm_vm *vm)
+{
+	struct kvm_sev_guest_status ksev_status;
+
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_LUPDATE ||
+			    ksev_status.state == SEV_GSTATE_LSECRET,
+		    "Unexpected guest state: %d", ksev_status.state);
+
+	kvm_sev_ioctl(vm, KVM_SEV_LAUNCH_FINISH, NULL);
+
+	kvm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &ksev_status);
+	TEST_ASSERT(ksev_status.state == SEV_GSTATE_RUNNING,
+		    "Unexpected guest state: %d", ksev_status.state);
+}
+
+static void configure_sev_pte_masks(struct kvm_vm *vm)
+{
+	uint32_t eax, ebx, ecx, edx, enc_bit;
+
+	cpuid(CPUID_MEM_ENC_LEAF, &eax, &ebx, &ecx, &edx);
+	enc_bit = ebx & CPUID_EBX_CBIT_MASK;
+
+	vm->arch.c_bit = 1 << enc_bit;
+	vm->arch.pte_me_mask = vm->arch.c_bit | vm->arch.s_bit;
+	vm->protected = true;
+}
+
+static void sev_vm_measure(struct kvm_vm *vm)
+{
+	uint8_t measurement[512];
+	int i;
+
+	sev_vm_launch_measure(vm, measurement);
+
+	/* TODO: Validate the measurement is as expected. */
+	pr_debug("guest measurement: ");
+	for (i = 0; i < 32; ++i)
+		pr_debug("%02x", measurement[i]);
+	pr_debug("\n");
+}
+
+struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code,
+					   struct kvm_vcpu **cpu)
+{
+	enum vm_guest_mode mode = VM_MODE_PXXV48_4K;
+	uint64_t nr_pages = vm_nr_pages_required(mode, 1, 0);
+	struct kvm_vm *vm;
+
+	vm = ____vm_create(mode, nr_pages);
+
+	kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
+
+	configure_sev_pte_masks(vm);
+
+	*cpu = vm_vcpu_add(vm, 0, guest_code);
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	sev_vm_launch(vm, policy);
+
+	sev_vm_measure(vm);
+
+	sev_vm_launch_finish(vm);
+
+	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n", policy,
+		nr_pages * vm->page_size / 1024);
+
+	return vm;
+}
-- 
2.38.0.413.g74048e4d9e-goog



* [PATCH V5 6/7] KVM: selftests: Update ucall pool to allocate from shared memory
  2022-10-18 20:58 [PATCH V5 0/7] KVM: selftests: Add simple SEV test Peter Gonda
                   ` (4 preceding siblings ...)
  2022-10-18 20:58 ` [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
@ 2022-10-18 20:58 ` Peter Gonda
  2022-10-18 20:58 ` [PATCH V5 7/7] KVM: selftests: Add simple sev vm testing Peter Gonda
  6 siblings, 0 replies; 12+ messages in thread
From: Peter Gonda @ 2022-10-18 20:58 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, pgonda, vannapurve

Update the per VM ucall_header allocation from vm_vaddr_alloc() to
vm_vaddr_alloc_shared(). This allows encrypted guests to use ucall pools
by placing their shared ucall structures in unencrypted (shared) memory.
No behavior change for non-encrypted guests.
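
In brief, why the pool must be shared (a sketch of the mechanism, using
the lines from the diff below): the host dereferences its own mapping
of the guest-written ucall data, which only yields plaintext if the
backing page is shared:

  vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR);
  /* The host reads/writes this mapping directly; if the page were
   * encrypted, it would see ciphertext and every ucall would break. */
  hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);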

Signed-off-by: Peter Gonda <pgonda@google.com>
---
 tools/testing/selftests/kvm/lib/ucall_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index fcae96461e46..b4168e562255 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -22,7 +22,7 @@ void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
 	vm_vaddr_t vaddr;
 	int i;
 
-	vaddr = vm_vaddr_alloc(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR);
+	vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR);
 	hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
 	memset(hdr, 0, sizeof(*hdr));
 
-- 
2.38.0.413.g74048e4d9e-goog



* [PATCH V5 7/7] KVM: selftests: Add simple sev vm testing
  2022-10-18 20:58 [PATCH V5 0/7] KVM: selftests: Add simple SEV test Peter Gonda
                   ` (5 preceding siblings ...)
  2022-10-18 20:58 ` [PATCH V5 6/7] KVM: selftests: Update ucall pool to allocate from shared memory Peter Gonda
@ 2022-10-18 20:58 ` Peter Gonda
  6 siblings, 0 replies; 12+ messages in thread
From: Peter Gonda @ 2022-10-18 20:58 UTC (permalink / raw)
  To: kvm
  Cc: linux-kernel, marcorr, seanjc, michael.roth, thomas.lendacky,
	joro, mizhang, pbonzini, andrew.jones, pgonda, vannapurve

A very simple test of booting SEV guests that checks related CPUID
bits. This is a stripped-down, much simpler version of "[PATCH v2
08/13] KVM: selftests: add SEV boot tests" from Michael.

Suggested-by: Michael Roth <michael.roth@amd.com>
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../selftests/kvm/x86_64/sev_all_boot_test.c  | 84 +++++++++++++++++++
 3 files changed, 86 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 2f0d705db9db..813e7610619d 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -35,6 +35,7 @@
 /x86_64/pmu_event_filter_test
 /x86_64/set_boot_cpu_id
 /x86_64/set_sregs_test
+/x86_64/sev_all_boot_test
 /x86_64/sev_migrate_tests
 /x86_64/smm_test
 /x86_64/state_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 4f27ef70cf2b..1eb9b2aa7c22 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -126,6 +126,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_caps_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
 TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_all_boot_test
 TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
 TEST_GEN_PROGS_x86_64 += x86_64/amx_test
 TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
new file mode 100644
index 000000000000..e9e4d7305bc1
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_all_boot_test.c
@@ -0,0 +1,84 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Basic SEV boot tests.
+ *
+ */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "linux/psp-sev.h"
+#include "sev.h"
+
+#define NR_SYNCS 1
+
+#define MSR_AMD64_SEV_BIT  1
+
+static void guest_run_loop(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+	int i;
+
+	for (i = 0; i <= NR_SYNCS; ++i) {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_SYNC:
+			continue;
+		case UCALL_DONE:
+			return;
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+		default:
+			TEST_FAIL("Unexpected exit: %s",
+				  exit_reason_str(vcpu->run->exit_reason));
+		}
+	}
+}
+
+static void is_sev_enabled(void)
+{
+	uint64_t sev_status;
+
+	GUEST_ASSERT(this_cpu_has(X86_FEATURE_SEV));
+
+	sev_status = rdmsr(MSR_AMD64_SEV);
+	GUEST_ASSERT(sev_status & 0x1);
+}
+
+static void guest_sev_code(void)
+{
+	GUEST_SYNC(1);
+
+	is_sev_enabled();
+
+	GUEST_DONE();
+}
+
+static void test_sev(void *guest_code, uint64_t policy)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+
+	vm = vm_sev_create_with_one_vcpu(policy, guest_code, &vcpu);
+	TEST_ASSERT(vm, "vm_sev_create_with_one_vcpu() failed to create VM\n");
+
+	guest_run_loop(vcpu);
+
+	kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(is_kvm_sev_supported());
+
+	test_sev(guest_sev_code, SEV_POLICY_NO_DBG);
+	test_sev(guest_sev_code, 0);
+
+	return 0;
+}
-- 
2.38.0.413.g74048e4d9e-goog



* Re: [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-18 20:58 ` [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
@ 2022-12-21 21:13   ` Ackerley Tng
  2023-01-09 21:19     ` Peter Gonda
  2022-12-22 22:19   ` Vishal Annapurve
  1 sibling, 1 reply; 12+ messages in thread
From: Ackerley Tng @ 2022-12-21 21:13 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, seanjc, michael.roth,
	thomas.lendacky, joro, mizhang, pbonzini, andrew.jones, pgonda,
	vannapurve


> +static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
> +{
> +	const struct sparsebit *protected_phy_pages =
> +		region->protected_phy_pages;
> +	const uint64_t memory_size = region->region.memory_size;
> +	const vm_paddr_t gpa_start = region->region.guest_phys_addr;
> +	sparsebit_idx_t pg = 0;
> +
> +	sev_register_user_region(vm, region);
> +
> +	while (pg < (memory_size / vm->page_size)) {
> +		sparsebit_idx_t nr_pages;
> +
> +		if (sparsebit_is_clear(protected_phy_pages, pg)) {
> +			pg = sparsebit_next_set(protected_phy_pages, pg);
> +			if (!pg)
> +				break;
> +		}
> +
> +		nr_pages = sparsebit_next_clear(protected_phy_pages, pg) - pg;
> +		if (nr_pages <= 0)
> +			nr_pages = 1;

I think this may not be correct in the case where the sparsebit has the
range [x, 2**64-1] (inclusive) set. In that case, sparsebit_next_clear()
will return 0, but the number of pages could be more than 1.
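
Concretely, a hedged walk-through with assumed values (sparsebit_idx_t is
unsigned):

	/* [0, 2**64-1] fully set, pg == 0: */
	nr_pages = sparsebit_next_clear(protected_phy_pages, 0) - 0; /* 0 - 0 == 0 */
	if (nr_pages <= 0)	/* only ever true for nr_pages == 0 */
		nr_pages = 1;	/* collapses 2**64 pages into one   */

	/* [5, 2**64-1] fully set, pg == 5: */
	nr_pages = sparsebit_next_clear(protected_phy_pages, 5) - 5;
				/* 0 - 5 wraps to 2**64 - 5, and the
				 * "<= 0" clamp never fires */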

> +
> +		sev_launch_update_data(vm, gpa_start + pg * vm->page_size,

Computing the beginning of the gpa range with

gpa_start + pg * vm->page_size

only works if this memory region's gpa_start is 0.
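
As a hedged numeric example (assuming 4 KiB pages, a region based at gpa
0x100000, and sparsebit indices that are absolute pfns, as in the fixed
version below):

	pg  = 0x100000 >> 12;			/* first pfn of the region: 0x100  */
	gpa = gpa_start + pg * vm->page_size;	/* 0x100000 + 0x100000 = 0x200000, */
						/* but the page lives at 0x100000  */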

> +				       nr_pages * vm->page_size);
> +		pg += nr_pages;
> +	}
> +}

Here's a suggestion (I'm using this on a TDX version of this patch)


/**
 * Iterate over set ranges within sparsebit @s. In each iteration,
 * @range_begin and @range_end will take the beginning and end of the set
 * range, which are of type sparsebit_idx_t.
 *
 * For example, if the range [3, 7] (inclusive) is set, within the iteration,
 * @range_begin will take the value 3 and @range_end will take the value 7.
 *
 * Ensure, using sparsebit_any_set(), that there is at least one bit set
 * before using this macro, because sparsebit_first_set() will abort if none
 * are set.
 */
#define sparsebit_for_each_set_range(s, range_begin, range_end)		\
	for (range_begin = sparsebit_first_set(s),			\
		     range_end =					\
		     sparsebit_next_clear(s, range_begin) - 1;		\
	     range_begin && range_end;					\
	     range_begin = sparsebit_next_set(s, range_end),		\
		     range_end =					\
		     sparsebit_next_clear(s, range_begin) - 1)
/*
 * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and the
 * -1 would then cause an underflow back to 2**64 - 1. This is expected and
 * correct.
 *
 * If the last range in the sparsebit is [x, y] and we try to iterate,
 * sparsebit_next_set() will return 0, and sparsebit_next_clear() will try
 * to find the first range, but that's correct because the condition
 * expression would cause us to quit the loop.
 */


static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
{
	const struct sparsebit *protected_phy_pages =
		region->protected_phy_pages;
	const vm_paddr_t gpa_base = region->region.guest_phys_addr;
	const sparsebit_idx_t lowest_page_in_region = gpa_base >> vm->page_shift;

	sparsebit_idx_t i;
	sparsebit_idx_t j;

	if (!sparsebit_any_set(protected_phy_pages))
		return;

	sev_register_user_region(vm, region);

	sparsebit_for_each_set_range(protected_phy_pages, i, j) {
		const uint64_t size_to_load = (j - i + 1) * vm->page_size;
		const uint64_t offset = (i - lowest_page_in_region) * vm->page_size;
		const uint64_t gpa = gpa_base + offset;

		sev_launch_update_data(vm, gpa, size_to_load);
	}
}
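
For what it's worth, a self-contained (hypothetical) usage of the macro:

	struct sparsebit *s = sparsebit_alloc();
	sparsebit_idx_t begin, end;

	sparsebit_set_num(s, 3, 5);	/* sets [3, 7]   */
	sparsebit_set(s, 10);		/* sets [10, 10] */

	if (sparsebit_any_set(s))
		sparsebit_for_each_set_range(s, begin, end)
			printf("[%llu, %llu]\n",
			       (unsigned long long)begin,
			       (unsigned long long)end);
	/* prints "[3, 7]" then "[10, 10]" */

	sparsebit_free(&s);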

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests
  2022-10-18 20:58 ` [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
  2022-12-21 21:13   ` Ackerley Tng
@ 2022-12-22 22:19   ` Vishal Annapurve
  2023-01-09 21:20     ` Peter Gonda
  1 sibling, 1 reply; 12+ messages in thread
From: Vishal Annapurve @ 2022-12-22 22:19 UTC (permalink / raw)
  To: Peter Gonda
  Cc: kvm, linux-kernel, marcorr, seanjc, michael.roth,
	thomas.lendacky, joro, mizhang, pbonzini, andrew.jones

On Tue, Oct 18, 2022 at 1:59 PM Peter Gonda <pgonda@google.com> wrote:
>
> ...
> +
> +static void configure_sev_pte_masks(struct kvm_vm *vm)
> +{
> +       uint32_t eax, ebx, ecx, edx, enc_bit;
> +
> +       cpuid(CPUID_MEM_ENC_LEAF, &eax, &ebx, &ecx, &edx);
> +       enc_bit = ebx & CPUID_EBX_CBIT_MASK;
> +
> +       vm->arch.c_bit = 1 << enc_bit;

This should be 1ULL << enc_bit as the overall result overflows 32 bits.
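
A minimal illustration (a C-bit position of 51 is assumed here for the
example; the real position comes from the CPUID leaf above):

	uint32_t enc_bit = 51;
	uint64_t bad  = 1 << enc_bit;    /* shifts a 32-bit int past its width: UB */
	uint64_t good = 1ULL << enc_bit; /* 0x0008000000000000, as intended        */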

> +       vm->arch.pte_me_mask = vm->arch.c_bit | vm->arch.s_bit;

Maybe the role of pte_me_mask needs to be discussed in more detail. If
pte_me_mask is to be used only for maintaining/manipulating encryption
of page table memory then maybe it should be just set as
vm->arch.c_bit or better yet replaced with vm->arch.c_bit.

gpa_protected_mask also needs to be set here so that vm_untag_gpa
works as expected.
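
That is, something along these lines (a sketch only; the field names are
assumed from this series and the vm_untag_gpa() discussion, not final code):

	vm->arch.c_bit = 1ULL << enc_bit;
	vm->gpa_protected_mask = vm->arch.c_bit;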

> +       vm->protected = true;
> +}
> +
> ...
> +}


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests
  2022-12-21 21:13   ` Ackerley Tng
@ 2023-01-09 21:19     ` Peter Gonda
  0 siblings, 0 replies; 12+ messages in thread
From: Peter Gonda @ 2023-01-09 21:19 UTC (permalink / raw)
  To: Ackerley Tng
  Cc: kvm, linux-kernel, marcorr, seanjc, michael.roth,
	thomas.lendacky, joro, mizhang, pbonzini, andrew.jones,
	vannapurve

On Wed, Dec 21, 2022 at 2:13 PM Ackerley Tng <ackerleytng@google.com> wrote:
>
>
> > +static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
> > +{
> > +     const struct sparsebit *protected_phy_pages =
> > +             region->protected_phy_pages;
> > +     const uint64_t memory_size = region->region.memory_size;
> > +     const vm_paddr_t gpa_start = region->region.guest_phys_addr;
> > +     sparsebit_idx_t pg = 0;
> > +
> > +     sev_register_user_region(vm, region);
> > +
> > +     while (pg < (memory_size / vm->page_size)) {
> > +             sparsebit_idx_t nr_pages;
> > +
> > +             if (sparsebit_is_clear(protected_phy_pages, pg)) {
> > +                     pg = sparsebit_next_set(protected_phy_pages, pg);
> > +                     if (!pg)
> > +                             break;
> > +             }
> > +
> > +             nr_pages = sparsebit_next_clear(protected_phy_pages, pg) - pg;
> > +             if (nr_pages <= 0)
> > +                     nr_pages = 1;
>
> I think this may not be correct in the case where the sparsebit has the
> range [x, 2**64-1] (inclusive) set. In that case, sparsebit_next_clear()
> will return 0, but the number of pages could be more than 1.
>
> > +
> > +             sev_launch_update_data(vm, gpa_start + pg * vm->page_size,
>
> Computing the beginning of the gpa range with
>
> gpa_start + pg * vm->page_size
>
> only works if this memory region's gpa_start is 0.
>
> > +                                    nr_pages * vm->page_size);
> > +             pg += nr_pages;
> > +     }
> > +}
>
> Here's a suggestion (I'm using this on a TDX version of this patch)

Thanks for this catch and the code. I've pulled this into the V6 I am preparing.

> [snip: suggested sparsebit_for_each_set_range() macro and encrypt_region(), quoted in full above]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests
  2022-12-22 22:19   ` Vishal Annapurve
@ 2023-01-09 21:20     ` Peter Gonda
  0 siblings, 0 replies; 12+ messages in thread
From: Peter Gonda @ 2023-01-09 21:20 UTC (permalink / raw)
  To: Vishal Annapurve
  Cc: kvm, linux-kernel, marcorr, seanjc, michael.roth,
	thomas.lendacky, joro, mizhang, pbonzini, andrew.jones

On Thu, Dec 22, 2022 at 3:19 PM Vishal Annapurve <vannapurve@google.com> wrote:
>
> On Tue, Oct 18, 2022 at 1:59 PM Peter Gonda <pgonda@google.com> wrote:
> >
> > ...
> > +
> > +static void configure_sev_pte_masks(struct kvm_vm *vm)
> > +{
> > +       uint32_t eax, ebx, ecx, edx, enc_bit;
> > +
> > +       cpuid(CPUID_MEM_ENC_LEAF, &eax, &ebx, &ecx, &edx);
> > +       enc_bit = ebx & CPUID_EBX_CBIT_MASK;
> > +
> > +       vm->arch.c_bit = 1 << enc_bit;
>
> This should be 1ULL << enc_bit as the overall result overflows 32 bits.
>
> > +       vm->arch.pte_me_mask = vm->arch.c_bit | vm->arch.s_bit;
>
> Maybe the role of pte_me_mask needs to be discussed in more detail. If
> pte_me_mask is to be used only for maintaining/manipulating encryption
> of page table memory then maybe it should be just set as
> vm->arch.c_bit or better yet replaced with vm->arch.c_bit.
>
> gpa_protected_mask also needs to be set here so that vm_untag_gpa
> works as expected.

Thanks for speaking with me offline about TDX. I have removed
pte_me_mask entirely and set gpa_protected_mask here in my V6.


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2023-01-09 21:22 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-18 20:58 [PATCH V5 0/7] KVM: selftests: Add simple SEV test Peter Gonda
2022-10-18 20:58 ` [PATCH V5 1/7] KVM: selftests: sparsebit: add const where appropriate Peter Gonda
2022-10-18 20:58 ` [PATCH V5 2/7] KVM: selftests: add hooks for managing protected guest memory Peter Gonda
2022-10-18 20:58 ` [PATCH V5 3/7] KVM: selftests: handle protected bits in page tables Peter Gonda
2022-10-18 20:58 ` [PATCH V5 4/7] KVM: selftests: add support for protected vm_vaddr_* allocations Peter Gonda
2022-10-18 20:58 ` [PATCH V5 5/7] KVM: selftests: add library for creating/interacting with SEV guests Peter Gonda
2022-12-21 21:13   ` Ackerley Tng
2023-01-09 21:19     ` Peter Gonda
2022-12-22 22:19   ` Vishal Annapurve
2023-01-09 21:20     ` Peter Gonda
2022-10-18 20:58 ` [PATCH V5 6/7] KVM: selftests: Update ucall pool to allocate from shared memory Peter Gonda
2022-10-18 20:58 ` [PATCH V5 7/7] KVM: selftests: Add simple sev vm testing Peter Gonda
