* [RFC V2 PATCH 0/8] selftests: KVM: SEV: selftests for fd-based private memory
@ 2022-08-30 22:42 Vishal Annapurve
From: Vishal Annapurve @ 2022-08-30 22:42 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shuah, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, maz, dmatlack, axelrasmussen, maciej.szmigiero,
	mizhang, bgardon, Vishal Annapurve

This series adds selftests that execute SEV VMs to exercise the fd-based
private memory feature proposed by Chao via:
https://lore.kernel.org/linux-mm/20220706082016.2603916-12-chao.p.peng@linux.intel.com/T/

The changes below test the fd-based approach for guest private memory in
the context of SEV VMs executing on AMD SEV-compatible platforms.

sev_private_mem_test.c adds a selftest that accesses private memory from
the guest via private/shared accesses and checks whether the contents can
be leaked to, or accessed by, the VMM via the shared memory view
before/after conversions.
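
For reference, the new test file is expected to be a thin wrapper around the
shared helpers added later in the series; a minimal sketch (assuming the
execute_sev_memory_conversion_tests() helper declared in patch 7, not the
verbatim contents of patch 8):

#include <private_mem_test_helper.h>

int main(int argc, char *argv[])
{
	/* Run explicit and implicit conversion tests inside an SEV VM. */
	execute_sev_memory_conversion_tests();
	return 0;
}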

To allow SEV/SEV-ES VMs to toggle the encryption bit during memory
conversion, support is added for mapping guest pagetable pages into guest
virtual address ranges and for passing the mapping information to the
guest via shared pages.
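
For illustration, once these mappings are shared, guest code can flip the
encryption bit of a page along the lines of the following sketch (modelled
on the private mem library changes later in this series; gpgt_info and
enc_bit stand for values the host shares with the guest):

#include <kvm_util.h>

static void guest_set_clr_enc_bit(struct guest_pgt_info *gpgt_info,
				  uint64_t vaddr, uint8_t enc_bit, bool set)
{
	uint64_t *pte = guest_code_get_pte(gpgt_info, vaddr);

	GUEST_ASSERT(pte);
	if (set)
		*pte |= (1ULL << enc_bit);
	else
		*pte &= ~(1ULL << enc_bit);
	/* Flush the stale translation for this virtual address. */
	asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
}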

This series depends on the following patch series:
1) The V7 series from Chao mentioned above.
2) https://lore.kernel.org/lkml/20220810152033.946942-1-pgonda@google.com/T/#u
  - Series posted by Peter containing patches from Michael and Sean
3) https://lore.kernel.org/lkml/Ywa9T+jKUpaHLu%2Fl@google.com/T/
  - Series posted for similar selftests executing non-confidential VMs.

Github link for the patches posted as part of this series:
https://github.com/vishals4gh/linux/commits/sev_upm_selftests_rfcv2

Vishal Annapurve (8):
  selftests: kvm: x86_64: Add support for pagetable tracking
  kvm: Add HVA range operator
  arch: x86: sev: Populate private memory fd during LAUNCH_UPDATE_DATA
  selftests: kvm: sev: Support memslots with private memory
  selftests: kvm: Update usage of private mem lib for SEV VMs
  selftests: kvm: Support executing SEV VMs with private memory
  selftests: kvm: Refactor testing logic for private memory
  selftests: kvm: Add private memory test for SEV VMs

 arch/x86/kvm/svm/sev.c                        |  99 ++++++-
 include/linux/kvm_host.h                      |   8 +
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   2 +
 .../selftests/kvm/include/kvm_util_base.h     | 105 +++++++
 .../kvm/include/x86_64/private_mem.h          |  10 +-
 .../include/x86_64/private_mem_test_helper.h  |  13 +
 .../selftests/kvm/include/x86_64/sev.h        |   2 +
 tools/testing/selftests/kvm/lib/kvm_util.c    |  78 ++++-
 .../selftests/kvm/lib/x86_64/private_mem.c    | 189 ++++++++++--
 .../kvm/lib/x86_64/private_mem_test_helper.c  | 273 ++++++++++++++++++
 .../selftests/kvm/lib/x86_64/processor.c      |  32 ++
 tools/testing/selftests/kvm/lib/x86_64/sev.c  |  15 +-
 .../selftests/kvm/x86_64/private_mem_test.c   | 246 +---------------
 .../kvm/x86_64/sev_private_mem_test.c         |  21 ++
 virt/kvm/kvm_main.c                           |  87 +++++-
 16 files changed, 880 insertions(+), 301 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c

-- 
2.37.2.672.g94769d06f0-goog



* [RFC V2 PATCH 1/8] selftests: kvm: x86_64: Add support for pagetable tracking
@ 2022-08-30 22:42 ` Vishal Annapurve
From: Vishal Annapurve @ 2022-08-30 22:42 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shuah, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, maz, dmatlack, axelrasmussen, maciej.szmigiero,
	mizhang, bgardon, Vishal Annapurve

Add support for mapping guest pagetable pages to a contiguous guest virtual
address range and for sharing the physical-to-virtual mappings with the
guest in a pre-defined format.

This functionality allows guests to modify their own page table entries.
One such use case for confidential computing (CC) VMs is toggling the
encryption bit in their PTEs to switch between encrypted and shared memory.
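
For illustration, the intended host-side flow for these APIs (a sketch based
on the function comments below; GUEST_PGT_MIN_VADDR is an arbitrary example
value) is:

#include <kvm_util.h>

#define GUEST_PGT_MIN_VADDR	0x10000

static vm_vaddr_t setup_guest_pgt_info(struct kvm_vm *vm)
{
	/*
	 * Enable tracking right after VM creation, before guest memory is
	 * mapped and pagetable pages get allocated.
	 */
	vm_set_pgt_alloc_tracking(vm);

	/* ... map guest memory here so pagetable pages are allocated ... */

	/* Map the tracked pagetable pages into the guest address space. */
	vm_map_page_table(vm, GUEST_PGT_MIN_VADDR);

	/*
	 * Share the layout with the guest; returns the guest virtual address
	 * of the struct guest_pgt_info buffer.
	 */
	return vm_setup_pgt_info_buf(vm, GUEST_PGT_MIN_VADDR);
}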

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 .../selftests/kvm/include/kvm_util_base.h     | 105 ++++++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    |  78 ++++++++++++-
 .../selftests/kvm/lib/x86_64/processor.c      |  32 ++++++
 3 files changed, 214 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index dfe454f228e7..f57ced56da1b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -74,6 +74,11 @@ struct vm_memcrypt {
 	int8_t enc_bit;
 };
 
+struct pgt_page {
+	vm_paddr_t paddr;
+	struct list_head list;
+};
+
 struct kvm_vm {
 	int mode;
 	unsigned long type;
@@ -98,6 +103,10 @@ struct kvm_vm {
 	vm_vaddr_t handlers;
 	uint32_t dirty_ring_size;
 	struct vm_memcrypt memcrypt;
+	struct list_head pgt_pages;
+	bool track_pgt_pages;
+	uint32_t num_pgt_pages;
+	vm_vaddr_t pgt_vaddr_start;
 
 	/* Cache of information for binary stats interface */
 	int stats_fd;
@@ -184,6 +193,23 @@ struct vm_guest_mode_params {
 	unsigned int page_size;
 	unsigned int page_shift;
 };
+
+/*
+ * Structure shared with the guest containing information about:
+ * - Starting virtual address for num_pgt_pages physical pagetable
+ *   page addresses tracked via paddrs array
+ * - page size of the guest
+ *
+ * Guest can walk through its pagetables using this information to
+ *   read/modify pagetable attributes.
+ */
+struct guest_pgt_info {
+	uint64_t num_pgt_pages;
+	uint64_t pgt_vaddr_start;
+	uint64_t page_size;
+	uint64_t paddrs[];
+};
+
 extern const struct vm_guest_mode_params vm_guest_mode_params[];
 
 int open_path_or_exit(const char *path, int flags);
@@ -394,6 +420,49 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
 
 struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
 vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min);
+
+/*
+ * function called by guest code to translate physical address of a pagetable
+ * page to guest virtual address.
+ *
+ * input args:
+ *	gpgt_info - pointer to the guest_pgt_info structure containing info
+ *		about guest virtual address mappings for guest physical
+ *		addresses of page table pages.
+ *	pgt_pa - physical address of guest page table page to be translated
+ *		to a virtual address.
+ *
+ * output args: none
+ *
+ * return:
+ *	pointer to the pagetable page, null in case physical address is not
+ *	tracked via given guest_pgt_info structure.
+ */
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info, uint64_t pgt_pa);
+
+/*
+ * Allocate and setup a page to be shared with guest containing guest_pgt_info
+ * structure.
+ *
+ * Note:
+ *	1) vm_set_pgt_alloc_tracking function should be used to start tracking
+ *		of physical page table page allocation.
+ *	2) This function should be invoked after needed pagetable pages are
+ *		mapped to the VM using virt_pg_map.
+ *
+ * input args:
+ *	vm - virtual machine
+ *	vaddr_min - Minimum guest virtual address to start mapping the
+ *		guest_pgt_info structure page(s).
+ *
+ * output args: none
+ *
+ * return:
+ *	virtual address mapping guest_pgt_info structure.
+ */
+vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min);
+
 vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
 vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
 vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
@@ -647,10 +716,46 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
 
 const char *exit_reason_str(unsigned int exit_reason);
 
+#ifdef __x86_64__
+/*
+ * Guest called function to get a pointer to pte corresponding to a given
+ * guest virtual address and pointer to the guest_pgt_info structure.
+ *
+ * input args:
+ *	gpgt_info - pointer to guest_pgt_info structure containing information
+ *		about guest virtual addresses mapped to pagetable physical
+ *		addresses.
+ *	vaddr - guest virtual address
+ *
+ * output args: none
+ *
+ * return:
+ *	pointer to the pte corresponding to guest virtual address,
+ *	Null if pte is not found
+ */
+uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info, uint64_t vaddr);
+#endif
+
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot);
 vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 			      vm_paddr_t paddr_min, uint32_t memslot);
+
+/*
+ * Enable tracking of physical guest pagetable pages for the given vm.
+ * This function should be called right after vm creation before any pages are
+ * mapped into the VM using vm_alloc_* / vm_vaddr_alloc* functions.
+ *
+ * input args:
+ *	vm - virtual machine
+ *
+ * output args: none
+ *
+ * return:
+ *	None
+ */
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm);
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
 /*
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f153c71d6988..243d04a3d4b6 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -155,6 +155,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
 	TEST_ASSERT(vm != NULL, "Insufficient Memory");
 
 	INIT_LIST_HEAD(&vm->vcpus);
+	INIT_LIST_HEAD(&vm->pgt_pages);
 	vm->regions.gpa_tree = RB_ROOT;
 	vm->regions.hva_tree = RB_ROOT;
 	hash_init(vm->regions.slot_hash);
@@ -573,6 +574,7 @@ void kvm_vm_free(struct kvm_vm *vmp)
 {
 	int ctr;
 	struct hlist_node *node;
+	struct pgt_page *entry, *nentry;
 	struct userspace_mem_region *region;
 
 	if (vmp == NULL)
@@ -588,6 +590,9 @@ void kvm_vm_free(struct kvm_vm *vmp)
 	hash_for_each_safe(vmp->regions.slot_hash, ctr, node, region, slot_node)
 		__vm_mem_region_delete(vmp, region, false);
 
+	list_for_each_entry_safe(entry, nentry, &vmp->pgt_pages, list)
+		free(entry);
+
 	/* Free sparsebit arrays. */
 	sparsebit_free(&vmp->vpages_valid);
 	sparsebit_free(&vmp->vpages_mapped);
@@ -1195,9 +1200,24 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 /* Arbitrary minimum physical address used for virtual translation tables. */
 #define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
 
+void vm_set_pgt_alloc_tracking(struct kvm_vm *vm)
+{
+	vm->track_pgt_pages = true;
+}
+
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
 {
-	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+	struct pgt_page *pgt;
+	vm_paddr_t paddr = vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+
+	if (vm->track_pgt_pages) {
+		pgt = calloc(1, sizeof(*pgt));
+		TEST_ASSERT(pgt != NULL, "Insufficient memory");
+		pgt->paddr = addr_gpa2raw(vm, paddr);
+		list_add(&pgt->list, &vm->pgt_pages);
+		vm->num_pgt_pages++;
+	}
+	return paddr;
 }
 
 /*
@@ -1286,6 +1306,27 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
 	return pgidx_start * vm->page_size;
 }
 
+void vm_map_page_table(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	vm_vaddr_t vaddr;
+
+	/* Stop tracking further pgt pages, mapping pagetable may itself need
+	 * new pages.
+	 */
+	vm->track_pgt_pages = false;
+	vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm,
+		vm->num_pgt_pages * vm->page_size, vaddr_min);
+	vaddr = vaddr_start;
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		/* Map the virtual page. */
+		virt_pg_map(vm, vaddr, addr_raw2gpa(vm, pgt_page_entry->paddr));
+		sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
+		vaddr += vm->page_size;
+	}
+	vm->pgt_vaddr_start = vaddr_start;
+}
+
 /*
  * VM Virtual Address Allocate Shared/Encrypted
  *
@@ -1345,6 +1386,41 @@ vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_
 	return _vm_vaddr_alloc(vm, sz, vaddr_min, false);
 }
 
+void *guest_code_get_pgt_vaddr(struct guest_pgt_info *gpgt_info,
+		uint64_t pgt_pa)
+{
+	uint64_t num_pgt_pages = gpgt_info->num_pgt_pages;
+	uint64_t pgt_vaddr_start = gpgt_info->pgt_vaddr_start;
+	uint64_t page_size = gpgt_info->page_size;
+
+	for (uint32_t i = 0; i < num_pgt_pages; i++) {
+		if (gpgt_info->paddrs[i] == pgt_pa)
+			return (void *)(pgt_vaddr_start + i * page_size);
+	}
+	return NULL;
+}
+
+vm_vaddr_t vm_setup_pgt_info_buf(struct kvm_vm *vm, vm_vaddr_t vaddr_min)
+{
+	struct pgt_page *pgt_page_entry;
+	struct guest_pgt_info *gpgt_info;
+	uint64_t info_size = sizeof(*gpgt_info) + (sizeof(uint64_t) * vm->num_pgt_pages);
+	uint64_t num_pages = align_up(info_size, vm->page_size);
+	vm_vaddr_t buf_start = vm_vaddr_alloc(vm, num_pages, vaddr_min);
+	uint32_t i = 0;
+
+	gpgt_info = (struct guest_pgt_info *)addr_gva2hva(vm, buf_start);
+	gpgt_info->num_pgt_pages = vm->num_pgt_pages;
+	gpgt_info->pgt_vaddr_start = vm->pgt_vaddr_start;
+	gpgt_info->page_size = vm->page_size;
+	list_for_each_entry(pgt_page_entry, &vm->pgt_pages, list) {
+		gpgt_info->paddrs[i] = pgt_page_entry->paddr;
+		i++;
+	}
+	TEST_ASSERT((i == vm->num_pgt_pages), "pgt entries mismatch with the counter");
+	return buf_start;
+}
+
 /*
  * VM Virtual Address Allocate Pages
  *
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 09d757a0b148..02252cabf9ec 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -217,6 +217,38 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }
 
+uint64_t *guest_code_get_pte(struct guest_pgt_info *gpgt_info, uint64_t vaddr)
+{
+	uint16_t index[4];
+	uint64_t *pml4e, *pdpe, *pde, *pte;
+	uint64_t pgt_paddr = get_cr3();
+	uint64_t page_size = gpgt_info->page_size;
+
+	index[0] = (vaddr >> 12) & 0x1ffu;
+	index[1] = (vaddr >> 21) & 0x1ffu;
+	index[2] = (vaddr >> 30) & 0x1ffu;
+	index[3] = (vaddr >> 39) & 0x1ffu;
+
+	pml4e = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pml4e && (pml4e[index[3]] & PTE_PRESENT_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pml4e[index[3]]) * page_size);
+	pdpe = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pdpe && (pdpe[index[2]] & PTE_PRESENT_MASK) &&
+		!(pdpe[index[2]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pdpe[index[2]]) * page_size);
+	pde = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pde && (pde[index[1]] & PTE_PRESENT_MASK) &&
+		!(pde[index[1]] & PTE_LARGE_MASK));
+
+	pgt_paddr = (PTE_GET_PFN(pde[index[1]]) * page_size);
+	pte = guest_code_get_pgt_vaddr(gpgt_info, pgt_paddr);
+	GUEST_ASSERT(pte && (pte[index[0]] & PTE_PRESENT_MASK));
+
+	return (uint64_t *)&pte[index[0]];
+}
+
 static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm,
 					  struct kvm_vcpu *vcpu,
 					  uint64_t vaddr)
-- 
2.37.2.672.g94769d06f0-goog



* [RFC V2 PATCH 2/8] kvm: Add HVA range operator
@ 2022-08-30 22:42 ` Vishal Annapurve
From: Vishal Annapurve @ 2022-08-30 22:42 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shuah, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, maz, dmatlack, axelrasmussen, maciej.szmigiero,
	mizhang, bgardon, Vishal Annapurve

Introduce an HVA range operator so that other KVM subsystems
can operate on HVA ranges.
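
For illustration, a caller supplies a handler matching kvm_hva_range_op_t and
runs it over an HVA range, roughly as in the sketch below (count_gfns_handler
is a made-up example handler, not part of this patch):

#include <linux/kvm_host.h>

/* Example handler: count the GFNs covered by the HVA range. */
static int count_gfns_handler(struct kvm *kvm, struct kvm_gfn_range *range,
			      void *data)
{
	unsigned long *nr_gfns = data;

	*nr_gfns += range->end - range->start;
	return 0;
}

static int count_gfns_in_hva_range(struct kvm *kvm, unsigned long hva,
				   unsigned long len, unsigned long *nr_gfns)
{
	/* The handler runs for each memslot chunk overlapping [hva, hva + len). */
	return kvm_vm_do_hva_range_op(kvm, hva, hva + len,
				      count_gfns_handler, nr_gfns);
}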

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 include/linux/kvm_host.h |  6 +++++
 virt/kvm/kvm_main.c      | 48 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 4508fa0e8fb6..c860e6d6408d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1398,6 +1398,12 @@ void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void kvm_mmu_updating_begin(struct kvm *kvm, gfn_t start, gfn_t end);
 void kvm_mmu_updating_end(struct kvm *kvm, gfn_t start, gfn_t end);
 
+typedef int (*kvm_hva_range_op_t)(struct kvm *kvm,
+				struct kvm_gfn_range *range, void *data);
+
+int kvm_vm_do_hva_range_op(struct kvm *kvm, unsigned long hva_start,
+		unsigned long hva_end, kvm_hva_range_op_t handler, void *data);
+
 long kvm_arch_dev_ioctl(struct file *filp,
 			unsigned int ioctl, unsigned long arg);
 long kvm_arch_vcpu_ioctl(struct file *filp,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7597949fe031..16cb9ab59143 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -647,6 +647,54 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 	return (int)ret;
 }
 
+int kvm_vm_do_hva_range_op(struct kvm *kvm, unsigned long hva_start,
+		unsigned long hva_end, kvm_hva_range_op_t handler, void *data)
+{
+	int ret = 0;
+	struct kvm_gfn_range gfn_range;
+	struct kvm_memory_slot *slot;
+	struct kvm_memslots *slots;
+	int i, idx;
+
+	if (WARN_ON_ONCE(hva_end <= hva_start))
+		return -EINVAL;
+
+	idx = srcu_read_lock(&kvm->srcu);
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		struct interval_tree_node *node;
+
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot_in_hva_range(node, slots,
+					  hva_start, hva_end - 1) {
+			unsigned long start, end;
+
+			slot = container_of(node, struct kvm_memory_slot,
+				hva_node[slots->node_idx]);
+			start = max(hva_start, slot->userspace_addr);
+			end = min(hva_end, slot->userspace_addr +
+						  (slot->npages << PAGE_SHIFT));
+
+			/*
+			 * {gfn(page) | page intersects with [hva_start, hva_end)} =
+			 * {gfn_start, gfn_start+1, ..., gfn_end-1}.
+			 */
+			gfn_range.start = hva_to_gfn_memslot(start, slot);
+			gfn_range.end = hva_to_gfn_memslot(end + PAGE_SIZE - 1, slot);
+			gfn_range.slot = slot;
+
+			ret = handler(kvm, &gfn_range, data);
+			if (ret)
+				goto e_ret;
+		}
+	}
+
+e_ret:
+	srcu_read_unlock(&kvm->srcu, idx);
+
+	return ret;
+}
+
 static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
 						unsigned long start,
 						unsigned long end,
-- 
2.37.2.672.g94769d06f0-goog



* [RFC V2 PATCH 3/8] arch: x86: sev: Populate private memory fd during LAUNCH_UPDATE_DATA
@ 2022-08-30 22:42 ` Vishal Annapurve
From: Vishal Annapurve @ 2022-08-30 22:42 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shuah, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, maz, dmatlack, axelrasmussen, maciej.szmigiero,
	mizhang, bgardon, Vishal Annapurve

This change adds handling of HVA ranges to copy contents
to private memory during SEV LAUNCH_UPDATE_DATA.

The mem_attr array is updated during LAUNCH_UPDATE_DATA to ensure
that encrypted memory is marked as private.
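
For context, userspace still triggers this path with the regular SEV
LAUNCH_UPDATE_DATA command; a selftest-style sketch (using the
kvm_sev_ioctl() helper from the selftests SEV library) looks like:

#include <linux/kvm.h>
#include <sev.h>

static void sev_encrypt_launch_data(struct sev_vm *sev, void *hva,
				    uint32_t size)
{
	struct kvm_sev_launch_update_data update_data = {
		.uaddr = (uintptr_t)hva,
		.len = size,
	};

	/*
	 * KVM walks the covered HVA range; GFNs backed by a private memslot
	 * take the new private-memory path, the rest take the existing
	 * shared path.
	 */
	kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_UPDATE_DATA, &update_data);
}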

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 arch/x86/kvm/svm/sev.c   | 99 ++++++++++++++++++++++++++++++++++++----
 include/linux/kvm_host.h |  2 +
 virt/kvm/kvm_main.c      | 39 ++++++++++------
 3 files changed, 116 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 309bcdb2f929..673dca318cd4 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -492,23 +492,22 @@ static unsigned long get_num_contig_pages(unsigned long idx,
 	return pages;
 }
 
-static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+int sev_launch_update_shared_gfn_handler(struct kvm *kvm,
+	struct kvm_gfn_range *range, struct kvm_sev_cmd *argp)
 {
 	unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i;
 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
-	struct kvm_sev_launch_update_data params;
 	struct sev_data_launch_update_data data;
 	struct page **inpages;
 	int ret;
 
-	if (!sev_guest(kvm))
-		return -ENOTTY;
-
-	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data, sizeof(params)))
-		return -EFAULT;
+	vaddr = gfn_to_hva_memslot(range->slot, range->start);
+	if (kvm_is_error_hva(vaddr)) {
+		pr_err("vaddr is erroneous 0x%lx\n", vaddr);
+		return -EINVAL;
+	}
 
-	vaddr = params.uaddr;
-	size = params.len;
+	size = (range->end - range->start) << PAGE_SHIFT;
 	vaddr_end = vaddr + size;
 
 	/* Lock the user memory. */
@@ -560,6 +559,88 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return ret;
 }
 
+int sev_launch_update_priv_gfn_handler(struct kvm *kvm,
+	struct kvm_gfn_range *range, struct kvm_sev_cmd *argp)
+{
+	struct sev_data_launch_update_data data;
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	gfn_t gfn;
+	kvm_pfn_t pfn;
+	struct kvm_memory_slot *memslot = range->slot;
+	int ret = 0;
+
+	data.reserved = 0;
+	data.handle = sev->handle;
+
+	for (gfn = range->start; gfn < range->end; gfn++) {
+		int order;
+		void *kvaddr;
+
+		ret = kvm_private_mem_get_pfn(memslot,
+			gfn, &pfn, &order);
+		if (ret)
+			return ret;
+
+		kvaddr = pfn_to_kaddr(pfn);
+		if (!virt_addr_valid(kvaddr)) {
+			pr_err("Invalid kvaddr 0x%lx\n", (uint64_t)kvaddr);
+			ret = -EINVAL;
+			goto e_ret;
+		}
+
+		ret = kvm_read_guest_page(kvm, gfn, kvaddr, 0, PAGE_SIZE);
+		if (ret) {
+			pr_err("guest read failed 0x%lx\n", ret);
+			goto e_ret;
+		}
+
+		if (!this_cpu_has(X86_FEATURE_SME_COHERENT))
+			clflush_cache_range(kvaddr, PAGE_SIZE);
+
+		data.len = PAGE_SIZE;
+		data.address = __sme_set(pfn << PAGE_SHIFT);
+		ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_DATA, &data, &argp->error);
+		if (ret)
+			goto e_ret;
+
+		kvm_private_mem_put_pfn(memslot, pfn);
+	}
+	kvm_vm_set_region_attr(kvm, range->start, range->end,
+		true /* priv_attr */);
+
+	return ret;
+
+e_ret:
+	kvm_private_mem_put_pfn(memslot, pfn);
+	return ret;
+}
+
+int sev_launch_update_gfn_handler(struct kvm *kvm,
+	struct kvm_gfn_range *range, void *data)
+{
+	struct kvm_sev_cmd *argp = (struct kvm_sev_cmd *)data;
+
+	if (kvm_slot_can_be_private(range->slot))
+		return sev_launch_update_priv_gfn_handler(kvm, range, argp);
+
+	return sev_launch_update_shared_gfn_handler(kvm, range, argp);
+}
+
+static int sev_launch_update_data(struct kvm *kvm,
+		struct kvm_sev_cmd *argp)
+{
+	struct kvm_sev_launch_update_data params;
+
+	if (!sev_guest(kvm))
+		return -ENOTTY;
+
+	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data, sizeof(params)))
+		return -EFAULT;
+
+	return kvm_vm_do_hva_range_op(kvm, params.uaddr, params.uaddr + params.len,
+		sev_launch_update_gfn_handler, argp);
+}
+
 static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 {
 	struct sev_es_save_area *save = svm->sev_es.vmsa;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c860e6d6408d..5d0054e957b4 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -980,6 +980,8 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 void kvm_exit(void);
 
 void kvm_get_kvm(struct kvm *kvm);
+int kvm_vm_set_region_attr(struct kvm *kvm, unsigned long gfn_start,
+	unsigned long gfn_end, bool priv_attr);
 bool kvm_get_kvm_safe(struct kvm *kvm);
 void kvm_put_kvm(struct kvm *kvm);
 bool file_is_kvm(struct file *file);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 16cb9ab59143..9463737c2172 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -981,7 +981,7 @@ static int kvm_vm_populate_private_mem(struct kvm *kvm, unsigned long gfn_start,
 	}
 
 	mutex_lock(&kvm->slots_lock);
-	for (gfn = gfn_start; gfn <= gfn_end; gfn++) {
+	for (gfn = gfn_start; gfn < gfn_end; gfn++) {
 		int order;
 		void *kvaddr;
 
@@ -1012,12 +1012,29 @@ static int kvm_vm_populate_private_mem(struct kvm *kvm, unsigned long gfn_start,
 }
 #endif
 
+int kvm_vm_set_region_attr(struct kvm *kvm, unsigned long gfn_start,
+	unsigned long gfn_end, bool priv_attr)
+{
+	int r;
+	void *entry;
+	unsigned long index;
+
+	entry = priv_attr ? xa_mk_value(KVM_MEM_ATTR_PRIVATE) : NULL;
+
+	for (index = gfn_start; index < gfn_end; index++) {
+		r = xa_err(xa_store(&kvm->mem_attr_array, index, entry,
+				GFP_KERNEL_ACCOUNT));
+		if (r)
+			break;
+	}
+
+	return r;
+}
+
 static int kvm_vm_ioctl_set_encrypted_region(struct kvm *kvm, unsigned int ioctl,
 					     struct kvm_enc_region *region)
 {
 	unsigned long start, end;
-	unsigned long index;
-	void *entry;
 	int r;
 
 	if (region->size == 0 || region->addr + region->size < region->addr)
@@ -1026,22 +1043,14 @@ static int kvm_vm_ioctl_set_encrypted_region(struct kvm *kvm, unsigned int ioctl
 		return -EINVAL;
 
 	start = region->addr >> PAGE_SHIFT;
-	end = (region->addr + region->size - 1) >> PAGE_SHIFT;
-
-	entry = ioctl == KVM_MEMORY_ENCRYPT_REG_REGION ?
-				xa_mk_value(KVM_MEM_ATTR_PRIVATE) : NULL;
-
-	for (index = start; index <= end; index++) {
-		r = xa_err(xa_store(&kvm->mem_attr_array, index, entry,
-				GFP_KERNEL_ACCOUNT));
-		if (r)
-			break;
-	}
+	end = (region->addr + region->size) >> PAGE_SHIFT;
+	r = kvm_vm_set_region_attr(kvm, start, end,
+		(ioctl == KVM_MEMORY_ENCRYPT_REG_REGION));
 
 	kvm_zap_gfn_range(kvm, start, end + 1);
 
 #ifdef CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING
-	if (!kvm->vm_entry_attempted && (ioctl == KVM_MEMORY_ENCRYPT_REG_REGION))
+	if (!r && !kvm->vm_entry_attempted && (ioctl == KVM_MEMORY_ENCRYPT_REG_REGION))
 		r = kvm_vm_populate_private_mem(kvm, start, end);
 #endif
 
-- 
2.37.2.672.g94769d06f0-goog



* [RFC V2 PATCH 4/8] selftests: kvm: sev: Support memslots with private memory
@ 2022-08-30 22:42 ` Vishal Annapurve
From: Vishal Annapurve @ 2022-08-30 22:42 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shuah, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, maz, dmatlack, axelrasmussen, maciej.szmigiero,
	mizhang, bgardon, Vishal Annapurve

Introduce an additional helper API to create an SEV VM with private
memory memslots.
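
For illustration, a caller creates an SEV VM backed by private memory along
these lines (sketch; SEV_POLICY_NO_DBG is just one example policy):

#include <sev.h>

static struct sev_vm *create_private_sev_vm(uint64_t npages)
{
	/*
	 * With KVM_MEM_PRIVATE, the region is not registered via
	 * sev_register_user_region(); it is instead populated through the
	 * private memory fd at launch time.
	 */
	return sev_vm_create_with_flags(SEV_POLICY_NO_DBG, npages,
					KVM_MEM_PRIVATE);
}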

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/include/x86_64/sev.h |  2 ++
 tools/testing/selftests/kvm/lib/x86_64/sev.c     | 15 ++++++++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/sev.h b/tools/testing/selftests/kvm/include/x86_64/sev.h
index b6552ea1c716..628801707917 100644
--- a/tools/testing/selftests/kvm/include/x86_64/sev.h
+++ b/tools/testing/selftests/kvm/include/x86_64/sev.h
@@ -38,6 +38,8 @@ void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data);
 struct kvm_vm *sev_get_vm(struct sev_vm *sev);
 uint8_t sev_get_enc_bit(struct sev_vm *sev);
 
+struct sev_vm *sev_vm_create_with_flags(uint32_t policy, uint64_t npages,
+	uint32_t memslot_flags);
 struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages);
 void sev_vm_free(struct sev_vm *sev);
 void sev_vm_launch(struct sev_vm *sev);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
index 44b5ce5cd8db..6a329ea17f9f 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
@@ -171,7 +171,8 @@ void sev_vm_free(struct sev_vm *sev)
 	free(sev);
 }
 
-struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
+struct sev_vm *sev_vm_create_with_flags(uint32_t policy, uint64_t npages,
+	uint32_t memslot_flags)
 {
 	struct sev_vm *sev;
 	struct kvm_vm *vm;
@@ -188,9 +189,12 @@ struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
 	vm->vpages_mapped = sparsebit_alloc();
 	vm_set_memory_encryption(vm, true, true, sev->enc_bit);
 	pr_info("SEV cbit: %d\n", sev->enc_bit);
-	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages, 0);
-	sev_register_user_region(sev, addr_gpa2hva(vm, 0),
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, npages,
+		memslot_flags);
+	if (!(memslot_flags & KVM_MEM_PRIVATE)) {
+		sev_register_user_region(sev, addr_gpa2hva(vm, 0),
 				 npages * vm->page_size);
+	}
 
 	pr_info("SEV guest created, policy: 0x%x, size: %lu KB\n",
 		sev->sev_policy, npages * vm->page_size / 1024);
@@ -198,6 +202,11 @@ struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
 	return sev;
 }
 
+struct sev_vm *sev_vm_create(uint32_t policy, uint64_t npages)
+{
+	return sev_vm_create_with_flags(policy, npages, 0);
+}
+
 void sev_vm_launch(struct sev_vm *sev)
 {
 	struct kvm_sev_launch_start ksev_launch_start = {0};
-- 
2.37.2.672.g94769d06f0-goog



* [RFC V2 PATCH 5/8] selftests: kvm: Update usage of private mem lib for SEV VMs
@ 2022-08-30 22:42 ` Vishal Annapurve
From: Vishal Annapurve @ 2022-08-30 22:42 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shuah, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, maz, dmatlack, axelrasmussen, maciej.szmigiero,
	mizhang, bgardon, Vishal Annapurve

Add/update APIs to allow reusing the private mem library for SEV VMs.
Memory conversion for SEV VMs includes updating guest pagetables, based
on guest virtual addresses, to toggle the C-bit.
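
For illustration, guest code now passes both the gva and the gpa of the range
being converted; with the identity-mapped selftest layout the two are equal
(sketch):

#include <private_mem.h>

static void guest_share_range(void *mem, uint64_t size)
{
	/* gva == gpa for identity-mapped selftest guests. */
	guest_update_mem_map(TO_SHARED, (uint64_t)mem, (uint64_t)mem, size);
}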

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 .../kvm/include/x86_64/private_mem.h          |   9 +-
 .../selftests/kvm/lib/x86_64/private_mem.c    | 103 +++++++++++++-----
 2 files changed, 83 insertions(+), 29 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem.h b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
index 645bf3f61d1e..183b53b8c486 100644
--- a/tools/testing/selftests/kvm/include/x86_64/private_mem.h
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
@@ -14,10 +14,10 @@ enum mem_conversion_type {
 	TO_SHARED
 };
 
-void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size);
-void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size);
+void guest_update_mem_access(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size);
+void guest_update_mem_map(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size);
 
 void guest_map_ucall_page_shared(void);
 
@@ -45,6 +45,7 @@ struct vm_setup_info {
 	struct test_setup_info test_info;
 	guest_code_fn guest_fn;
 	io_exit_handler ioexit_cb;
+	uint32_t policy; /* Used for Sev VMs */
 };
 
 void execute_vm_with_private_mem(struct vm_setup_info *info);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
index f6dcfa4d353f..28d93754e1f2 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
@@ -22,12 +22,45 @@
 #include <kvm_util.h>
 #include <private_mem.h>
 #include <processor.h>
+#include <sev.h>
+
+#define GUEST_PGT_MIN_VADDR	0x10000
+
+/* Variables populated by userspace logic and consumed by guest code */
+static bool is_sev_vm;
+static struct guest_pgt_info *sev_gpgt_info;
+static uint8_t sev_enc_bit;
+
+static void sev_guest_set_clr_pte_bit(uint64_t vaddr_start, uint64_t mem_size,
+	bool set)
+{
+	uint64_t vaddr = vaddr_start;
+	uint32_t guest_page_size = sev_gpgt_info->page_size;
+	uint32_t num_pages;
+
+	GUEST_ASSERT(!(mem_size % guest_page_size) && !(vaddr_start %
+		guest_page_size));
+
+	num_pages = mem_size / guest_page_size;
+	for (uint32_t i = 0; i < num_pages; i++) {
+		uint64_t *pte = guest_code_get_pte(sev_gpgt_info, vaddr);
+
+		GUEST_ASSERT(pte);
+		if (set)
+			*pte |= (1ULL << sev_enc_bit);
+		else
+			*pte &= ~(1ULL << sev_enc_bit);
+		asm volatile("invlpg (%0)" :: "r"(vaddr) : "memory");
+		vaddr += guest_page_size;
+	}
+}
 
 /*
  * Execute KVM hypercall to change memory access type for a given gpa range.
  *
  * Input Args:
  *   type - memory conversion type TO_SHARED/TO_PRIVATE
+ *   gva - starting gva address
  *   gpa - starting gpa address
  *   size - size of the range starting from gpa for which memory access needs
  *     to be changed
@@ -40,9 +73,12 @@
  * for a given gpa range. This API is useful in exercising implicit conversion
  * path.
  */
-void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size)
+void guest_update_mem_access(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size)
 {
+	if (is_sev_vm)
+		sev_guest_set_clr_pte_bit(gva, size, type == TO_PRIVATE ? true : false);
+
 	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
 		type == TO_PRIVATE ? KVM_MARK_GPA_RANGE_ENC_ACCESS :
 			KVM_CLR_GPA_RANGE_ENC_ACCESS, 0);
@@ -54,6 +90,7 @@ void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
  *
  * Input Args:
  *   type - memory conversion type TO_SHARED/TO_PRIVATE
+ *   gva - starting gva address
  *   gpa - starting gpa address
  *   size - size of the range starting from gpa for which memory type needs
  *     to be changed
@@ -65,9 +102,12 @@ void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
  * Function called by guest logic in selftests to update the memory type for a
  * given gpa range. This API is useful in exercising explicit conversion path.
  */
-void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
-	uint64_t size)
+void guest_update_mem_map(enum mem_conversion_type type, uint64_t gva,
+	uint64_t gpa, uint64_t size)
 {
+	if (is_sev_vm)
+		sev_guest_set_clr_pte_bit(gva, size, type == TO_PRIVATE ? true : false);
+
 	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
 		type == TO_PRIVATE ? KVM_MAP_GPA_RANGE_ENCRYPTED :
 			KVM_MAP_GPA_RANGE_DECRYPTED, 0);
@@ -90,30 +130,15 @@ void guest_update_mem_map(enum mem_conversion_type type, uint64_t gpa,
 void guest_map_ucall_page_shared(void)
 {
 	vm_paddr_t ucall_paddr = get_ucall_pool_paddr();
+	GUEST_ASSERT(ucall_paddr);
 
-	guest_update_mem_access(TO_SHARED, ucall_paddr, 1 << MIN_PAGE_SHIFT);
+	int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, ucall_paddr, 1,
+		KVM_MAP_GPA_RANGE_DECRYPTED, 0);
+	GUEST_ASSERT_1(!ret, ret);
 }
 
-/*
- * Execute KVM ioctl to back/unback private memory for given gpa range.
- *
- * Input Args:
- *   vm - kvm_vm handle
- *   gpa - starting gpa address
- *   size - size of the gpa range
- *   op - mem_op indicating whether private memory needs to be allocated or
- *     unbacked
- *
- * Output Args: None
- *
- * Return: None
- *
- * Function called by host userspace logic in selftests to back/unback private
- * memory for gpa ranges. This function is useful to setup initial boot private
- * memory and then convert memory during runtime.
- */
-void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
-	enum mem_op op)
+static void vm_update_private_mem_internal(struct kvm_vm *vm, uint64_t gpa,
+	uint64_t size, enum mem_op op, bool encrypt)
 {
 	int priv_memfd;
 	uint64_t priv_offset, guest_phys_base, fd_offset;
@@ -142,6 +167,10 @@ void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
 	TEST_ASSERT(ret == 0, "fallocate failed\n");
 	enc_region.addr = gpa;
 	enc_region.size = size;
+
+	if (!encrypt)
+		return;
+
 	if (op == ALLOCATE_MEM) {
 		printf("doing encryption for gpa 0x%lx size 0x%lx\n", gpa, size);
 		vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &enc_region);
@@ -151,6 +180,30 @@ void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
 	}
 }
 
+/*
+ * Execute KVM ioctl to back/unback private memory for given gpa range.
+ *
+ * Input Args:
+ *   vm - kvm_vm handle
+ *   gpa - starting gpa address
+ *   size - size of the gpa range
+ *   op - mem_op indicating whether private memory needs to be allocated or
+ *     unbacked
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by host userspace logic in selftests to back/unback private
+ * memory for gpa ranges. This function is useful to setup initial boot private
+ * memory and then convert memory during runtime.
+ */
+void vm_update_private_mem(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+	enum mem_op op)
+{
+	vm_update_private_mem_internal(vm, gpa, size, op, true /* encrypt */);
+}
+
 static void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm,
 				volatile struct kvm_run *run)
 {
-- 
2.37.2.672.g94769d06f0-goog



* [RFC V2 PATCH 6/8] selftests: kvm: Support executing SEV VMs with private memory
@ 2022-08-30 22:42 ` Vishal Annapurve
From: Vishal Annapurve @ 2022-08-30 22:42 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shuah, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, maz, dmatlack, axelrasmussen, maciej.szmigiero,
	mizhang, bgardon, Vishal Annapurve

Add support for executing SEV VMs to test private memory conversion
scenarios.
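
For illustration, a test selects the SEV path by filling struct vm_setup_info
and calling the new helper, roughly as in the sketch below (mirroring the
helper added later in this series):

#include <private_mem.h>
#include <sev.h>

static void run_sev_conversion_test(guest_code_fn guest_fn,
				    io_exit_handler ioexit_cb)
{
	struct vm_setup_info info = {
		.vm_mem_src = VM_MEM_SRC_ANONYMOUS,
		.memslot0_pages = 512 * 10,
		.policy = SEV_POLICY_NO_DBG,
		.guest_fn = guest_fn,
		.ioexit_cb = ioexit_cb,
	};

	/* Test memslot that the guest converts between private and shared. */
	info.test_info.test_area_gpa = 0xC0000000;
	info.test_info.test_area_size = 2 * 1024 * 1024;
	info.test_info.test_area_slot = 10;
	info.test_info.test_area_mem_src = VM_MEM_SRC_ANONYMOUS;

	execute_sev_vm_with_private_mem(&info);
}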

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 .../kvm/include/x86_64/private_mem.h          |  1 +
 .../selftests/kvm/lib/x86_64/private_mem.c    | 86 +++++++++++++++++++
 2 files changed, 87 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem.h b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
index 183b53b8c486..d3ef88da837c 100644
--- a/tools/testing/selftests/kvm/include/x86_64/private_mem.h
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem.h
@@ -49,5 +49,6 @@ struct vm_setup_info {
 };
 
 void execute_vm_with_private_mem(struct vm_setup_info *info);
+void execute_sev_vm_with_private_mem(struct vm_setup_info *info);
 
 #endif /* SELFTEST_KVM_PRIVATE_MEM_H */
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
index 28d93754e1f2..0eb8f92d19e8 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem.c
@@ -348,3 +348,89 @@ void execute_vm_with_private_mem(struct vm_setup_info *info)
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
 }
+
+/*
+ * Execute Sev vm with private memory memslots.
+ *
+ * Input Args:
+ *   info - pointer to a structure containing information about setting up a SEV
+ *   VM with private memslots
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Function called by host userspace logic in selftests to execute SEV vm
+ * logic. It will install two memslots:
+ * 1) memslot 0 : containing all the boot code/stack pages
+ * 2) test_mem_slot : containing the region of memory that would be used to test
+ *   private/shared memory accesses to a memory backed by private memslots
+ */
+void execute_sev_vm_with_private_mem(struct vm_setup_info *info)
+{
+	uint8_t measurement[512];
+	struct sev_vm *sev;
+	struct kvm_vm *vm;
+	struct kvm_enable_cap cap;
+	struct kvm_vcpu *vcpu;
+	uint32_t memslot0_pages = info->memslot0_pages;
+	uint64_t test_area_gpa, test_area_size;
+	struct test_setup_info *test_info = &info->test_info;
+
+	sev = sev_vm_create_with_flags(info->policy, memslot0_pages, KVM_MEM_PRIVATE);
+	TEST_ASSERT(sev, "Sev VM creation failed");
+	vm = sev_get_vm(sev);
+	vm->use_ucall_pool = true;
+	vm_set_pgt_alloc_tracking(vm);
+	vm_create_irqchip(vm);
+
+	TEST_ASSERT(info->guest_fn, "guest_fn not present");
+	vcpu = vm_vcpu_add(vm, 0, info->guest_fn);
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	vm_check_cap(vm, KVM_CAP_EXIT_HYPERCALL);
+	cap.cap = KVM_CAP_EXIT_HYPERCALL;
+	cap.flags = 0;
+	cap.args[0] = (1 << KVM_HC_MAP_GPA_RANGE);
+	vm_ioctl(vm, KVM_ENABLE_CAP, &cap);
+
+	TEST_ASSERT(test_info->test_area_size, "Test mem size not present");
+
+	test_area_size = test_info->test_area_size;
+	test_area_gpa = test_info->test_area_gpa;
+	vm_userspace_mem_region_add(vm, test_info->test_area_mem_src, test_area_gpa,
+		test_info->test_area_slot, test_area_size / vm->page_size,
+		KVM_MEM_PRIVATE);
+	vm_update_private_mem(vm, test_area_gpa, test_area_size, ALLOCATE_MEM);
+
+	virt_map(vm, test_area_gpa, test_area_gpa, test_area_size/vm->page_size);
+
+	vm_map_page_table(vm, GUEST_PGT_MIN_VADDR);
+	sev_gpgt_info = (struct guest_pgt_info *)vm_setup_pgt_info_buf(vm,
+			GUEST_PGT_MIN_VADDR);
+	sev_enc_bit = sev_get_enc_bit(sev);
+	is_sev_vm = true;
+	sync_global_to_guest(vm, sev_enc_bit);
+	sync_global_to_guest(vm, sev_gpgt_info);
+	sync_global_to_guest(vm, is_sev_vm);
+
+	vm_update_private_mem_internal(vm, 0, (memslot0_pages << MIN_PAGE_SHIFT),
+		ALLOCATE_MEM, false);
+
+	/* Allocations/setup done. Encrypt initial guest payload. */
+	sev_vm_launch(sev);
+
+	/* Dump the initial measurement. A test to actually verify it would be nice. */
+	sev_vm_launch_measure(sev, measurement);
+	pr_info("guest measurement: ");
+	for (uint32_t i = 0; i < 32; ++i)
+		pr_info("%02x", measurement[i]);
+	pr_info("\n");
+
+	sev_vm_launch_finish(sev);
+
+	vcpu_work(vm, vcpu, info);
+
+	sev_vm_free(sev);
+	is_sev_vm = false;
+}
-- 
2.37.2.672.g94769d06f0-goog



* [RFC V2 PATCH 7/8] selftests: kvm: Refactor testing logic for private memory
@ 2022-08-30 22:42 ` Vishal Annapurve
From: Vishal Annapurve @ 2022-08-30 22:42 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shuah, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, maz, dmatlack, axelrasmussen, maciej.szmigiero,
	mizhang, bgardon, Vishal Annapurve

Move all of the logic to execute memory conversion tests into a library
so that it can be shared between normal non-confidential VMs and SEV VMs.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../include/x86_64/private_mem_test_helper.h  |  13 +
 .../kvm/lib/x86_64/private_mem_test_helper.c  | 273 ++++++++++++++++++
 .../selftests/kvm/x86_64/private_mem_test.c   | 246 +---------------
 4 files changed, 289 insertions(+), 244 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c5fc8ea2c843..36874fedff4a 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -52,6 +52,7 @@ LIBKVM_x86_64 += lib/x86_64/apic.c
 LIBKVM_x86_64 += lib/x86_64/handlers.S
 LIBKVM_x86_64 += lib/x86_64/perf_test_util.c
 LIBKVM_x86_64 += lib/x86_64/private_mem.c
+LIBKVM_x86_64 += lib/x86_64/private_mem_test_helper.c
 LIBKVM_x86_64 += lib/x86_64/processor.c
 LIBKVM_x86_64 += lib/x86_64/svm.c
 LIBKVM_x86_64 += lib/x86_64/ucall.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
new file mode 100644
index 000000000000..31bc559cd813
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/private_mem_test_helper.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+
+#ifndef SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
+#define SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
+
+void execute_memory_conversion_tests(void);
+
+void execute_sev_memory_conversion_tests(void);
+
+#endif  // SELFTEST_KVM_PRIVATE_MEM_TEST_HELPER_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
new file mode 100644
index 000000000000..ce53bef7896e
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/private_mem_test_helper.c
@@ -0,0 +1,273 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <private_mem.h>
+#include <private_mem_test_helper.h>
+#include <processor.h>
+#include <sev.h>
+
+#define VM_MEMSLOT0_PAGES	(512 * 10)
+
+#define TEST_AREA_SLOT		10
+#define TEST_AREA_GPA		0xC0000000
+#define TEST_AREA_SIZE		(2 * 1024 * 1024)
+#define GUEST_TEST_MEM_OFFSET	(1 * 1024 * 1024)
+#define GUEST_TEST_MEM_SIZE	(10 * 4096)
+
+#define VM_STAGE_PROCESSED(x)	pr_info("Processed stage %s\n", #x)
+
+#define TEST_MEM_DATA_PAT1	0x66
+#define TEST_MEM_DATA_PAT2	0x99
+#define TEST_MEM_DATA_PAT3	0x33
+#define TEST_MEM_DATA_PAT4	0xaa
+#define TEST_MEM_DATA_PAT5	0x12
+
+static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pat)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++) {
+		if (buf[i] != pat)
+			return false;
+	}
+
+	return true;
+}
+
+/*
+ * Add custom implementation for memset to avoid using standard/builtin memset
+ * which may use features like SSE/GOT that don't work with guest vm execution
+ * within selftests.
+ */
+void *memset(void *mem, int byte, size_t size)
+{
+	uint8_t *buf = (uint8_t *)mem;
+
+	for (uint32_t i = 0; i < size; i++)
+		buf[i] = byte;
+
+	return buf;
+}
+
+static void populate_test_area(void *test_area_base, uint64_t pat)
+{
+	memset(test_area_base, pat, TEST_AREA_SIZE);
+}
+
+static void populate_guest_test_mem(void *guest_test_mem, uint64_t pat)
+{
+	memset(guest_test_mem, pat, GUEST_TEST_MEM_SIZE);
+}
+
+static bool verify_test_area(void *test_area_base, uint64_t area_pat,
+	uint64_t guest_pat)
+{
+	void *test_area1_base = test_area_base;
+	uint64_t test_area1_size = GUEST_TEST_MEM_OFFSET;
+	void *guest_test_mem = test_area_base + test_area1_size;
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+	void *test_area2_base = guest_test_mem + guest_test_size;
+	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
+			GUEST_TEST_MEM_SIZE));
+
+	return (verify_mem_contents(test_area1_base, test_area1_size, area_pat) &&
+		verify_mem_contents(guest_test_mem, guest_test_size, guest_pat) &&
+		verify_mem_contents(test_area2_base, test_area2_size, area_pat));
+}
+
+#define GUEST_STARTED			0
+#define GUEST_PRIVATE_MEM_POPULATED	1
+#define GUEST_SHARED_MEM_POPULATED	2
+#define GUEST_PRIVATE_MEM_POPULATED2	3
+#define GUEST_IMPLICIT_MEM_CONV1	4
+#define GUEST_IMPLICIT_MEM_CONV2	5
+
+/*
+ * Run memory conversion tests supporting two types of conversion:
+ * 1) Explicit: Execute KVM hypercall to map/unmap gpa range which will cause
+ *   userspace exit to back/unback private memory. Subsequent accesses by guest
+ *   to the gpa range will not cause exit to userspace.
+ * 2) Implicit: Execute KVM hypercall to update memory access to a gpa range as
+ *   private/shared without exiting to userspace. Subsequent accesses by guest
+ *   to the gpa range will result in KVM EPT/NPT faults and then exit to
+ *   userspace for each page.
+ *
+ * Test memory conversion scenarios with following steps:
+ * 1) Access private memory using private access and verify that memory contents
+ *   are not visible to userspace.
+ * 2) Convert memory to shared using explicit/implicit conversions and ensure
+ *   that userspace is able to access the shared regions.
+ * 3) Convert memory back to private using explicit/implicit conversions and
+ *   ensure that userspace is again not able to access converted private
+ *   regions.
+ */
+static void guest_conv_test_fn(bool test_explicit_conv)
+{
+	void *test_area_base = (void *)TEST_AREA_GPA;
+	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	guest_map_ucall_page_shared();
+	GUEST_SYNC(GUEST_STARTED);
+
+	populate_test_area(test_area_base, TEST_MEM_DATA_PAT1);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
+		TEST_MEM_DATA_PAT1));
+
+	if (test_explicit_conv)
+		guest_update_mem_map(TO_SHARED, (uint64_t)guest_test_mem,
+			(uint64_t)guest_test_mem, guest_test_size);
+	else {
+		guest_update_mem_access(TO_SHARED, (uint64_t)guest_test_mem,
+			(uint64_t)guest_test_mem, guest_test_size);
+		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV1);
+	}
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT2);
+
+	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
+		TEST_MEM_DATA_PAT5));
+
+	if (test_explicit_conv)
+		guest_update_mem_map(TO_PRIVATE, (uint64_t)guest_test_mem,
+			(uint64_t)guest_test_mem, guest_test_size);
+	else {
+		guest_update_mem_access(TO_PRIVATE, (uint64_t)guest_test_mem,
+			(uint64_t)guest_test_mem, guest_test_size);
+		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV2);
+	}
+
+	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT3);
+	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
+
+	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
+		TEST_MEM_DATA_PAT3));
+	GUEST_DONE();
+}
+
+static void conv_test_ioexit_fn(struct kvm_vm *vm, uint32_t uc_arg1)
+{
+	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
+	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_mem_gpa = (TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
+	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
+
+	switch (uc_arg1) {
+	case GUEST_STARTED:
+		populate_test_area(test_area_hva, TEST_MEM_DATA_PAT4);
+		VM_STAGE_PROCESSED(GUEST_STARTED);
+		break;
+	case GUEST_PRIVATE_MEM_POPULATED:
+		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
+				TEST_MEM_DATA_PAT4), "failed");
+		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
+		break;
+	case GUEST_SHARED_MEM_POPULATED:
+		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
+				TEST_MEM_DATA_PAT2), "failed");
+		populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PAT5);
+		VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
+		break;
+	case GUEST_PRIVATE_MEM_POPULATED2:
+		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
+				TEST_MEM_DATA_PAT5), "failed");
+		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
+		break;
+	case GUEST_IMPLICIT_MEM_CONV1:
+		/*
+		 * For first implicit conversion, memory is already private so
+		 * mark it private again just to zap the pte entries for the gpa
+		 * range, so that subsequent accesses from the guest will
+		 * generate ept/npt fault and memory conversion path will be
+		 * exercised by KVM.
+		 */
+		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
+				ALLOCATE_MEM);
+		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV1);
+		break;
+	case GUEST_IMPLICIT_MEM_CONV2:
+		/*
+		 * For second implicit conversion, memory is already shared so
+		 * mark it shared again just to zap the pte entries for the gpa
+		 * range, so that subsequent accesses from the guest will
+		 * generate ept/npt fault and memory conversion path will be
+		 * exercised by KVM.
+		 */
+		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
+				UNBACK_MEM);
+		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV2);
+		break;
+	default:
+		TEST_FAIL("Unknown stage %d\n", uc_arg1);
+		break;
+	}
+}
+
+static void guest_explicit_conv_test_fn(void)
+{
+	guest_conv_test_fn(true);
+}
+
+static void guest_implicit_conv_test_fn(void)
+{
+	guest_conv_test_fn(false);
+}
+
+/*
+ * Execute implicit and explicit memory conversion tests with non-confidential
+ * VMs using memslots with private memory.
+ */
+void execute_memory_conversion_tests(void)
+{
+	struct vm_setup_info info;
+	struct test_setup_info *test_info = &info.test_info;
+
+	info.vm_mem_src = VM_MEM_SRC_ANONYMOUS;
+	info.memslot0_pages = VM_MEMSLOT0_PAGES;
+	test_info->test_area_gpa = TEST_AREA_GPA;
+	test_info->test_area_size = TEST_AREA_SIZE;
+	test_info->test_area_slot = TEST_AREA_SLOT;
+	test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS;
+	info.ioexit_cb = conv_test_ioexit_fn;
+
+	info.guest_fn = guest_explicit_conv_test_fn;
+	execute_vm_with_private_mem(&info);
+
+	info.guest_fn = guest_implicit_conv_test_fn;
+	execute_vm_with_private_mem(&info);
+}
+
+/*
+ * Execute implicit and explicit memory conversion tests with SEV VMs using
+ * memslots with private memory.
+ */
+void execute_sev_memory_conversion_tests(void)
+{
+	struct vm_setup_info info;
+	struct test_setup_info *test_info = &info.test_info;
+
+	info.vm_mem_src = VM_MEM_SRC_ANONYMOUS;
+	info.memslot0_pages = VM_MEMSLOT0_PAGES;
+	test_info->test_area_gpa = TEST_AREA_GPA;
+	test_info->test_area_size = TEST_AREA_SIZE;
+	test_info->test_area_slot = TEST_AREA_SLOT;
+	test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS;
+	info.ioexit_cb = conv_test_ioexit_fn;
+
+	info.policy = SEV_POLICY_NO_DBG;
+	info.guest_fn = guest_explicit_conv_test_fn;
+	execute_sev_vm_with_private_mem(&info);
+
+	info.guest_fn = guest_implicit_conv_test_fn;
+	execute_sev_vm_with_private_mem(&info);
+}
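
A note on the two launchers above: vm_setup_info is stack-allocated and only
the fields shown get set, so any member the private mem library consults
beyond these would be read uninitialized. A designated-initializer form would
avoid that; below is a minimal sketch using only the fields visible in this
patch (whether vm_setup_info has additional members is an assumption here):

	static void execute_memory_conversion_tests(void)
	{
		struct vm_setup_info info = {
			.vm_mem_src = VM_MEM_SRC_ANONYMOUS,
			.memslot0_pages = VM_MEMSLOT0_PAGES,
			.ioexit_cb = conv_test_ioexit_fn,
			.guest_fn = guest_explicit_conv_test_fn,
			.test_info = {
				.test_area_gpa = TEST_AREA_GPA,
				.test_area_size = TEST_AREA_SIZE,
				.test_area_slot = TEST_AREA_SLOT,
				.test_area_mem_src = VM_MEM_SRC_ANONYMOUS,
			},
		};

		/* Members not listed (e.g. policy, used only for SEV) stay zeroed. */
		execute_vm_with_private_mem(&info);

		info.guest_fn = guest_implicit_conv_test_fn;
		execute_vm_with_private_mem(&info);
	}
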
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
index 52430b97bd0b..49da626e5807 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_test.c
@@ -1,263 +1,21 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * tools/testing/selftests/kvm/lib/kvm_util.c
- *
  * Copyright (C) 2022, Google LLC.
  */
 #define _GNU_SOURCE /* for program_invocation_short_name */
-#include <fcntl.h>
-#include <limits.h>
-#include <sched.h>
-#include <signal.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
-#include <sys/ioctl.h>
-
-#include <linux/compiler.h>
-#include <linux/kernel.h>
-#include <linux/kvm_para.h>
-#include <linux/memfd.h>
 
 #include <test_util.h>
 #include <kvm_util.h>
-#include <private_mem.h>
-#include <processor.h>
-
-#define VM_MEMSLOT0_PAGES	(512 * 10)
-
-#define TEST_AREA_SLOT		10
-#define TEST_AREA_GPA		0xC0000000
-#define TEST_AREA_SIZE		(2 * 1024 * 1024)
-#define GUEST_TEST_MEM_OFFSET	(1 * 1024 * 1024)
-#define GUEST_TEST_MEM_SIZE	(10 * 4096)
-
-#define VM_STAGE_PROCESSED(x)	pr_info("Processed stage %s\n", #x)
-
-#define TEST_MEM_DATA_PAT1	0x66
-#define TEST_MEM_DATA_PAT2	0x99
-#define TEST_MEM_DATA_PAT3	0x33
-#define TEST_MEM_DATA_PAT4	0xaa
-#define TEST_MEM_DATA_PAT5	0x12
-
-static bool verify_mem_contents(void *mem, uint32_t size, uint8_t pat)
-{
-	uint8_t *buf = (uint8_t *)mem;
-
-	for (uint32_t i = 0; i < size; i++) {
-		if (buf[i] != pat)
-			return false;
-	}
-
-	return true;
-}
-
-/*
- * Add custom implementation for memset to avoid using standard/builtin memset
- * which may use features like SSE/GOT that don't work with guest vm execution
- * within selftests.
- */
-void *memset(void *mem, int byte, size_t size)
-{
-	uint8_t *buf = (uint8_t *)mem;
-
-	for (uint32_t i = 0; i < size; i++)
-		buf[i] = byte;
-
-	return buf;
-}
-
-static void populate_test_area(void *test_area_base, uint64_t pat)
-{
-	memset(test_area_base, pat, TEST_AREA_SIZE);
-}
-
-static void populate_guest_test_mem(void *guest_test_mem, uint64_t pat)
-{
-	memset(guest_test_mem, pat, GUEST_TEST_MEM_SIZE);
-}
-
-static bool verify_test_area(void *test_area_base, uint64_t area_pat,
-	uint64_t guest_pat)
-{
-	void *test_area1_base = test_area_base;
-	uint64_t test_area1_size = GUEST_TEST_MEM_OFFSET;
-	void *guest_test_mem = test_area_base + test_area1_size;
-	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
-	void *test_area2_base = guest_test_mem + guest_test_size;
-	uint64_t test_area2_size = (TEST_AREA_SIZE - (GUEST_TEST_MEM_OFFSET +
-			GUEST_TEST_MEM_SIZE));
-
-	return (verify_mem_contents(test_area1_base, test_area1_size, area_pat) &&
-		verify_mem_contents(guest_test_mem, guest_test_size, guest_pat) &&
-		verify_mem_contents(test_area2_base, test_area2_size, area_pat));
-}
-
-#define GUEST_STARTED			0
-#define GUEST_PRIVATE_MEM_POPULATED	1
-#define GUEST_SHARED_MEM_POPULATED	2
-#define GUEST_PRIVATE_MEM_POPULATED2	3
-#define GUEST_IMPLICIT_MEM_CONV1	4
-#define GUEST_IMPLICIT_MEM_CONV2	5
-
-/*
- * Run memory conversion tests supporting two types of conversion:
- * 1) Explicit: Execute KVM hypercall to map/unmap gpa range which will cause
- *   userspace exit to back/unback private memory. Subsequent accesses by guest
- *   to the gpa range will not cause exit to userspace.
- * 2) Implicit: Execute KVM hypercall to update memory access to a gpa range as
- *   private/shared without exiting to userspace. Subsequent accesses by guest
- *   to the gpa range will result in KVM EPT/NPT faults and then exit to
- *   userspace for each page.
- *
- * Test memory conversion scenarios with following steps:
- * 1) Access private memory using private access and verify that memory contents
- *   are not visible to userspace.
- * 2) Convert memory to shared using explicit/implicit conversions and ensure
- *   that userspace is able to access the shared regions.
- * 3) Convert memory back to private using explicit/implicit conversions and
- *   ensure that userspace is again not able to access converted private
- *   regions.
- */
-static void guest_conv_test_fn(bool test_explicit_conv)
-{
-	void *test_area_base = (void *)TEST_AREA_GPA;
-	void *guest_test_mem = (void *)(TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
-	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
-
-	guest_map_ucall_page_shared();
-	GUEST_SYNC(GUEST_STARTED);
-
-	populate_test_area(test_area_base, TEST_MEM_DATA_PAT1);
-	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED);
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
-		TEST_MEM_DATA_PAT1));
-
-	if (test_explicit_conv)
-		guest_update_mem_map(TO_SHARED, (uint64_t)guest_test_mem,
-			guest_test_size);
-	else {
-		guest_update_mem_access(TO_SHARED, (uint64_t)guest_test_mem,
-			guest_test_size);
-		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV1);
-	}
-
-	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT2);
-
-	GUEST_SYNC(GUEST_SHARED_MEM_POPULATED);
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
-		TEST_MEM_DATA_PAT5));
-
-	if (test_explicit_conv)
-		guest_update_mem_map(TO_PRIVATE, (uint64_t)guest_test_mem,
-			guest_test_size);
-	else {
-		guest_update_mem_access(TO_PRIVATE, (uint64_t)guest_test_mem,
-			guest_test_size);
-		GUEST_SYNC(GUEST_IMPLICIT_MEM_CONV2);
-	}
-
-	populate_guest_test_mem(guest_test_mem, TEST_MEM_DATA_PAT3);
-	GUEST_SYNC(GUEST_PRIVATE_MEM_POPULATED2);
-
-	GUEST_ASSERT(verify_test_area(test_area_base, TEST_MEM_DATA_PAT1,
-		TEST_MEM_DATA_PAT3));
-	GUEST_DONE();
-}
-
-static void conv_test_ioexit_fn(struct kvm_vm *vm, uint32_t uc_arg1)
-{
-	void *test_area_hva = addr_gpa2hva(vm, TEST_AREA_GPA);
-	void *guest_test_mem_hva = (test_area_hva + GUEST_TEST_MEM_OFFSET);
-	uint64_t guest_mem_gpa = (TEST_AREA_GPA + GUEST_TEST_MEM_OFFSET);
-	uint64_t guest_test_size = GUEST_TEST_MEM_SIZE;
-
-	switch (uc_arg1) {
-	case GUEST_STARTED:
-		populate_test_area(test_area_hva, TEST_MEM_DATA_PAT4);
-		VM_STAGE_PROCESSED(GUEST_STARTED);
-		break;
-	case GUEST_PRIVATE_MEM_POPULATED:
-		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
-				TEST_MEM_DATA_PAT4), "failed");
-		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED);
-		break;
-	case GUEST_SHARED_MEM_POPULATED:
-		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
-				TEST_MEM_DATA_PAT2), "failed");
-		populate_guest_test_mem(guest_test_mem_hva, TEST_MEM_DATA_PAT5);
-		VM_STAGE_PROCESSED(GUEST_SHARED_MEM_POPULATED);
-		break;
-	case GUEST_PRIVATE_MEM_POPULATED2:
-		TEST_ASSERT(verify_test_area(test_area_hva, TEST_MEM_DATA_PAT4,
-				TEST_MEM_DATA_PAT5), "failed");
-		VM_STAGE_PROCESSED(GUEST_PRIVATE_MEM_POPULATED2);
-		break;
-	case GUEST_IMPLICIT_MEM_CONV1:
-		/*
-		 * For first implicit conversion, memory is already private so
-		 * mark it private again just to zap the pte entries for the gpa
-		 * range, so that subsequent accesses from the guest will
-		 * generate ept/npt fault and memory conversion path will be
-		 * exercised by KVM.
-		 */
-		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
-				ALLOCATE_MEM);
-		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV1);
-		break;
-	case GUEST_IMPLICIT_MEM_CONV2:
-		/*
-		 * For second implicit conversion, memory is already shared so
-		 * mark it shared again just to zap the pte entries for the gpa
-		 * range, so that subsequent accesses from the guest will
-		 * generate ept/npt fault and memory conversion path will be
-		 * exercised by KVM.
-		 */
-		vm_update_private_mem(vm, guest_mem_gpa, guest_test_size,
-				UNBACK_MEM);
-		VM_STAGE_PROCESSED(GUEST_IMPLICIT_MEM_CONV2);
-		break;
-	default:
-		TEST_FAIL("Unknown stage %d\n", uc_arg1);
-		break;
-	}
-}
-
-static void guest_explicit_conv_test_fn(void)
-{
-	guest_conv_test_fn(true);
-}
-
-static void guest_implicit_conv_test_fn(void)
-{
-	guest_conv_test_fn(false);
-}
-
-static void execute_memory_conversion_test(void)
-{
-	struct vm_setup_info info;
-	struct test_setup_info *test_info = &info.test_info;
-
-	info.vm_mem_src = VM_MEM_SRC_ANONYMOUS;
-	info.memslot0_pages = VM_MEMSLOT0_PAGES;
-	test_info->test_area_gpa = TEST_AREA_GPA;
-	test_info->test_area_size = TEST_AREA_SIZE;
-	test_info->test_area_slot = TEST_AREA_SLOT;
-	test_info->test_area_mem_src = VM_MEM_SRC_ANONYMOUS;
-	info.ioexit_cb = conv_test_ioexit_fn;
-
-	info.guest_fn = guest_explicit_conv_test_fn;
-	execute_vm_with_private_mem(&info);
-
-	info.guest_fn = guest_implicit_conv_test_fn;
-	execute_vm_with_private_mem(&info);
-}
+#include <private_mem_test_helper.h>
 
 int main(int argc, char *argv[])
 {
 	/* Tell stdout not to buffer its content */
 	setbuf(stdout, NULL);
 
-	execute_memory_conversion_test();
+	execute_memory_conversion_tests();
 	return 0;
 }
-- 
2.37.2.672.g94769d06f0-goog



* [RFC V2 PATCH 8/8] selftests: kvm: Add private memory test for SEV VMs
  2022-08-30 22:42 [RFC V2 PATCH 0/8] selftests: KVM: SEV: selftests for fd-based private memory Vishal Annapurve
                   ` (6 preceding siblings ...)
  2022-08-30 22:42 ` [RFC V2 PATCH 7/8] selftests: kvm: Refactor testing logic for " Vishal Annapurve
@ 2022-08-30 22:42 ` Vishal Annapurve
  7 siblings, 0 replies; 9+ messages in thread
From: Vishal Annapurve @ 2022-08-30 22:42 UTC (permalink / raw)
  To: x86, kvm, linux-kernel, linux-kselftest
  Cc: pbonzini, vkuznets, wanpengli, jmattson, joro, tglx, mingo, bp,
	dave.hansen, hpa, shuah, yang.zhong, drjones, ricarkol,
	aaronlewis, wei.w.wang, kirill.shutemov, corbet, hughd, jlayton,
	bfields, akpm, chao.p.peng, yu.c.zhang, jun.nakajima,
	dave.hansen, michael.roth, qperret, steven.price, ak, david,
	luto, vbabka, marcorr, erdemaktas, pgonda, nikunj, seanjc,
	diviness, maz, dmatlack, axelrasmussen, maciej.szmigiero,
	mizhang, bgardon, Vishal Annapurve

Add a selftest placeholder for executing private memory
conversion tests with SEV VMs.
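
For illustration only (not part of this patch): if the placeholder should
eventually skip cleanly on hosts without SEV, a check along the lines of the
sketch below could run before the conversion tests. The kvm_amd sysfs
parameter path and the kselftest skip exit code are assumptions about the
test environment, not something introduced by this series.

	#include <stdio.h>
	#include <stdlib.h>

	/* Hypothetical helper: skip when the kvm_amd module reports SEV disabled. */
	static void skip_if_sev_unsupported(void)
	{
		FILE *f = fopen("/sys/module/kvm_amd/parameters/sev", "r");
		int c;

		if (!f) {
			fprintf(stderr, "kvm_amd not loaded, skipping\n");
			exit(4);	/* KSFT_SKIP */
		}

		c = fgetc(f);
		fclose(f);

		if (c != 'Y' && c != 'y' && c != '1') {
			fprintf(stderr, "SEV disabled in kvm_amd, skipping\n");
			exit(4);	/* KSFT_SKIP */
		}
	}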

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |  1 +
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../kvm/x86_64/sev_private_mem_test.c         | 21 +++++++++++++++++++
 3 files changed, 23 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 095b67dc632e..757d4cac19b4 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -37,6 +37,7 @@
 /x86_64/set_sregs_test
 /x86_64/sev_all_boot_test
 /x86_64/sev_migrate_tests
+/x86_64/sev_private_mem_test
 /x86_64/smm_test
 /x86_64/state_test
 /x86_64/svm_vmcall_test
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 36874fedff4a..3f8030c46b72 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -98,6 +98,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_private_mem_test
 TEST_GEN_PROGS_x86_64 += x86_64/smm_test
 TEST_GEN_PROGS_x86_64 += x86_64/state_test
 TEST_GEN_PROGS_x86_64 += x86_64/vmx_preemption_timer_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c
new file mode 100644
index 000000000000..2c8edbaef627
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_private_mem_test.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022, Google LLC.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <test_util.h>
+#include <kvm_util.h>
+#include <private_mem_test_helper.h>
+
+int main(int argc, char *argv[])
+{
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	execute_sev_memory_conversion_tests();
+	return 0;
+}
-- 
2.37.2.672.g94769d06f0-goog



