* [PATCH] KVM: selftests: Fix nx_huge_pages_test on TDP-disabled hosts
From: David Matlack @ 2022-09-26 17:52 UTC
  To: Paolo Bonzini
  Cc: Sean Christopherson, David Matlack, Jim Mattson, Peter Xu,
	Yang Zhong, Wei Wang, Ben Gardon, kvm

Map the test's huge page region with 2MiB virtual mappings so that KVM
can shadow the region with huge pages. This fixes nx_huge_pages_test on
hosts where TDP hardware support is disabled.

Purposely do not skip this test on TDP-disabled hosts. While we don't
care about NX Huge Pages on TDP-disabled hosts from a security
perspective, KVM does support it, and so we should test it.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 .../selftests/kvm/include/x86_64/processor.h        |  2 ++
 tools/testing/selftests/kvm/lib/x86_64/processor.c  | 13 +++++++++++++
 .../selftests/kvm/x86_64/nx_huge_pages_test.c       |  9 +++++++--
 3 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 0cbc71b7af50..4ffaa79fd8d6 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -855,6 +855,8 @@ enum pg_level {
 #define PG_SIZE_1G PG_LEVEL_SIZE(PG_LEVEL_1G)
 
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level);
+void virt_map_2m(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		 uint64_t nr_2m_pages);
 
 /*
  * Basic CPU control in CR0
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 2e6e61bbe81b..df8a1498ea28 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -214,6 +214,19 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
 }
 
+void virt_map_2m(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		 uint64_t nr_2m_pages)
+{
+	int i;
+
+	for (i = 0; i < nr_2m_pages; i++) {
+		__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_2M);
+
+		vaddr += PG_SIZE_2M;
+		paddr += PG_SIZE_2M;
+	}
+}
+
 static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm,
 					  struct kvm_vcpu *vcpu,
 					  uint64_t vaddr)
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
index cc6421716400..a850769692b7 100644
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -22,7 +22,8 @@
 #define HPAGE_GPA		(4UL << 30) /* 4G prevents collision w/ slot 0 */
 #define HPAGE_GVA		HPAGE_GPA /* GVA is arbitrary, so use GPA. */
 #define PAGES_PER_2MB_HUGE_PAGE 512
-#define HPAGE_SLOT_NPAGES	(3 * PAGES_PER_2MB_HUGE_PAGE)
+#define HPAGE_SLOT_2MB_PAGES	3
+#define HPAGE_SLOT_NPAGES	(HPAGE_SLOT_2MB_PAGES * PAGES_PER_2MB_HUGE_PAGE)
 
 /*
  * Passed by nx_huge_pages_test.sh to provide an easy warning if this test is
@@ -141,7 +142,11 @@ void run_test(int reclaim_period_ms, bool disable_nx_huge_pages,
 				    HPAGE_GPA, HPAGE_SLOT,
 				    HPAGE_SLOT_NPAGES, 0);
 
-	virt_map(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_NPAGES);
+	/*
+	 * Use 2MiB virtual mappings so that KVM can map the region with huge
+	 * pages even if TDP is disabled.
+	 */
+	virt_map_2m(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_2MB_PAGES);
 
 	hva = addr_gpa2hva(vm, HPAGE_GPA);
 	memset(hva, RETURN_OPCODE, HPAGE_SLOT_NPAGES * PAGE_SIZE);

base-commit: 372d07084593dc7a399bf9bee815711b1fb1bcf2
prerequisite-patch-id: 2e3661ba8856c29b769499bac525b6943d9284b8
-- 
2.37.3.998.g577e59143f-goog



* Re: [PATCH] KVM: selftests: Fix nx_huge_pages_test on TDP-disabled hosts
From: Sean Christopherson @ 2022-09-26 22:20 UTC
  To: David Matlack
  Cc: Paolo Bonzini, Jim Mattson, Peter Xu, Yang Zhong, Wei Wang,
	Ben Gardon, kvm

On Mon, Sep 26, 2022, David Matlack wrote:
> +void virt_map_2m(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> +		 uint64_t nr_2m_pages)

Gah, the selftest APIs are so frustrating.  Taking the number of pages in virt_map()
and vm_userspace_mem_region_add() is asinine.  Tests either assume a specific page
size, e.g. x86 tests, or manually calculate the number of pages from the number of
bytes, only for those calculations to be reversed.  E.g. set_memory_region_test's
usage of getpagesize().
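
For reference, the pattern being called out looks roughly like this (illustrative
sketch, not a verbatim quote from set_memory_region_test; the size and variable
names are made up):

	/*
	 * Illustrative only: the caller converts a byte count to pages with
	 * getpagesize(), only for the library to multiply it back into bytes.
	 */
	uint64_t size = 2 * 1024 * 1024;

	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, gpa, slot,
				    size / getpagesize(), 0);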

As a baby step toward providing sane APIs and being able to create huge mappings
in common tests, what about refactoring virt_map() to take the size in bytes (see
below), and then asserting in arch code that the page size matches vm->page_size,
except for x86, which translates back to its page "levels"?

Then max_guest_memory_test can use virt_map() and __virt_map(), and we could even
delete vm_calc_num_guest_pages().  aarch64's usage in steal_time_init() can be
hardcoded to a single page; the size of the allocation is hardcoded to 64 bytes,
and I see no reason to dance around that and pretend that page sizes can be smaller
than 64 bytes.

Then for proper support, we can figure out how to enumerate the allowed page sizes
in the guest for use in common tests.

And in parallel, we can cajole someone into refactoring vm_userspace_mem_region_add()
to take the size in bytes.

E.g.

void __virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
		uint64_t nr_bytes, size_t page_size)
{
	uint64_t nr_pages = DIV_ROUND_UP(nr_bytes, page_size);

	TEST_ASSERT(vaddr + nr_bytes > vaddr, "Vaddr overflow");
	TEST_ASSERT(paddr + nr_bytes > paddr, "Paddr overflow");

	while (nr_pages--) {
		virt_pg_map(vm, vaddr, paddr);
		vaddr += page_size;
		paddr += page_size;
	}
}

void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
	      uint64_t nr_bytes)
{
	__virt_map(vm, vaddr, paddr, nr_bytes, vm->page_size);
}

and in max_guest_memory_test:

#ifdef __x86_64__
		/* TODO: use huge pages for other architectures. */
		__virt_map(vm, gpa, gpa, slot_size, PG_SIZE_1G);
#else
		virt_map(vm, gpa, gpa, slot_size);
#endif
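
For the "translates back to its page levels" part on x86, a rough sketch of what
the arch hook could look like (hypothetical function name; virt_pg_map() and
virt_arch_pg_map() do not currently take a page size):

/* Sketch only: hypothetical x86 hook that maps one page of a given size. */
static void virt_arch_pg_map_size(struct kvm_vm *vm, uint64_t vaddr,
				  uint64_t paddr, size_t page_size)
{
	int level;

	switch (page_size) {
	case PG_SIZE_4K:
		level = PG_LEVEL_4K;
		break;
	case PG_SIZE_2M:
		level = PG_LEVEL_2M;
		break;
	case PG_SIZE_1G:
		level = PG_LEVEL_1G;
		break;
	default:
		TEST_FAIL("Unsupported page size: 0x%lx",
			  (unsigned long)page_size);
		return;
	}

	__virt_pg_map(vm, vaddr, paddr, level);
}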

> +{
> +	int i;
> +
> +	for (i = 0; i < nr_2m_pages; i++) {
> +		__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_2M);
> +
> +		vaddr += PG_SIZE_2M;
> +		paddr += PG_SIZE_2M;
> +	}
> +}
> +
>  static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm,
>  					  struct kvm_vcpu *vcpu,
>  					  uint64_t vaddr)
> diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
> index cc6421716400..a850769692b7 100644
> --- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
> @@ -22,7 +22,8 @@
>  #define HPAGE_GPA		(4UL << 30) /* 4G prevents collision w/ slot 0 */
>  #define HPAGE_GVA		HPAGE_GPA /* GVA is arbitrary, so use GPA. */
>  #define PAGES_PER_2MB_HUGE_PAGE 512
> -#define HPAGE_SLOT_NPAGES	(3 * PAGES_PER_2MB_HUGE_PAGE)
> +#define HPAGE_SLOT_2MB_PAGES	3
> +#define HPAGE_SLOT_NPAGES	(HPAGE_SLOT_2MB_PAGES * PAGES_PER_2MB_HUGE_PAGE)
>  
>  /*
>   * Passed by nx_huge_pages_test.sh to provide an easy warning if this test is
> @@ -141,7 +142,11 @@ void run_test(int reclaim_period_ms, bool disable_nx_huge_pages,
>  				    HPAGE_GPA, HPAGE_SLOT,
>  				    HPAGE_SLOT_NPAGES, 0);
>  
> -	virt_map(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_NPAGES);
> +	/*
> +	 * Use 2MiB virtual mappings so that KVM can map the region with huge
> +	 * pages even if TDP is disabled.
> +	 */
> +	virt_map_2m(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_2MB_PAGES);

Hmm, what about probing TDP support and deliberately using 4KiB pages when TDP is
enabled?  That would give a bit of bonus coverage by verifying that KVM creates
huge pages irrespective of guest mapping level.
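
Something along these lines, assuming a helper such as kvm_is_tdp_enabled() is
available to the test (treat the name as a placeholder):

	/*
	 * Sketch only: with TDP enabled, deliberately use 4KiB guest mappings
	 * to verify KVM creates NX huge pages regardless of the guest mapping
	 * level; without TDP, 2MiB guest mappings are required for KVM to
	 * shadow the region with huge pages.
	 */
	if (kvm_is_tdp_enabled())
		virt_map(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_NPAGES);
	else
		virt_map_2m(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_2MB_PAGES);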


* Re: [PATCH] KVM: selftests: Fix nx_huge_pages_test on TDP-disabled hosts
From: Paolo Bonzini @ 2022-09-27 12:13 UTC
  To: Sean Christopherson, David Matlack
  Cc: Jim Mattson, Peter Xu, Yang Zhong, Wei Wang, Ben Gardon, kvm

On 9/27/22 00:20, Sean Christopherson wrote:
> void __virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> 		uint64_t nr_bytes, size_t page_size)
> {
> 	uint64_t nr_pages = DIV_ROUND_UP(nr_bytes, page_size);
> 
> 	TEST_ASSERT(vaddr + nr_bytes > vaddr, "Vaddr overflow");
> 	TEST_ASSERT(paddr + nr_bytes > paddr, "Paddr overflow");
> 
> 	while (nr_pages--) {
> 		virt_pg_map(vm, vaddr, paddr);
> 		vaddr += page_size;
> 		paddr += page_size;
> 	}
> }
> 
> void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> 	      uint64_t nr_bytes)
> {
> 	__virt_map(vm, vaddr, paddr, nr_bytes, vm->page_size);
> }

I would just keep nr_pages in virt_map to begin with, for the sake of 
this patch.  Changing virt_map can be done later (and should be separate 
anyway).

>> -	virt_map(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_NPAGES);
>> +	/*
>> +	 * Use 2MiB virtual mappings so that KVM can map the region with huge
>> +	 * pages even if TDP is disabled.
>> +	 */
>> +	virt_map_2m(vm, HPAGE_GVA, HPAGE_GPA, HPAGE_SLOT_2MB_PAGES);
> 
> Hmm, what about probing TDP support and deliberately using 4KiB pages when TDP is
> enabled?  That would give a bit of bonus coverage by verifying that KVM creates
> huge pages irrespective of guest mapping level.

Nice idea indeed.

Paolo


