linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups
@ 2021-02-10 23:06 Sean Christopherson
  2021-02-10 23:06 ` [PATCH 01/15] KVM: selftests: Explicitly state indices for vm_guest_mode_params array Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Fix hugepage bugs in the KVM selftests that specifically affect dirty
logging and demand paging tests.  Found while attempting to verify KVM
changes/fixes related to hugepages and dirty logging (patches incoming in
a separate series).

Clean up the perf_test_args util on top of the hugepage fixes to clarify
what "page size" means, and to improve confidence in the code doing what
it thinks it's doing.  In a few cases, users of perf_test_args were
duplicating (approximating?) calculations made by perf_test_args, and it
wasn't obvious that both pieces of code were guaranteed to end up with the
same result.

Sean Christopherson (15):
  KVM: selftests: Explicitly state indices for vm_guest_mode_params
    array
  KVM: selftests: Expose align() helpers to tests
  KVM: selftests: Align HVA for HugeTLB-backed memslots
  KVM: selftests: Force stronger HVA alignment (1gb) for hugepages
  KVM: selftests: Require GPA to be aligned when backed by hugepages
  KVM: selftests: Use shorthand local var to access struct
    perf_test_args
  KVM: selftests: Capture per-vCPU GPA in perf_test_vcpu_args
  KVM: selftests: Use perf util's per-vCPU GPA/pages in demand paging
    test
  KVM: selftests: Move per-VM GPA into perf_test_args
  KVM: selftests: Remove perf_test_args.host_page_size
  KVM: selftests: Create VM with adjusted number of guest pages for perf
    tests
  KVM: selftests: Fill per-vCPU struct during "perf_test" VM creation
  KVM: selftests: Sync perf_test_args to guest during VM creation
  KVM: selftests: Track size of per-VM memslot in perf_test_args
  KVM: selftests: Get rid of gorilla math in memslots modification test

 .../selftests/kvm/demand_paging_test.c        |  39 ++---
 .../selftests/kvm/dirty_log_perf_test.c       |  10 +-
 .../testing/selftests/kvm/include/kvm_util.h  |  28 ++++
 .../selftests/kvm/include/perf_test_util.h    |  18 +--
 tools/testing/selftests/kvm/lib/kvm_util.c    |  36 ++---
 .../selftests/kvm/lib/perf_test_util.c        | 139 ++++++++++--------
 .../kvm/memslot_modification_stress_test.c    |  16 +-
 7 files changed, 145 insertions(+), 141 deletions(-)

-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 01/15] KVM: selftests: Explicitly state indices for vm_guest_mode_params array
  2021-02-10 23:06 [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  0:50   ` Ben Gardon
  2021-02-10 23:06 ` [PATCH 02/15] KVM: selftests: Expose align() helpers to tests Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Explicitly state the indices when populating vm_guest_mode_params to
make it marginally easier to visualize what's going on.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index d787cb802b4a..960f4c5129ff 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -154,13 +154,13 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
 	       "Missing new mode strings?");
 
 const struct vm_guest_mode_params vm_guest_mode_params[] = {
-	{ 52, 48,  0x1000, 12 },
-	{ 52, 48, 0x10000, 16 },
-	{ 48, 48,  0x1000, 12 },
-	{ 48, 48, 0x10000, 16 },
-	{ 40, 48,  0x1000, 12 },
-	{ 40, 48, 0x10000, 16 },
-	{  0,  0,  0x1000, 12 },
+	[VM_MODE_P52V48_4K]	= { 52, 48,  0x1000, 12 },
+	[VM_MODE_P52V48_64K]	= { 52, 48, 0x10000, 16 },
+	[VM_MODE_P48V48_4K]	= { 48, 48,  0x1000, 12 },
+	[VM_MODE_P48V48_64K]	= { 48, 48, 0x10000, 16 },
+	[VM_MODE_P40V48_4K]	= { 40, 48,  0x1000, 12 },
+	[VM_MODE_P40V48_64K]	= { 40, 48, 0x10000, 16 },
+	[VM_MODE_PXXV48_4K]	= {  0,  0,  0x1000, 12 },
 };
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 02/15] KVM: selftests: Expose align() helpers to tests
  2021-02-10 23:06 [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups Sean Christopherson
  2021-02-10 23:06 ` [PATCH 01/15] KVM: selftests: Explicitly state indices for vm_guest_mode_params array Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  0:49   ` Ben Gardon
  2021-02-10 23:06 ` [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Refactor align() to work with non-pointers, add align_ptr() for use with
pointers, and expose both helpers so that they can be used by tests
and/or other utilities.  The align() helper in particular will be used
to ensure gpa alignment for hugepages.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/include/kvm_util.h | 15 +++++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c     | 11 +----------
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 2d7eb6989e83..4b5d2362a68a 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -79,6 +79,21 @@ struct vm_guest_mode_params {
 };
 extern const struct vm_guest_mode_params vm_guest_mode_params[];
 
+/* Aligns x up to the next multiple of size. Size must be a power of 2. */
+static inline uint64_t align(uint64_t x, uint64_t size)
+{
+	uint64_t mask = size - 1;
+
+	TEST_ASSERT(size != 0 && !(size & (size - 1)),
+		    "size not a power of 2: %lu", size);
+	return ((x + mask) & ~mask);
+}
+
+static inline void *align_ptr(void *x, size_t size)
+{
+	return (void *)align((unsigned long)x, size);
+}
+
 int kvm_check_cap(long cap);
 int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
 int vcpu_enable_cap(struct kvm_vm *vm, uint32_t vcpu_id,
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 960f4c5129ff..584167c6dbc7 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -21,15 +21,6 @@
 #define KVM_UTIL_PGS_PER_HUGEPG 512
 #define KVM_UTIL_MIN_PFN	2
 
-/* Aligns x up to the next multiple of size. Size must be a power of 2. */
-static void *align(void *x, size_t size)
-{
-	size_t mask = size - 1;
-	TEST_ASSERT(size != 0 && !(size & (size - 1)),
-		    "size not a power of 2: %lu", size);
-	return (void *) (((size_t) x + mask) & ~mask);
-}
-
 /*
  * Capability
  *
@@ -757,7 +748,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 		    region->mmap_start, errno);
 
 	/* Align host address */
-	region->host_mem = align(region->mmap_start, alignment);
+	region->host_mem = align_ptr(region->mmap_start, alignment);
 
 	/* As needed perform madvise */
 	if (src_type == VM_MEM_SRC_ANONYMOUS || src_type == VM_MEM_SRC_ANONYMOUS_THP) {
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots
  2021-02-10 23:06 [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups Sean Christopherson
  2021-02-10 23:06 ` [PATCH 01/15] KVM: selftests: Explicitly state indices for vm_guest_mode_params array Sean Christopherson
  2021-02-10 23:06 ` [PATCH 02/15] KVM: selftests: Expose align() helpers to tests Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  0:52   ` Ben Gardon
  2021-02-25  7:40   ` wangyanan (Y)
  2021-02-10 23:06 ` [PATCH 04/15] KVM: selftests: Force stronger HVA alignment (1gb) for hugepages Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Align the HVA for HugeTLB memslots, not just THP memslots.  Add an
assert so any future backing types are forced to assess whether or not
they need to be aligned.

Cc: Ben Gardon <bgardon@google.com>
Cc: Yanan Wang <wangyanan55@huawei.com>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Aaron Lewis <aaronlewis@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 584167c6dbc7..deaeb47b5a6d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -731,8 +731,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	alignment = 1;
 #endif
 
-	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
+	if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
+	    src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
 		alignment = max(huge_page_size, alignment);
+	else
+		ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
 
 	/* Add enough memory to align up if necessary */
 	if (alignment > 1)
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 04/15] KVM: selftests: Force stronger HVA alignment (1gb) for hugepages
  2021-02-10 23:06 [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups Sean Christopherson
  2021-02-10 23:06 ` [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-25  7:57   ` wangyanan (Y)
  2021-02-10 23:06 ` [PATCH 05/15] KVM: selftests: Require GPA to be aligned when backed by hugepages Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Align the HVA for hugepage memslots to 1gb, as opposed to incorrectly
assuming all architectures' hugepages are 512*page_size.

For x86, multiplying by 512 is correct, but only for 2mb pages, e.g.
systems that support 1gb pages will never be able to use them for mapping
guest memory, and thus those flows will not be exercised.

For arm64, powerpc, and s390 (and mips?), hardcoding the multiplier to
512 is either flat out wrong, or at best correct only in certain
configurations.

Hardcoding the _alignment_ to 1gb is a compromise between correctness and
simplicity.  Due to the myriad flavors of hugepages across architectures,
attempting to enumerate the exact hugepage size is difficult, and likely
requires probing the kernel.

But, there is no need for precision since a stronger alignment will not
prevent creating a smaller hugepage.  For all but the most extreme cases,
e.g. arm64's 16gb contiguous PMDs, aligning to 1gb is sufficient to allow
KVM to back the guest with hugepages.

Add the new alignment in kvm_util.h so that it can be used by callers of
vm_userspace_mem_region_add(), e.g. to also ensure GPAs are aligned.

Cc: Ben Gardon <bgardon@google.com>
Cc: Yanan Wang <wangyanan55@huawei.com>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Aaron Lewis <aaronlewis@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/include/kvm_util.h | 13 +++++++++++++
 tools/testing/selftests/kvm/lib/kvm_util.c     |  4 +---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 4b5d2362a68a..a7dbdf46aa51 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -68,6 +68,19 @@ enum vm_guest_mode {
 #define MIN_PAGE_SIZE		(1U << MIN_PAGE_SHIFT)
 #define PTES_PER_MIN_PAGE	ptes_per_page(MIN_PAGE_SIZE)
 
+/*
+ * KVM_UTIL_HUGEPAGE_ALIGNMENT is selftest's required alignment for both host
+ * and guest addresses when backing guest memory with hugepages.  This is not
+ * the exact size of hugepages, rather it's a size that should allow backing
+ * the guest with hugepages on all architectures.  Precisely tracking the exact
+ * sizes across all architectures is more pain than gain, e.g. x86 supports 2mb
+ * and 1gb hugepages, arm64 supports 2mb and 1gb hugepages when using 4kb pages
+ * and 512mb hugepages when using 64kb pages (ignoring contiguous TLB entries),
+ * powerpc radix supports 1gb hugepages when using 64kb pages, s390 supports 1mb
+ * hugepages, and so on and so forth.
+ */
+#define KVM_UTIL_HUGEPAGE_ALIGNMENT	(1ULL << 30)
+
 #define vm_guest_mode_string(m) vm_guest_mode_string[m]
 extern const char * const vm_guest_mode_string[];
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index deaeb47b5a6d..2e497fbab6ae 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -18,7 +18,6 @@
 #include <unistd.h>
 #include <linux/kernel.h>
 
-#define KVM_UTIL_PGS_PER_HUGEPG 512
 #define KVM_UTIL_MIN_PFN	2
 
 /*
@@ -670,7 +669,6 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 {
 	int ret;
 	struct userspace_mem_region *region;
-	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
 	size_t alignment;
 
 	TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages,
@@ -733,7 +731,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 
 	if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
 	    src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
-		alignment = max(huge_page_size, alignment);
+		alignment = max((size_t)KVM_UTIL_HUGEPAGE_ALIGNMENT, alignment);
 	else
 		ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
 
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 05/15] KVM: selftests: Require GPA to be aligned when backed by hugepages
  2021-02-10 23:06 [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups Sean Christopherson
  2021-02-10 23:06 ` [PATCH 04/15] KVM: selftests: Force stronger HVA alignment (1gb) for hugepages Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  1:01   ` Ben Gardon
  2021-02-10 23:06 ` [PATCH 06/15] KVM: selftests: Use shorthand local var to access struct perf_test_args Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Assert that the GPA for a memslot backed by a hugepage is 1gb aligned,
and fix perf_test_util accordingly.  Lack of GPA alignment prevents KVM
from backing the guest with hugepages, e.g. x86's write-protection of
hugepages when dirty logging is activated is otherwise not exercised.

Add a comment explaining that guest_page_size is for non-huge pages to
try to avoid confusion about what it actually tracks.

Cc: Ben Gardon <bgardon@google.com>
Cc: Yanan Wang <wangyanan55@huawei.com>
Cc: Andrew Jones <drjones@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Aaron Lewis <aaronlewis@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c       | 2 ++
 tools/testing/selftests/kvm/lib/perf_test_util.c | 9 +++++++++
 2 files changed, 11 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 2e497fbab6ae..855d20784ba7 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -735,6 +735,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	else
 		ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
 
+	ASSERT_EQ(guest_paddr, align(guest_paddr, alignment));
+
 	/* Add enough memory to align up if necessary */
 	if (alignment > 1)
 		region->mmap_size += alignment;
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 81490b9b4e32..f187b86f2e14 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -58,6 +58,11 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
 	perf_test_args.host_page_size = getpagesize();
+
+	/*
+	 * Snapshot the non-huge page size.  This is used by the guest code to
+	 * access/dirty pages at the logging granularity.
+	 */
 	perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
 
 	guest_num_pages = vm_adjust_num_guest_pages(mode,
@@ -87,6 +92,10 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      perf_test_args.guest_page_size;
 	guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
+	if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
+	    backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
+		guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
+
 #ifdef __s390x__
 	/* Align to 1M (segment size) */
 	guest_test_phys_mem &= ~((1 << 20) - 1);
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 06/15] KVM: selftests: Use shorthand local var to access struct perf_test_args
  2021-02-10 23:06 [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups Sean Christopherson
  2021-02-10 23:06 ` [PATCH 05/15] KVM: selftests: Require GPA to be aligned when backed by hugepages Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  1:09   ` Ben Gardon
  2021-02-10 23:06 ` [PATCH 07/15] KVM: selftests: Capture per-vCPU GPA in perf_test_vcpu_args Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Use 'pta' as a local pointer to the global perf_test_args in order to
shorten line lengths and make the code borderline readable.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 .../selftests/kvm/lib/perf_test_util.c        | 36 ++++++++++---------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index f187b86f2e14..73b0fccc28b9 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -23,7 +23,8 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
  */
 static void guest_code(uint32_t vcpu_id)
 {
-	struct perf_test_vcpu_args *vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
+	struct perf_test_args *pta = &perf_test_args;
+	struct perf_test_vcpu_args *vcpu_args = &pta->vcpu_args[vcpu_id];
 	uint64_t gva;
 	uint64_t pages;
 	int i;
@@ -36,9 +37,9 @@ static void guest_code(uint32_t vcpu_id)
 
 	while (true) {
 		for (i = 0; i < pages; i++) {
-			uint64_t addr = gva + (i * perf_test_args.guest_page_size);
+			uint64_t addr = gva + (i * pta->guest_page_size);
 
-			if (i % perf_test_args.wr_fract == 0)
+			if (i % pta->wr_fract == 0)
 				*(uint64_t *)addr = 0x0123456789ABCDEF;
 			else
 				READ_ONCE(*(uint64_t *)addr);
@@ -52,32 +53,32 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 				   uint64_t vcpu_memory_bytes,
 				   enum vm_mem_backing_src_type backing_src)
 {
+	struct perf_test_args *pta = &perf_test_args;
 	struct kvm_vm *vm;
 	uint64_t guest_num_pages;
 
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
-	perf_test_args.host_page_size = getpagesize();
+	pta->host_page_size = getpagesize();
 
 	/*
 	 * Snapshot the non-huge page size.  This is used by the guest code to
 	 * access/dirty pages at the logging granularity.
 	 */
-	perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
+	pta->guest_page_size = vm_guest_mode_params[mode].page_size;
 
 	guest_num_pages = vm_adjust_num_guest_pages(mode,
-				(vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size);
+				(vcpus * vcpu_memory_bytes) / pta->guest_page_size);
 
-	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.host_page_size == 0,
+	TEST_ASSERT(vcpu_memory_bytes % pta->host_page_size == 0,
 		    "Guest memory size is not host page size aligned.");
-	TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0,
+	TEST_ASSERT(vcpu_memory_bytes % pta->guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
 
 	vm = vm_create_with_vcpus(mode, vcpus,
-				  (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size,
+				  (vcpus * vcpu_memory_bytes) / pta->guest_page_size,
 				  0, guest_code, NULL);
-
-	perf_test_args.vm = vm;
+	pta->vm = vm;
 
 	/*
 	 * If there should be more memory in the guest test region than there
@@ -90,8 +91,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 		    vcpu_memory_bytes);
 
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
-			      perf_test_args.guest_page_size;
-	guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
+			      pta->guest_page_size;
+	guest_test_phys_mem &= ~(pta->host_page_size - 1);
 	if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
 	    backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
 		guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
@@ -125,30 +126,31 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
 			   uint64_t vcpu_memory_bytes,
 			   bool partition_vcpu_memory_access)
 {
+	struct perf_test_args *pta = &perf_test_args;
 	vm_paddr_t vcpu_gpa;
 	struct perf_test_vcpu_args *vcpu_args;
 	int vcpu_id;
 
 	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
-		vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
+		vcpu_args = &pta->vcpu_args[vcpu_id];
 
 		vcpu_args->vcpu_id = vcpu_id;
 		if (partition_vcpu_memory_access) {
 			vcpu_args->gva = guest_test_virt_mem +
 					 (vcpu_id * vcpu_memory_bytes);
 			vcpu_args->pages = vcpu_memory_bytes /
-					   perf_test_args.guest_page_size;
+					   pta->guest_page_size;
 			vcpu_gpa = guest_test_phys_mem +
 				   (vcpu_id * vcpu_memory_bytes);
 		} else {
 			vcpu_args->gva = guest_test_virt_mem;
 			vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
-					   perf_test_args.guest_page_size;
+					   pta->guest_page_size;
 			vcpu_gpa = guest_test_phys_mem;
 		}
 
 		pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
 			 vcpu_id, vcpu_gpa, vcpu_gpa +
-			 (vcpu_args->pages * perf_test_args.guest_page_size));
+			 (vcpu_args->pages * pta->guest_page_size));
 	}
 }
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 07/15] KVM: selftests: Capture per-vCPU GPA in perf_test_vcpu_args
  2021-02-10 23:06 [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups Sean Christopherson
  2021-02-10 23:06 ` [PATCH 06/15] KVM: selftests: Use shorthand local var to access struct perf_test_args Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  1:24   ` Ben Gardon
  2021-02-10 23:06 ` [PATCH 08/15] KVM: selftests: Use perf util's per-vCPU GPA/pages in demand paging test Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Capture the per-vCPU GPA in perf_test_vcpu_args so that tests can get
the GPA without having to calculate the GPA on their own.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/include/perf_test_util.h | 1 +
 tools/testing/selftests/kvm/lib/perf_test_util.c     | 9 ++++-----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 005f2143adeb..4d53238b139f 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -18,6 +18,7 @@
 #define PERF_TEST_MEM_SLOT_INDEX	1
 
 struct perf_test_vcpu_args {
+	uint64_t gpa;
 	uint64_t gva;
 	uint64_t pages;
 
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 73b0fccc28b9..f22ce1836547 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -127,7 +127,6 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
 			   bool partition_vcpu_memory_access)
 {
 	struct perf_test_args *pta = &perf_test_args;
-	vm_paddr_t vcpu_gpa;
 	struct perf_test_vcpu_args *vcpu_args;
 	int vcpu_id;
 
@@ -140,17 +139,17 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
 					 (vcpu_id * vcpu_memory_bytes);
 			vcpu_args->pages = vcpu_memory_bytes /
 					   pta->guest_page_size;
-			vcpu_gpa = guest_test_phys_mem +
-				   (vcpu_id * vcpu_memory_bytes);
+			vcpu_args->gpa = guest_test_phys_mem +
+					 (vcpu_id * vcpu_memory_bytes);
 		} else {
 			vcpu_args->gva = guest_test_virt_mem;
 			vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
 					   pta->guest_page_size;
-			vcpu_gpa = guest_test_phys_mem;
+			vcpu_args->gpa = guest_test_phys_mem;
 		}
 
 		pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
-			 vcpu_id, vcpu_gpa, vcpu_gpa +
+			 vcpu_id, vcpu_args->gpa, vcpu_args->gpa +
 			 (vcpu_args->pages * pta->guest_page_size));
 	}
 }
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 08/15] KVM: selftests: Use perf util's per-vCPU GPA/pages in demand paging test
  2021-02-10 23:06 [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups Sean Christopherson
  2021-02-10 23:06 ` [PATCH 07/15] KVM: selftests: Capture per-vCPU GPA in perf_test_vcpu_args Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  1:23   ` Ben Gardon
  2021-02-10 23:06 ` [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Grab the per-vCPU GPA and number of pages from perf_util in the demand
paging test instead of duplicating perf_util's calculations.

Note, this may or may not result in a functional change.  It's not clear
that the test's calculations are guaranteed to yield the same value as
perf_util, e.g. if guest_percpu_mem_size != vcpu_args->pages.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 .../selftests/kvm/demand_paging_test.c        | 20 +++++--------------
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 5f7a229c3af1..0cbf111e6c21 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -294,24 +294,13 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		TEST_ASSERT(pipefds, "Unable to allocate memory for pipefd");
 
 		for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
-			vm_paddr_t vcpu_gpa;
+			struct perf_test_vcpu_args *vcpu_args;
 			void *vcpu_hva;
-			uint64_t vcpu_mem_size;
 
-
-			if (p->partition_vcpu_memory_access) {
-				vcpu_gpa = guest_test_phys_mem +
-					   (vcpu_id * guest_percpu_mem_size);
-				vcpu_mem_size = guest_percpu_mem_size;
-			} else {
-				vcpu_gpa = guest_test_phys_mem;
-				vcpu_mem_size = guest_percpu_mem_size * nr_vcpus;
-			}
-			PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
-				       vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_mem_size);
+			vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
 
 			/* Cache the HVA pointer of the region */
-			vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
+			vcpu_hva = addr_gpa2hva(vm, vcpu_args->gpa);
 
 			/*
 			 * Set up user fault fd to handle demand paging
@@ -325,7 +314,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 						&uffd_handler_threads[vcpu_id],
 						pipefds[vcpu_id * 2],
 						p->uffd_delay, &uffd_args[vcpu_id],
-						vcpu_hva, vcpu_mem_size);
+						vcpu_hva,
+						vcpu_args->pages * perf_test_args.guest_page_size);
 			if (r < 0)
 				exit(-r);
 		}
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args
  2021-02-10 23:06 [PATCH 00/15] KVM: selftests: Hugepage fixes and cleanups Sean Christopherson
  2021-02-10 23:06 ` [PATCH 08/15] KVM: selftests: Use perf util's per-vCPU GPA/pages in demand paging test Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  1:22   ` Ben Gardon
  2021-02-10 23:06 ` [PATCH 10/15] KVM: selftests: Remove perf_test_args.host_page_size Sean Christopherson
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Move the per-VM GPA into perf_test_args instead of storing it as a
separate global variable.  It's not obvious that guest_test_phys_mem
holds a GPA, nor that it's connected/coupled with per_vcpu->gpa.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 .../selftests/kvm/include/perf_test_util.h    |  8 +-----
 .../selftests/kvm/lib/perf_test_util.c        | 28 ++++++++-----------
 .../kvm/memslot_modification_stress_test.c    |  2 +-
 3 files changed, 13 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 4d53238b139f..cccf1c44bddb 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -29,6 +29,7 @@ struct perf_test_vcpu_args {
 struct perf_test_args {
 	struct kvm_vm *vm;
 	uint64_t host_page_size;
+	uint64_t gpa;
 	uint64_t guest_page_size;
 	int wr_fract;
 
@@ -37,13 +38,6 @@ struct perf_test_args {
 
 extern struct perf_test_args perf_test_args;
 
-/*
- * Guest physical memory offset of the testing memory slot.
- * This will be set to the topmost valid physical address minus
- * the test memory size.
- */
-extern uint64_t guest_test_phys_mem;
-
 struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 				   uint64_t vcpu_memory_bytes,
 				   enum vm_mem_backing_src_type backing_src);
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index f22ce1836547..03f125236021 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -9,8 +9,6 @@
 
 struct perf_test_args perf_test_args;
 
-uint64_t guest_test_phys_mem;
-
 /*
  * Guest virtual memory offset of the testing memory slot.
  * Must not conflict with identity mapped test code.
@@ -87,29 +85,25 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
 		    "Requested more guest memory than address space allows.\n"
 		    "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
-		    guest_num_pages, vm_get_max_gfn(vm), vcpus,
-		    vcpu_memory_bytes);
+		    guest_num_pages, vm_get_max_gfn(vm), vcpus, vcpu_memory_bytes);
 
-	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
-			      pta->guest_page_size;
-	guest_test_phys_mem &= ~(pta->host_page_size - 1);
+	pta->gpa = (vm_get_max_gfn(vm) - guest_num_pages) * pta->guest_page_size;
+	pta->gpa &= ~(pta->host_page_size - 1);
 	if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
 	    backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
-		guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
-
+		pta->gpa &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
 #ifdef __s390x__
 	/* Align to 1M (segment size) */
-	guest_test_phys_mem &= ~((1 << 20) - 1);
+	pta->gpa &= ~((1 << 20) - 1);
 #endif
-	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
+	pr_info("guest physical test memory offset: 0x%lx\n", pta->gpa);
 
 	/* Add an extra memory slot for testing */
-	vm_userspace_mem_region_add(vm, backing_src, guest_test_phys_mem,
-				    PERF_TEST_MEM_SLOT_INDEX,
-				    guest_num_pages, 0);
+	vm_userspace_mem_region_add(vm, backing_src, pta->gpa,
+				    PERF_TEST_MEM_SLOT_INDEX, guest_num_pages, 0);
 
 	/* Do mapping for the demand paging memory slot */
-	virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages, 0);
+	virt_map(vm, guest_test_virt_mem, pta->gpa, guest_num_pages, 0);
 
 	ucall_init(vm, NULL);
 
@@ -139,13 +133,13 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
 					 (vcpu_id * vcpu_memory_bytes);
 			vcpu_args->pages = vcpu_memory_bytes /
 					   pta->guest_page_size;
-			vcpu_args->gpa = guest_test_phys_mem +
+			vcpu_args->gpa = pta->gpa +
 					 (vcpu_id * vcpu_memory_bytes);
 		} else {
 			vcpu_args->gva = guest_test_virt_mem;
 			vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
 					   pta->guest_page_size;
-			vcpu_args->gpa = guest_test_phys_mem;
+			vcpu_args->gpa = pta->gpa;
 		}
 
 		pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index 6096bf0a5b34..569bb1f55bdf 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -121,7 +121,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	add_remove_memslot(vm, p->memslot_modification_delay,
 			   p->nr_memslot_modifications,
-			   guest_test_phys_mem +
+			   perf_test_args.gpa +
 			   (guest_percpu_mem_size * nr_vcpus) +
 			   perf_test_args.host_page_size +
 			   perf_test_args.guest_page_size);
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 10/15] KVM: selftests: Remove perf_test_args.host_page_size
  2021-02-10 23:06 [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Sean Christopherson
                   ` (8 preceding siblings ...)
  2021-02-10 23:06 ` [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  1:26   ` Ben Gardon
  2021-02-10 23:06 ` [PATCH 11/15] KVM: selftests: Create VM with adjusted number of guest pages for perf tests Sean Christopherson
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 35+ messages in thread
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Remove perf_test_args.host_page_size and instead use getpagesize() so
that it's somewhat obvious that tests which care about the host page
size actually care about the system page size, not the hardware page
size, i.e. that the logic is unchanged if hugepages are in play.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/demand_paging_test.c          | 8 ++++----
 tools/testing/selftests/kvm/include/perf_test_util.h      | 1 -
 tools/testing/selftests/kvm/lib/perf_test_util.c          | 6 ++----
 .../selftests/kvm/memslot_modification_stress_test.c      | 2 +-
 4 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 0cbf111e6c21..b937a65b0e6d 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -83,7 +83,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 
 	copy.src = (uint64_t)guest_data_prototype;
 	copy.dst = addr;
-	copy.len = perf_test_args.host_page_size;
+	copy.len = getpagesize();
 	copy.mode = 0;
 
 	clock_gettime(CLOCK_MONOTONIC, &start);
@@ -100,7 +100,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 	PER_PAGE_DEBUG("UFFDIO_COPY %d \t%ld ns\n", tid,
 		       timespec_to_ns(ts_diff));
 	PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n",
-		       perf_test_args.host_page_size, addr, tid);
+		       getpagesize(), addr, tid);
 
 	return 0;
 }
@@ -271,10 +271,10 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	perf_test_args.wr_fract = 1;
 
-	guest_data_prototype = malloc(perf_test_args.host_page_size);
+	guest_data_prototype = malloc(getpagesize());
 	TEST_ASSERT(guest_data_prototype,
 		    "Failed to allocate buffer for guest data pattern");
-	memset(guest_data_prototype, 0xAB, perf_test_args.host_page_size);
+	memset(guest_data_prototype, 0xAB, getpagesize());
 
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index cccf1c44bddb..223fe6b79a04 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -28,7 +28,6 @@ struct perf_test_vcpu_args {
 
 struct perf_test_args {
 	struct kvm_vm *vm;
-	uint64_t host_page_size;
 	uint64_t gpa;
 	uint64_t guest_page_size;
 	int wr_fract;
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 03f125236021..982a86c8eeaa 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -57,8 +57,6 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
-	pta->host_page_size = getpagesize();
-
 	/*
 	 * Snapshot the non-huge page size.  This is used by the guest code to
 	 * access/dirty pages at the logging granularity.
@@ -68,7 +66,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	guest_num_pages = vm_adjust_num_guest_pages(mode,
 				(vcpus * vcpu_memory_bytes) / pta->guest_page_size);
 
-	TEST_ASSERT(vcpu_memory_bytes % pta->host_page_size == 0,
+	TEST_ASSERT(vcpu_memory_bytes % getpagesize() == 0,
 		    "Guest memory size is not host page size aligned.");
 	TEST_ASSERT(vcpu_memory_bytes % pta->guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
@@ -88,7 +86,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 		    guest_num_pages, vm_get_max_gfn(vm), vcpus, vcpu_memory_bytes);
 
 	pta->gpa = (vm_get_max_gfn(vm) - guest_num_pages) * pta->guest_page_size;
-	pta->gpa &= ~(pta->host_page_size - 1);
+	pta->gpa &= ~(getpagesize() - 1);
 	if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
 	    backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
 		pta->gpa &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index 569bb1f55bdf..b3b8f08e91ad 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -123,7 +123,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 			   p->nr_memslot_modifications,
 			   perf_test_args.gpa +
 			   (guest_percpu_mem_size * nr_vcpus) +
-			   perf_test_args.host_page_size +
+			   getpagesize() +
 			   perf_test_args.guest_page_size);
 
 	run_vcpus = false;
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 11/15] KVM: selftests: Create VM with adjusted number of guest pages for perf tests
  2021-02-10 23:06 [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Sean Christopherson
                   ` (9 preceding siblings ...)
  2021-02-10 23:06 ` [PATCH 10/15] KVM: selftests: Remove perf_test_args.host_page_size Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11  1:32   ` Ben Gardon
  2021-02-10 23:06 ` [PATCH 12/15] KVM: selftests: Fill per-vCPU struct during "perf_test" VM creation Sean Christopherson
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 35+ messages in thread
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Use the already computed guest_num_pages when creating the so-called
extra VM pages for a perf test, and add a comment explaining why the
pages are allocated as extra pages.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/lib/perf_test_util.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 982a86c8eeaa..9b0cfdf10772 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -71,9 +71,12 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	TEST_ASSERT(vcpu_memory_bytes % pta->guest_page_size == 0,
 		    "Guest memory size is not guest page size aligned.");
 
-	vm = vm_create_with_vcpus(mode, vcpus,
-				  (vcpus * vcpu_memory_bytes) / pta->guest_page_size,
-				  0, guest_code, NULL);
+	/*
+	 * Pass guest_num_pages to populate the page tables for test memory.
+	 * The memory is also added to memslot 0, but that's a benign side
+	 * effect as KVM allows aliasing HVAs in memslots.
+	 */
+	vm = vm_create_with_vcpus(mode, vcpus, 0, guest_num_pages, guest_code, NULL);
 	pta->vm = vm;
 
 	/*
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 12/15] KVM: selftests: Fill per-vCPU struct during "perf_test" VM creation
  2021-02-10 23:06 [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Sean Christopherson
                   ` (10 preceding siblings ...)
  2021-02-10 23:06 ` [PATCH 11/15] KVM: selftests: Create VM with adjusted number of guest pages for perf tests Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-10 23:06 ` [PATCH 13/15] KVM: selftests: Sync perf_test_args to guest during " Sean Christopherson
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 35+ messages in thread
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Fill the per-vCPU args when creating the perf_test VM instead of having
the caller do so.  This helps ensure that any adjustments to the number
of pages (and thus vcpu_memory_bytes) are reflected in the per-vCPU args.
Automatically filling the per-vCPU args will also allow a future patch
to do the sync to the guest during creation.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 .../selftests/kvm/demand_paging_test.c        |  6 +-
 .../selftests/kvm/dirty_log_perf_test.c       |  6 +-
 .../selftests/kvm/include/perf_test_util.h    |  6 +-
 .../selftests/kvm/lib/perf_test_util.c        | 74 ++++++++++---------
 .../kvm/memslot_modification_stress_test.c    |  6 +-
 5 files changed, 49 insertions(+), 49 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index b937a65b0e6d..00f2c795b68d 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -267,7 +267,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	int r;
 
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
-				 VM_MEM_SRC_ANONYMOUS);
+				 VM_MEM_SRC_ANONYMOUS,
+				 p->partition_vcpu_memory_access);
 
 	perf_test_args.wr_fract = 1;
 
@@ -279,9 +280,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
-	perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size,
-			      p->partition_vcpu_memory_access);
-
 	if (p->use_uffd) {
 		uffd_handler_threads =
 			malloc(nr_vcpus * sizeof(*uffd_handler_threads));
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 04a2641261be..2c809452eac1 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -114,7 +114,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct timespec clear_dirty_log_total = (struct timespec){0};
 
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
-				 p->backing_src);
+				 p->backing_src,
+				 p->partition_vcpu_memory_access);
 
 	perf_test_args.wr_fract = p->wr_fract;
 
@@ -132,9 +133,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
-	perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size,
-			      p->partition_vcpu_memory_access);
-
 	sync_global_to_guest(vm, perf_test_args);
 
 	/* Start the iterations */
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 223fe6b79a04..3a21e82a0173 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -39,10 +39,8 @@ extern struct perf_test_args perf_test_args;
 
 struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 				   uint64_t vcpu_memory_bytes,
-				   enum vm_mem_backing_src_type backing_src);
+				   enum vm_mem_backing_src_type backing_src,
+				   bool partition_vcpu_memory_access);
 void perf_test_destroy_vm(struct kvm_vm *vm);
-void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
-			   uint64_t vcpu_memory_bytes,
-			   bool partition_vcpu_memory_access);
 
 #endif /* SELFTEST_KVM_PERF_TEST_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 9b0cfdf10772..3aa99365726b 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -47,9 +47,45 @@ static void guest_code(uint32_t vcpu_id)
 	}
 }
 
+
+static void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
+				  uint64_t vcpu_memory_bytes,
+				  bool partition_vcpu_memory_access)
+{
+	struct perf_test_args *pta = &perf_test_args;
+	struct perf_test_vcpu_args *vcpu_args;
+	int vcpu_id;
+
+	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
+		vcpu_args = &pta->vcpu_args[vcpu_id];
+
+		vcpu_args->vcpu_id = vcpu_id;
+		if (partition_vcpu_memory_access) {
+			vcpu_args->gva = guest_test_virt_mem +
+					 (vcpu_id * vcpu_memory_bytes);
+			vcpu_args->pages = vcpu_memory_bytes /
+					   pta->guest_page_size;
+			vcpu_args->gpa = pta->gpa +
+					 (vcpu_id * vcpu_memory_bytes);
+		} else {
+			vcpu_args->gva = guest_test_virt_mem;
+			vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
+					   pta->guest_page_size;
+			vcpu_args->gpa = pta->gpa;
+		}
+
+		pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
+			 vcpu_id, vcpu_args->gpa, vcpu_args->gpa +
+			 (vcpu_args->pages * pta->guest_page_size));
+	}
+}
+
+
+
 struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 				   uint64_t vcpu_memory_bytes,
-				   enum vm_mem_backing_src_type backing_src)
+				   enum vm_mem_backing_src_type backing_src,
+				   bool partition_vcpu_memory_access)
 {
 	struct perf_test_args *pta = &perf_test_args;
 	struct kvm_vm *vm;
@@ -65,6 +101,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 
 	guest_num_pages = vm_adjust_num_guest_pages(mode,
 				(vcpus * vcpu_memory_bytes) / pta->guest_page_size);
+	vcpu_memory_bytes = (guest_num_pages * pta->guest_page_size) / vcpus;
 
 	TEST_ASSERT(vcpu_memory_bytes % getpagesize() == 0,
 		    "Guest memory size is not host page size aligned.");
@@ -106,6 +143,9 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	/* Do mapping for the demand paging memory slot */
 	virt_map(vm, guest_test_virt_mem, pta->gpa, guest_num_pages, 0);
 
+	perf_test_setup_vcpus(vm, vcpus, vcpu_memory_bytes,
+			      partition_vcpu_memory_access);
+
 	ucall_init(vm, NULL);
 
 	return vm;
@@ -116,35 +156,3 @@ void perf_test_destroy_vm(struct kvm_vm *vm)
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
 }
-
-void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
-			   uint64_t vcpu_memory_bytes,
-			   bool partition_vcpu_memory_access)
-{
-	struct perf_test_args *pta = &perf_test_args;
-	struct perf_test_vcpu_args *vcpu_args;
-	int vcpu_id;
-
-	for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
-		vcpu_args = &pta->vcpu_args[vcpu_id];
-
-		vcpu_args->vcpu_id = vcpu_id;
-		if (partition_vcpu_memory_access) {
-			vcpu_args->gva = guest_test_virt_mem +
-					 (vcpu_id * vcpu_memory_bytes);
-			vcpu_args->pages = vcpu_memory_bytes /
-					   pta->guest_page_size;
-			vcpu_args->gpa = pta->gpa +
-					 (vcpu_id * vcpu_memory_bytes);
-		} else {
-			vcpu_args->gva = guest_test_virt_mem;
-			vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
-					   pta->guest_page_size;
-			vcpu_args->gpa = pta->gpa;
-		}
-
-		pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
-			 vcpu_id, vcpu_args->gpa, vcpu_args->gpa +
-			 (vcpu_args->pages * pta->guest_page_size));
-	}
-}
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index b3b8f08e91ad..949822833b6b 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -98,16 +98,14 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	int vcpu_id;
 
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
-				 VM_MEM_SRC_ANONYMOUS);
+				 VM_MEM_SRC_ANONYMOUS,
+				 p->partition_vcpu_memory_access);
 
 	perf_test_args.wr_fract = 1;
 
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
-	perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size,
-			      p->partition_vcpu_memory_access);
-
 	/* Export the shared variables to the guest */
 	sync_global_to_guest(vm, perf_test_args);
 
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 13/15] KVM: selftests: Sync perf_test_args to guest during VM creation
  2021-02-10 23:06 [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Sean Christopherson
                   ` (11 preceding siblings ...)
  2021-02-10 23:06 ` [PATCH 12/15] KVM: selftests: Fill per-vCPU struct during "perf_test" VM creation Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-10 23:06 ` [PATCH 14/15] KVM: selftests: Track size of per-VM memslot in perf_test_args Sean Christopherson
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 35+ messages in thread
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Copy perf_test_args to the guest during VM creation instead of relying on
the caller to do so at their leisure.  Ideally, tests wouldn't even be
able to modify perf_test_args, i.e. they would have no motivation to do
the sync, but enforcing that is arguably a net negative for readability.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/demand_paging_test.c           | 7 +------
 tools/testing/selftests/kvm/dirty_log_perf_test.c          | 6 +-----
 tools/testing/selftests/kvm/include/perf_test_util.h       | 1 +
 tools/testing/selftests/kvm/lib/perf_test_util.c           | 6 ++++++
 .../selftests/kvm/memslot_modification_stress_test.c       | 7 +------
 5 files changed, 10 insertions(+), 17 deletions(-)

diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 00f2c795b68d..d06ff8f37c53 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -267,11 +267,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	int r;
 
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
-				 VM_MEM_SRC_ANONYMOUS,
+				 VM_MEM_SRC_ANONYMOUS, 1,
 				 p->partition_vcpu_memory_access);
 
-	perf_test_args.wr_fract = 1;
-
 	guest_data_prototype = malloc(getpagesize());
 	TEST_ASSERT(guest_data_prototype,
 		    "Failed to allocate buffer for guest data pattern");
@@ -319,9 +317,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		}
 	}
 
-	/* Export the shared variables to the guest */
-	sync_global_to_guest(vm, perf_test_args);
-
 	pr_info("Finished creating vCPUs and starting uffd threads\n");
 
 	clock_gettime(CLOCK_MONOTONIC, &start);
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 2c809452eac1..9ab24bf50c60 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -114,11 +114,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct timespec clear_dirty_log_total = (struct timespec){0};
 
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
-				 p->backing_src,
+				 p->backing_src, p->wr_fract,
 				 p->partition_vcpu_memory_access);
 
-	perf_test_args.wr_fract = p->wr_fract;
-
 	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm_get_page_shift(vm);
 	guest_num_pages = vm_adjust_num_guest_pages(mode, guest_num_pages);
 	host_num_pages = vm_num_host_pages(mode, guest_num_pages);
@@ -133,8 +131,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
-	sync_global_to_guest(vm, perf_test_args);
-
 	/* Start the iterations */
 	iteration = 0;
 	host_quit = false;
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 3a21e82a0173..330e528f206f 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -40,6 +40,7 @@ extern struct perf_test_args perf_test_args;
 struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 				   uint64_t vcpu_memory_bytes,
 				   enum vm_mem_backing_src_type backing_src,
+				   int wr_fract,
 				   bool partition_vcpu_memory_access);
 void perf_test_destroy_vm(struct kvm_vm *vm);
 
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 3aa99365726b..6f41fe2685cb 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -85,6 +85,7 @@ static void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
 struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 				   uint64_t vcpu_memory_bytes,
 				   enum vm_mem_backing_src_type backing_src,
+				   int wr_fract,
 				   bool partition_vcpu_memory_access)
 {
 	struct perf_test_args *pta = &perf_test_args;
@@ -93,6 +94,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 
 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
 
+	pta->wr_fract = wr_fract;
+
 	/*
 	 * Snapshot the non-huge page size.  This is used by the guest code to
 	 * access/dirty pages at the logging granularity.
@@ -148,6 +151,9 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 
 	ucall_init(vm, NULL);
 
+	/* Export the shared variables to the guest */
+	sync_global_to_guest(vm, perf_test_args);
+
 	return vm;
 }
 
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index 949822833b6b..5ea9d7ef248e 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -98,17 +98,12 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	int vcpu_id;
 
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
-				 VM_MEM_SRC_ANONYMOUS,
+				 VM_MEM_SRC_ANONYMOUS, 1,
 				 p->partition_vcpu_memory_access);
 
-	perf_test_args.wr_fract = 1;
-
 	vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
 	TEST_ASSERT(vcpu_threads, "Memory allocation failed");
 
-	/* Export the shared variables to the guest */
-	sync_global_to_guest(vm, perf_test_args);
-
 	pr_info("Finished creating vCPUs\n");
 
 	for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++)
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 14/15] KVM: selftests: Track size of per-VM memslot in perf_test_args
  2021-02-10 23:06 [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Sean Christopherson
                   ` (12 preceding siblings ...)
  2021-02-10 23:06 ` [PATCH 13/15] KVM: selftests: Sync perf_test_args to guest during " Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-10 23:06 ` [PATCH 15/15] KVM: selftests: Get rid of gorilla math in memslots modification test Sean Christopherson
  2021-02-11 11:58 ` [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Andrew Jones
  15 siblings, 0 replies; 35+ messages in thread
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Capture the final size of the dedicated memslot created by perf_test_args
so that tests don't have to try to guesstimate the final result.  This
will be used by the memslots modification test to remove some truly
mind-boggling code.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 tools/testing/selftests/kvm/include/perf_test_util.h | 1 +
 tools/testing/selftests/kvm/lib/perf_test_util.c     | 8 +++++---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index 330e528f206f..4da2d2dbf4c2 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -30,6 +30,7 @@ struct perf_test_args {
 	struct kvm_vm *vm;
 	uint64_t gpa;
 	uint64_t guest_page_size;
+	uint64_t nr_bytes;
 	int wr_fract;
 
 	struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS];
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 6f41fe2685cb..00953d15388f 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -80,8 +80,6 @@ static void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
 	}
 }
 
-
-
 struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 				   uint64_t vcpu_memory_bytes,
 				   enum vm_mem_backing_src_type backing_src,
@@ -104,7 +102,11 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 
 	guest_num_pages = vm_adjust_num_guest_pages(mode,
 				(vcpus * vcpu_memory_bytes) / pta->guest_page_size);
-	vcpu_memory_bytes = (guest_num_pages * pta->guest_page_size) / vcpus;
+	pta->nr_bytes = guest_num_pages * pta->guest_page_size;
+
+	TEST_ASSERT(pta->nr_bytes % vcpus == 0,
+		    "Guest pages adjustment yielded a weird number of pages.");
+	vcpu_memory_bytes = pta->nr_bytes / vcpus;
 
 	TEST_ASSERT(vcpu_memory_bytes % getpagesize() == 0,
 		    "Guest memory size is not host page size aligned.");
-- 
2.30.0.478.g8a0d178c01-goog



* [PATCH 15/15] KVM: selftests: Get rid of gorilla math in memslots modification test
  2021-02-10 23:06 [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Sean Christopherson
                   ` (13 preceding siblings ...)
  2021-02-10 23:06 ` [PATCH 14/15] KVM: selftests: Track size of per-VM memslot in perf_test_args Sean Christopherson
@ 2021-02-10 23:06 ` Sean Christopherson
  2021-02-11 11:58 ` [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Andrew Jones
  15 siblings, 0 replies; 35+ messages in thread
From: Sean Christopherson @ 2021-02-10 23:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, Ben Gardon, Yanan Wang,
	Andrew Jones, Peter Xu, Aaron Lewis

Use the recently added perf_test_args.nr_bytes to define the location of
the dummy memslot created/removed by the memslot modification test.
Presumably, the goal of the existing code is simply to ensure the GPA of
the dummy memslot doesn't overlap with perf_test's memslot.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 .../testing/selftests/kvm/memslot_modification_stress_test.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index 5ea9d7ef248e..cfc2b75619ba 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -114,10 +114,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 
 	add_remove_memslot(vm, p->memslot_modification_delay,
 			   p->nr_memslot_modifications,
-			   perf_test_args.gpa +
-			   (guest_percpu_mem_size * nr_vcpus) +
-			   getpagesize() +
-			   perf_test_args.guest_page_size);
+			   perf_test_args.gpa + perf_test_args.nr_bytes);
 
 	run_vcpus = false;
 
-- 
2.30.0.478.g8a0d178c01-goog


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH 02/15] KVM: selftests: Expose align() helpers to tests
  2021-02-10 23:06 ` [PATCH 02/15] KVM: selftests: Expose align() helpers to tests Sean Christopherson
@ 2021-02-11  0:49   ` Ben Gardon
  0 siblings, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  0:49 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Refactor align() to work with non-pointers, add align_ptr() for use with
> pointers, and expose both helpers so that they can be used by tests
> and/or other utilities.  The align() helper in particular will be used
> to ensure gpa alignment for hugepages.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  tools/testing/selftests/kvm/include/kvm_util.h | 15 +++++++++++++++
>  tools/testing/selftests/kvm/lib/kvm_util.c     | 11 +----------
>  2 files changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 2d7eb6989e83..4b5d2362a68a 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -79,6 +79,21 @@ struct vm_guest_mode_params {
>  };
>  extern const struct vm_guest_mode_params vm_guest_mode_params[];
>
> +/* Aligns x up to the next multiple of size. Size must be a power of 2. */

It might also be worth updating this comment to clarify that the
function rounds down, not up.

> +static inline uint64_t align(uint64_t x, uint64_t size)
> +{
> +       uint64_t mask = size - 1;
> +
> +       TEST_ASSERT(size != 0 && !(size & (size - 1)),
> +                   "size not a power of 2: %lu", size);
> +       return ((x + mask) & ~mask);
> +}
> +
> +static inline void *align_ptr(void *x, size_t size)
> +{
> +       return (void *)align((unsigned long)x, size);
> +}
> +
>  int kvm_check_cap(long cap);
>  int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
>  int vcpu_enable_cap(struct kvm_vm *vm, uint32_t vcpu_id,
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 960f4c5129ff..584167c6dbc7 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -21,15 +21,6 @@
>  #define KVM_UTIL_PGS_PER_HUGEPG 512
>  #define KVM_UTIL_MIN_PFN       2
>
> -/* Aligns x up to the next multiple of size. Size must be a power of 2. */
> -static void *align(void *x, size_t size)
> -{
> -       size_t mask = size - 1;
> -       TEST_ASSERT(size != 0 && !(size & (size - 1)),
> -                   "size not a power of 2: %lu", size);
> -       return (void *) (((size_t) x + mask) & ~mask);
> -}
> -
>  /*
>   * Capability
>   *
> @@ -757,7 +748,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>                     region->mmap_start, errno);
>
>         /* Align host address */
> -       region->host_mem = align(region->mmap_start, alignment);
> +       region->host_mem = align_ptr(region->mmap_start, alignment);
>
>         /* As needed perform madvise */
>         if (src_type == VM_MEM_SRC_ANONYMOUS || src_type == VM_MEM_SRC_ANONYMOUS_THP) {
> --
> 2.30.0.478.g8a0d178c01-goog
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 01/15] KVM: selftests: Explicitly state indicies for vm_guest_mode_params array
  2021-02-10 23:06 ` [PATCH 01/15] KVM: selftests: Explicitly state indicies for vm_guest_mode_params array Sean Christopherson
@ 2021-02-11  0:50   ` Ben Gardon
  0 siblings, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  0:50 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Explicitly state the indices when populating vm_guest_mode_params to
> make it marginally easier to visualize what's going on.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index d787cb802b4a..960f4c5129ff 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -154,13 +154,13 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
>                "Missing new mode strings?");
>
>  const struct vm_guest_mode_params vm_guest_mode_params[] = {
> -       { 52, 48,  0x1000, 12 },
> -       { 52, 48, 0x10000, 16 },
> -       { 48, 48,  0x1000, 12 },
> -       { 48, 48, 0x10000, 16 },
> -       { 40, 48,  0x1000, 12 },
> -       { 40, 48, 0x10000, 16 },
> -       {  0,  0,  0x1000, 12 },
> +       [VM_MODE_P52V48_4K]     = { 52, 48,  0x1000, 12 },
> +       [VM_MODE_P52V48_64K]    = { 52, 48, 0x10000, 16 },
> +       [VM_MODE_P48V48_4K]     = { 48, 48,  0x1000, 12 },
> +       [VM_MODE_P48V48_64K]    = { 48, 48, 0x10000, 16 },
> +       [VM_MODE_P40V48_4K]     = { 40, 48,  0x1000, 12 },
> +       [VM_MODE_P40V48_64K]    = { 40, 48, 0x10000, 16 },
> +       [VM_MODE_PXXV48_4K]     = {  0,  0,  0x1000, 12 },
>  };
>  _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
>                "Missing new mode params?");
> --
> 2.30.0.478.g8a0d178c01-goog
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots
  2021-02-10 23:06 ` [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots Sean Christopherson
@ 2021-02-11  0:52   ` Ben Gardon
  2021-02-25  7:40   ` wangyanan (Y)
  1 sibling, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  0:52 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Align the HVA for HugeTLB memslots, not just THP memslots.  Add an
> assert so any future backing types are forced to assess whether or not
> they need to be aligned.
>
> Cc: Ben Gardon <bgardon@google.com>
> Cc: Yanan Wang <wangyanan55@huawei.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 584167c6dbc7..deaeb47b5a6d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -731,8 +731,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>         alignment = 1;
>  #endif
>
> -       if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> +       if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
> +           src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
>                 alignment = max(huge_page_size, alignment);
> +       else
> +               ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
>
>         /* Add enough memory to align up if necessary */
>         if (alignment > 1)
> --
> 2.30.0.478.g8a0d178c01-goog
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 05/15] KVM: selftests: Require GPA to be aligned when backed by hugepages
  2021-02-10 23:06 ` [PATCH 05/15] KVM: selftests: Require GPA to be aligned when backed by hugepages Sean Christopherson
@ 2021-02-11  1:01   ` Ben Gardon
  0 siblings, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  1:01 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Assert that the GPA for a memslot backed by a hugepage is 1gb aligned,
> and fix perf_test_util accordingly.  Lack of GPA alignment prevents KVM
> from backing the guest with hugepages, e.g. x86's write-protection of
> hugepages when dirty logging is activated is otherwise not exercised.
>
> Add a comment explaining that guest_page_size is for non-huge pages to
> try and avoid confusion about what it actually tracks.
>
> Cc: Ben Gardon <bgardon@google.com>
> Cc: Yanan Wang <wangyanan55@huawei.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c       | 2 ++
>  tools/testing/selftests/kvm/lib/perf_test_util.c | 9 +++++++++
>  2 files changed, 11 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 2e497fbab6ae..855d20784ba7 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -735,6 +735,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>         else
>                 ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
>
> +       ASSERT_EQ(guest_paddr, align(guest_paddr, alignment));
> +
>         /* Add enough memory to align up if necessary */
>         if (alignment > 1)
>                 region->mmap_size += alignment;
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 81490b9b4e32..f187b86f2e14 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -58,6 +58,11 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>
>         perf_test_args.host_page_size = getpagesize();
> +
> +       /*
> +        * Snapshot the non-huge page size.  This is used by the guest code to
> +        * access/dirty pages at the logging granularity.
> +        */
>         perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
>
>         guest_num_pages = vm_adjust_num_guest_pages(mode,
> @@ -87,6 +92,10 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>         guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
>                               perf_test_args.guest_page_size;
>         guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
> +       if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
> +           backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
> +               guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);

You could use the align helper here as well. That would make this a
little easier for me to read.

> +
>  #ifdef __s390x__
>         /* Align to 1M (segment size) */
>         guest_test_phys_mem &= ~((1 << 20) - 1);
> --
> 2.30.0.478.g8a0d178c01-goog
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 06/15] KVM: selftests: Use shorthand local var to access struct perf_tests_args
  2021-02-10 23:06 ` [PATCH 06/15] KVM: selftests: Use shorthand local var to access struct perf_tests_args Sean Christopherson
@ 2021-02-11  1:09   ` Ben Gardon
  0 siblings, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  1:09 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Use 'pta' as a local pointer to the global perf_tests_args in order to
> shorten line lengths and make the code borderline readable.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  .../selftests/kvm/lib/perf_test_util.c        | 36 ++++++++++---------
>  1 file changed, 19 insertions(+), 17 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index f187b86f2e14..73b0fccc28b9 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -23,7 +23,8 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
>   */
>  static void guest_code(uint32_t vcpu_id)
>  {
> -       struct perf_test_vcpu_args *vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
> +       struct perf_test_args *pta = &perf_test_args;
> +       struct perf_test_vcpu_args *vcpu_args = &pta->vcpu_args[vcpu_id];
>         uint64_t gva;
>         uint64_t pages;
>         int i;
> @@ -36,9 +37,9 @@ static void guest_code(uint32_t vcpu_id)
>
>         while (true) {
>                 for (i = 0; i < pages; i++) {
> -                       uint64_t addr = gva + (i * perf_test_args.guest_page_size);
> +                       uint64_t addr = gva + (i * pta->guest_page_size);
>
> -                       if (i % perf_test_args.wr_fract == 0)
> +                       if (i % pta->wr_fract == 0)
>                                 *(uint64_t *)addr = 0x0123456789ABCDEF;
>                         else
>                                 READ_ONCE(*(uint64_t *)addr);
> @@ -52,32 +53,32 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>                                    uint64_t vcpu_memory_bytes,
>                                    enum vm_mem_backing_src_type backing_src)
>  {
> +       struct perf_test_args *pta = &perf_test_args;
>         struct kvm_vm *vm;
>         uint64_t guest_num_pages;
>
>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>
> -       perf_test_args.host_page_size = getpagesize();
> +       pta->host_page_size = getpagesize();
>
>         /*
>          * Snapshot the non-huge page size.  This is used by the guest code to
>          * access/dirty pages at the logging granularity.
>          */
> -       perf_test_args.guest_page_size = vm_guest_mode_params[mode].page_size;
> +       pta->guest_page_size = vm_guest_mode_params[mode].page_size;
>
>         guest_num_pages = vm_adjust_num_guest_pages(mode,
> -                               (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size);
> +                               (vcpus * vcpu_memory_bytes) / pta->guest_page_size);
>
> -       TEST_ASSERT(vcpu_memory_bytes % perf_test_args.host_page_size == 0,
> +       TEST_ASSERT(vcpu_memory_bytes % pta->host_page_size == 0,
>                     "Guest memory size is not host page size aligned.");
> -       TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0,
> +       TEST_ASSERT(vcpu_memory_bytes % pta->guest_page_size == 0,
>                     "Guest memory size is not guest page size aligned.");
>
>         vm = vm_create_with_vcpus(mode, vcpus,
> -                                 (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size,
> +                                 (vcpus * vcpu_memory_bytes) / pta->guest_page_size,
>                                   0, guest_code, NULL);
> -
> -       perf_test_args.vm = vm;
> +       pta->vm = vm;
>
>         /*
>          * If there should be more memory in the guest test region than there
> @@ -90,8 +91,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>                     vcpu_memory_bytes);
>
>         guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
> -                             perf_test_args.guest_page_size;
> -       guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
> +                             pta->guest_page_size;
> +       guest_test_phys_mem &= ~(pta->host_page_size - 1);

Not really germane to this patch, but the align macro could be used
here as well.

>         if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
>             backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
>                 guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
> @@ -125,30 +126,31 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
>                            uint64_t vcpu_memory_bytes,
>                            bool partition_vcpu_memory_access)
>  {
> +       struct perf_test_args *pta = &perf_test_args;
>         vm_paddr_t vcpu_gpa;
>         struct perf_test_vcpu_args *vcpu_args;
>         int vcpu_id;
>
>         for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> -               vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
> +               vcpu_args = &pta->vcpu_args[vcpu_id];
>
>                 vcpu_args->vcpu_id = vcpu_id;
>                 if (partition_vcpu_memory_access) {
>                         vcpu_args->gva = guest_test_virt_mem +
>                                          (vcpu_id * vcpu_memory_bytes);
>                         vcpu_args->pages = vcpu_memory_bytes /
> -                                          perf_test_args.guest_page_size;
> +                                          pta->guest_page_size;
>                         vcpu_gpa = guest_test_phys_mem +
>                                    (vcpu_id * vcpu_memory_bytes);
>                 } else {
>                         vcpu_args->gva = guest_test_virt_mem;
>                         vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
> -                                          perf_test_args.guest_page_size;
> +                                          pta->guest_page_size;
>                         vcpu_gpa = guest_test_phys_mem;
>                 }
>
>                 pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
>                          vcpu_id, vcpu_gpa, vcpu_gpa +
> -                        (vcpu_args->pages * perf_test_args.guest_page_size));
> +                        (vcpu_args->pages * pta->guest_page_size));
>         }
>  }
> --
> 2.30.0.478.g8a0d178c01-goog
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args
  2021-02-10 23:06 ` [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args Sean Christopherson
@ 2021-02-11  1:22   ` Ben Gardon
  2021-02-11  1:56     ` Sean Christopherson
  0 siblings, 1 reply; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  1:22 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Move the per-VM GPA into perf_test_args instead of storing it as a
> separate global variable.  It's not obvious that guest_test_phys_mem
> holds a GPA, nor that it's connected/coupled with per_vcpu->gpa.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  .../selftests/kvm/include/perf_test_util.h    |  8 +-----
>  .../selftests/kvm/lib/perf_test_util.c        | 28 ++++++++-----------
>  .../kvm/memslot_modification_stress_test.c    |  2 +-
>  3 files changed, 13 insertions(+), 25 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
> index 4d53238b139f..cccf1c44bddb 100644
> --- a/tools/testing/selftests/kvm/include/perf_test_util.h
> +++ b/tools/testing/selftests/kvm/include/perf_test_util.h
> @@ -29,6 +29,7 @@ struct perf_test_vcpu_args {
>  struct perf_test_args {
>         struct kvm_vm *vm;
>         uint64_t host_page_size;
> +       uint64_t gpa;
>         uint64_t guest_page_size;
>         int wr_fract;
>
> @@ -37,13 +38,6 @@ struct perf_test_args {
>
>  extern struct perf_test_args perf_test_args;
>
> -/*
> - * Guest physical memory offset of the testing memory slot.
> - * This will be set to the topmost valid physical address minus
> - * the test memory size.
> - */
> -extern uint64_t guest_test_phys_mem;
> -
>  struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>                                    uint64_t vcpu_memory_bytes,
>                                    enum vm_mem_backing_src_type backing_src);
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index f22ce1836547..03f125236021 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -9,8 +9,6 @@
>
>  struct perf_test_args perf_test_args;
>
> -uint64_t guest_test_phys_mem;
> -
>  /*
>   * Guest virtual memory offset of the testing memory slot.
>   * Must not conflict with identity mapped test code.
> @@ -87,29 +85,25 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>         TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
>                     "Requested more guest memory than address space allows.\n"
>                     "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
> -                   guest_num_pages, vm_get_max_gfn(vm), vcpus,
> -                   vcpu_memory_bytes);
> +                   guest_num_pages, vm_get_max_gfn(vm), vcpus, vcpu_memory_bytes);
>
> -       guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
> -                             pta->guest_page_size;
> -       guest_test_phys_mem &= ~(pta->host_page_size - 1);
> +       pta->gpa = (vm_get_max_gfn(vm) - guest_num_pages) * pta->guest_page_size;
> +       pta->gpa &= ~(pta->host_page_size - 1);

Also not related to this patch, but another case for align.

>         if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
>             backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
> -               guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
> -
> +               pta->gpa &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);

also align

>  #ifdef __s390x__
>         /* Align to 1M (segment size) */
> -       guest_test_phys_mem &= ~((1 << 20) - 1);
> +       pta->gpa &= ~((1 << 20) - 1);

And here again (oof)

>  #endif
> -       pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
> +       pr_info("guest physical test memory offset: 0x%lx\n", pta->gpa);
>
>         /* Add an extra memory slot for testing */
> -       vm_userspace_mem_region_add(vm, backing_src, guest_test_phys_mem,
> -                                   PERF_TEST_MEM_SLOT_INDEX,
> -                                   guest_num_pages, 0);
> +       vm_userspace_mem_region_add(vm, backing_src, pta->gpa,
> +                                   PERF_TEST_MEM_SLOT_INDEX, guest_num_pages, 0);
>
>         /* Do mapping for the demand paging memory slot */
> -       virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages, 0);
> +       virt_map(vm, guest_test_virt_mem, pta->gpa, guest_num_pages, 0);
>
>         ucall_init(vm, NULL);
>
> @@ -139,13 +133,13 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
>                                          (vcpu_id * vcpu_memory_bytes);
>                         vcpu_args->pages = vcpu_memory_bytes /
>                                            pta->guest_page_size;
> -                       vcpu_args->gpa = guest_test_phys_mem +
> +                       vcpu_args->gpa = pta->gpa +
>                                          (vcpu_id * vcpu_memory_bytes);
>                 } else {
>                         vcpu_args->gva = guest_test_virt_mem;
>                         vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
>                                            pta->guest_page_size;
> -                       vcpu_args->gpa = guest_test_phys_mem;
> +                       vcpu_args->gpa = pta->gpa;
>                 }
>
>                 pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
> diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> index 6096bf0a5b34..569bb1f55bdf 100644
> --- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> +++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> @@ -121,7 +121,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>
>         add_remove_memslot(vm, p->memslot_modification_delay,
>                            p->nr_memslot_modifications,
> -                          guest_test_phys_mem +
> +                          perf_test_args.gpa +
>                            (guest_percpu_mem_size * nr_vcpus) +
>                            perf_test_args.host_page_size +
>                            perf_test_args.guest_page_size);
> --
> 2.30.0.478.g8a0d178c01-goog
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 08/15] KVM: selftests: Use perf util's per-vCPU GPA/pages in demand paging test
  2021-02-10 23:06 ` [PATCH 08/15] KVM: selftests: Use perf util's per-vCPU GPA/pages in demand paging test Sean Christopherson
@ 2021-02-11  1:23   ` Ben Gardon
  0 siblings, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  1:23 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Grab the per-vCPU GPA and number of pages from perf_util in the demand
> paging test instead of duplicating perf_util's calculations.
>
> Note, this may or may not result in a functional change.  It's not clear
> that the test's calculations are guaranteed to yield the same value as
> perf_util, e.g. if guest_percpu_mem_size != vcpu_args->pages.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  .../selftests/kvm/demand_paging_test.c        | 20 +++++--------------
>  1 file changed, 5 insertions(+), 15 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index 5f7a229c3af1..0cbf111e6c21 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -294,24 +294,13 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>                 TEST_ASSERT(pipefds, "Unable to allocate memory for pipefd");
>
>                 for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
> -                       vm_paddr_t vcpu_gpa;
> +                       struct perf_test_vcpu_args *vcpu_args;
>                         void *vcpu_hva;
> -                       uint64_t vcpu_mem_size;
>
> -
> -                       if (p->partition_vcpu_memory_access) {
> -                               vcpu_gpa = guest_test_phys_mem +
> -                                          (vcpu_id * guest_percpu_mem_size);
> -                               vcpu_mem_size = guest_percpu_mem_size;
> -                       } else {
> -                               vcpu_gpa = guest_test_phys_mem;
> -                               vcpu_mem_size = guest_percpu_mem_size * nr_vcpus;
> -                       }
> -                       PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n",
> -                                      vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_mem_size);
> +                       vcpu_args = &perf_test_args.vcpu_args[vcpu_id];
>
>                         /* Cache the HVA pointer of the region */
> -                       vcpu_hva = addr_gpa2hva(vm, vcpu_gpa);
> +                       vcpu_hva = addr_gpa2hva(vm, vcpu_args->gpa);
>
>                         /*
>                          * Set up user fault fd to handle demand paging
> @@ -325,7 +314,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>                                                 &uffd_handler_threads[vcpu_id],
>                                                 pipefds[vcpu_id * 2],
>                                                 p->uffd_delay, &uffd_args[vcpu_id],
> -                                               vcpu_hva, vcpu_mem_size);
> +                                               vcpu_hva,
> +                                               vcpu_args->pages * perf_test_args.guest_page_size);
>                         if (r < 0)
>                                 exit(-r);
>                 }
> --
> 2.30.0.478.g8a0d178c01-goog
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 07/15] KVM: selftests: Capture per-vCPU GPA in perf_test_vcpu_args
  2021-02-10 23:06 ` [PATCH 07/15] KVM: selftests: Capture per-vCPU GPA in perf_test_vcpu_args Sean Christopherson
@ 2021-02-11  1:24   ` Ben Gardon
  0 siblings, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  1:24 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Capture the per-vCPU GPA in perf_test_vcpu_args so that tests can get
> the GPA without having to calculate the GPA on their own.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  tools/testing/selftests/kvm/include/perf_test_util.h | 1 +
>  tools/testing/selftests/kvm/lib/perf_test_util.c     | 9 ++++-----
>  2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
> index 005f2143adeb..4d53238b139f 100644
> --- a/tools/testing/selftests/kvm/include/perf_test_util.h
> +++ b/tools/testing/selftests/kvm/include/perf_test_util.h
> @@ -18,6 +18,7 @@
>  #define PERF_TEST_MEM_SLOT_INDEX       1
>
>  struct perf_test_vcpu_args {
> +       uint64_t gpa;
>         uint64_t gva;
>         uint64_t pages;
>
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 73b0fccc28b9..f22ce1836547 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -127,7 +127,6 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
>                            bool partition_vcpu_memory_access)
>  {
>         struct perf_test_args *pta = &perf_test_args;
> -       vm_paddr_t vcpu_gpa;
>         struct perf_test_vcpu_args *vcpu_args;
>         int vcpu_id;
>
> @@ -140,17 +139,17 @@ void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
>                                          (vcpu_id * vcpu_memory_bytes);
>                         vcpu_args->pages = vcpu_memory_bytes /
>                                            pta->guest_page_size;
> -                       vcpu_gpa = guest_test_phys_mem +
> -                                  (vcpu_id * vcpu_memory_bytes);
> +                       vcpu_args->gpa = guest_test_phys_mem +
> +                                        (vcpu_id * vcpu_memory_bytes);
>                 } else {
>                         vcpu_args->gva = guest_test_virt_mem;
>                         vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
>                                            pta->guest_page_size;
> -                       vcpu_gpa = guest_test_phys_mem;
> +                       vcpu_args->gpa = guest_test_phys_mem;
>                 }
>
>                 pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
> -                        vcpu_id, vcpu_gpa, vcpu_gpa +
> +                        vcpu_id, vcpu_args->gpa, vcpu_args->gpa +
>                          (vcpu_args->pages * pta->guest_page_size));
>         }
>  }
> --
> 2.30.0.478.g8a0d178c01-goog
>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 10/15] KVM: selftests: Remove perf_test_args.host_page_size
  2021-02-10 23:06 ` [PATCH 10/15] KVM: selftests: Remove perf_test_args.host_page_size Sean Christopherson
@ 2021-02-11  1:26   ` Ben Gardon
  0 siblings, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  1:26 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Remove perf_test_args.host_page_size and instead use getpagesize() so
> that it's somewhat obvious that, for tests that care about the host page
> size, they care about the system page size, not the hardware page size,
> e.g. that the logic is unchanged if hugepages are in play.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  tools/testing/selftests/kvm/demand_paging_test.c          | 8 ++++----
>  tools/testing/selftests/kvm/include/perf_test_util.h      | 1 -
>  tools/testing/selftests/kvm/lib/perf_test_util.c          | 6 ++----
>  .../selftests/kvm/memslot_modification_stress_test.c      | 2 +-
>  4 files changed, 7 insertions(+), 10 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index 0cbf111e6c21..b937a65b0e6d 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -83,7 +83,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
>
>         copy.src = (uint64_t)guest_data_prototype;
>         copy.dst = addr;
> -       copy.len = perf_test_args.host_page_size;
> +       copy.len = getpagesize();
>         copy.mode = 0;
>
>         clock_gettime(CLOCK_MONOTONIC, &start);
> @@ -100,7 +100,7 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
>         PER_PAGE_DEBUG("UFFDIO_COPY %d \t%ld ns\n", tid,
>                        timespec_to_ns(ts_diff));
>         PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n",
> -                      perf_test_args.host_page_size, addr, tid);
> +                      getpagesize(), addr, tid);
>
>         return 0;
>  }
> @@ -271,10 +271,10 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>
>         perf_test_args.wr_fract = 1;
>
> -       guest_data_prototype = malloc(perf_test_args.host_page_size);
> +       guest_data_prototype = malloc(getpagesize());
>         TEST_ASSERT(guest_data_prototype,
>                     "Failed to allocate buffer for guest data pattern");
> -       memset(guest_data_prototype, 0xAB, perf_test_args.host_page_size);
> +       memset(guest_data_prototype, 0xAB, getpagesize());
>
>         vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
>         TEST_ASSERT(vcpu_threads, "Memory allocation failed");
> diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
> index cccf1c44bddb..223fe6b79a04 100644
> --- a/tools/testing/selftests/kvm/include/perf_test_util.h
> +++ b/tools/testing/selftests/kvm/include/perf_test_util.h
> @@ -28,7 +28,6 @@ struct perf_test_vcpu_args {
>
>  struct perf_test_args {
>         struct kvm_vm *vm;
> -       uint64_t host_page_size;
>         uint64_t gpa;
>         uint64_t guest_page_size;
>         int wr_fract;
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 03f125236021..982a86c8eeaa 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -57,8 +57,6 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>
>         pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
>
> -       pta->host_page_size = getpagesize();
> -
>         /*
>          * Snapshot the non-huge page size.  This is used by the guest code to
>          * access/dirty pages at the logging granularity.
> @@ -68,7 +66,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>         guest_num_pages = vm_adjust_num_guest_pages(mode,
>                                 (vcpus * vcpu_memory_bytes) / pta->guest_page_size);
>
> -       TEST_ASSERT(vcpu_memory_bytes % pta->host_page_size == 0,
> +       TEST_ASSERT(vcpu_memory_bytes % getpagesize() == 0,
>                     "Guest memory size is not host page size aligned.");
>         TEST_ASSERT(vcpu_memory_bytes % pta->guest_page_size == 0,
>                     "Guest memory size is not guest page size aligned.");
> @@ -88,7 +86,7 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>                     guest_num_pages, vm_get_max_gfn(vm), vcpus, vcpu_memory_bytes);
>
>         pta->gpa = (vm_get_max_gfn(vm) - guest_num_pages) * pta->guest_page_size;
> -       pta->gpa &= ~(pta->host_page_size - 1);
> +       pta->gpa &= ~(getpagesize() - 1);
>         if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
>             backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
>                 pta->gpa &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
> diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> index 569bb1f55bdf..b3b8f08e91ad 100644
> --- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> +++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> @@ -123,7 +123,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>                            p->nr_memslot_modifications,
>                            perf_test_args.gpa +
>                            (guest_percpu_mem_size * nr_vcpus) +
> -                          perf_test_args.host_page_size +
> +                          getpagesize() +
>                            perf_test_args.guest_page_size);
>
>         run_vcpus = false;
> --
> 2.30.0.478.g8a0d178c01-goog
>

* Re: [PATCH 11/15] KVM: selftests: Create VM with adjusted number of guest pages for perf tests
  2021-02-10 23:06 ` [PATCH 11/15] KVM: selftests: Create VM with adjusted number of guest pages for perf tests Sean Christopherson
@ 2021-02-11  1:32   ` Ben Gardon
  0 siblings, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11  1:32 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021 at 3:06 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Use the already computed guest_num_pages when creating the so called
> extra VM pages for a perf test, and add a comment explaining why the
> pages are allocated as extra pages.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  tools/testing/selftests/kvm/lib/perf_test_util.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 982a86c8eeaa..9b0cfdf10772 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -71,9 +71,12 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>         TEST_ASSERT(vcpu_memory_bytes % pta->guest_page_size == 0,
>                     "Guest memory size is not guest page size aligned.");
>
> -       vm = vm_create_with_vcpus(mode, vcpus,
> -                                 (vcpus * vcpu_memory_bytes) / pta->guest_page_size,
> -                                 0, guest_code, NULL);
> +       /*
> +        * Pass guest_num_pages to populate the page tables for test memory.
> +        * The memory is also added to memslot 0, but that's a benign side
> +        * effect as KVM allows aliasing HVAs in memslots.
> +        */
> +       vm = vm_create_with_vcpus(mode, vcpus, 0, guest_num_pages, guest_code, NULL);
>         pta->vm = vm;
>
>         /*
> --
> 2.30.0.478.g8a0d178c01-goog
>

* Re: [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args
  2021-02-11  1:22   ` Ben Gardon
@ 2021-02-11  1:56     ` Sean Christopherson
  2021-02-11 13:12       ` Paolo Bonzini
  0 siblings, 1 reply; 35+ messages in thread
From: Sean Christopherson @ 2021-02-11  1:56 UTC (permalink / raw)
  To: Ben Gardon
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Wed, Feb 10, 2021, Ben Gardon wrote:
> > diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> > index f22ce1836547..03f125236021 100644
> > --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> > +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> > @@ -9,8 +9,6 @@
> >
> >  struct perf_test_args perf_test_args;
> >
> > -uint64_t guest_test_phys_mem;
> > -
> >  /*
> >   * Guest virtual memory offset of the testing memory slot.
> >   * Must not conflict with identity mapped test code.
> > @@ -87,29 +85,25 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
> >         TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
> >                     "Requested more guest memory than address space allows.\n"
> >                     "    guest pages: %lx max gfn: %x vcpus: %d wss: %lx]\n",
> > -                   guest_num_pages, vm_get_max_gfn(vm), vcpus,
> > -                   vcpu_memory_bytes);
> > +                   guest_num_pages, vm_get_max_gfn(vm), vcpus, vcpu_memory_bytes);
> >
> > -       guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
> > -                             pta->guest_page_size;
> > -       guest_test_phys_mem &= ~(pta->host_page_size - 1);
> > +       pta->gpa = (vm_get_max_gfn(vm) - guest_num_pages) * pta->guest_page_size;
> > +       pta->gpa &= ~(pta->host_page_size - 1);
> 
> Also not related to this patch, but another case for align.
> 
> >         if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
> >             backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
> > -               guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
> > -
> > +               pta->gpa &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
> 
> also align
> 
> >  #ifdef __s390x__
> >         /* Align to 1M (segment size) */
> > -       guest_test_phys_mem &= ~((1 << 20) - 1);
> > +       pta->gpa &= ~((1 << 20) - 1);
> 
> And here again (oof)

Yep, I'll fix all these and the align() comment in v2.

* Re: [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups
  2021-02-10 23:06 [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Sean Christopherson
                   ` (14 preceding siblings ...)
  2021-02-10 23:06 ` [PATCH 15/15] KVM: selftests: Get rid of gorilla math in memslots modification test Sean Christopherson
@ 2021-02-11 11:58 ` Andrew Jones
  15 siblings, 0 replies; 35+ messages in thread
From: Andrew Jones @ 2021-02-11 11:58 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, linux-kernel, Ben Gardon, Yanan Wang,
	Peter Xu, Aaron Lewis

On Wed, Feb 10, 2021 at 03:06:10PM -0800, Sean Christopherson wrote:
> Fix hugepage bugs in the KVM selftests that specifically affect dirty
> logging and demand paging tests.  Found while attempting to verify KVM
> changes/fixes related to hugepages and dirty logging (patches incoming in
> a separate series).
> 
> Clean up the perf_test_args util on top of the hugepage fixes to clarify
> what "page size" means, and to improve confidence in the code doing what
> it thinks it's doing.  In a few cases, users of perf_test_args were
> duplicating (approximating?) calculations made by perf_test_args, and it
> wasn't obvious that both pieces of code were guaranteed to end up with the
> same result.
> 
> Sean Christopherson (15):
>   KVM: selftests: Explicitly state indicies for vm_guest_mode_params
>     array
>   KVM: selftests: Expose align() helpers to tests
>   KVM: selftests: Align HVA for HugeTLB-backed memslots
>   KVM: selftests: Force stronger HVA alignment (1gb) for hugepages
>   KVM: selftests: Require GPA to be aligned when backed by hugepages
>   KVM: selftests: Use shorthand local var to access struct
>     perf_tests_args
>   KVM: selftests: Capture per-vCPU GPA in perf_test_vcpu_args
>   KVM: selftests: Use perf util's per-vCPU GPA/pages in demand paging
>     test
>   KVM: selftests: Move per-VM GPA into perf_test_args
>   KVM: selftests: Remove perf_test_args.host_page_size
>   KVM: selftests: Create VM with adjusted number of guest pages for perf
>     tests
>   KVM: selftests: Fill per-vCPU struct during "perf_test" VM creation
>   KVM: selftests: Sync perf_test_args to guest during VM creation
>   KVM: selftests: Track size of per-VM memslot in perf_test_args
>   KVM: selftests: Get rid of gorilla math in memslots modification test
> 
>  .../selftests/kvm/demand_paging_test.c        |  39 ++---
>  .../selftests/kvm/dirty_log_perf_test.c       |  10 +-
>  .../testing/selftests/kvm/include/kvm_util.h  |  28 ++++
>  .../selftests/kvm/include/perf_test_util.h    |  18 +--
>  tools/testing/selftests/kvm/lib/kvm_util.c    |  36 ++---
>  .../selftests/kvm/lib/perf_test_util.c        | 139 ++++++++++--------
>  .../kvm/memslot_modification_stress_test.c    |  16 +-
>  7 files changed, 145 insertions(+), 141 deletions(-)
> 
> -- 
> 2.30.0.478.g8a0d178c01-goog
>

For the series

Reviewed-by: Andrew Jones <drjones@redhat.com>

Thanks,
drew


* Re: [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args
  2021-02-11  1:56     ` Sean Christopherson
@ 2021-02-11 13:12       ` Paolo Bonzini
  2021-02-11 15:57         ` Sean Christopherson
  0 siblings, 1 reply; 35+ messages in thread
From: Paolo Bonzini @ 2021-02-11 13:12 UTC (permalink / raw)
  To: Sean Christopherson, Ben Gardon
  Cc: kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu, Aaron Lewis

On 11/02/21 02:56, Sean Christopherson wrote:
>>> +       pta->gpa = (vm_get_max_gfn(vm) - guest_num_pages) * pta->guest_page_size;
>>> +       pta->gpa &= ~(pta->host_page_size - 1);
>> Also not related to this patch, but another case for align.
>>
>>>          if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
>>>              backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
>>> -               guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
>>> -
>>> +               pta->gpa &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
>> also align
>>
>>>   #ifdef __s390x__
>>>          /* Align to 1M (segment size) */
>>> -       guest_test_phys_mem &= ~((1 << 20) - 1);
>>> +       pta->gpa &= ~((1 << 20) - 1);
>> And here again (oof)
> 
> Yep, I'll fix all these and the align() comment in v2.

This is not exactly align in fact; it is x & ~y rather than (x + y) & 
~y.  Are you going to introduce a round-down macro or is it a bug?  (I 
am lazy...).

Paolo


* Re: [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args
  2021-02-11 13:12       ` Paolo Bonzini
@ 2021-02-11 15:57         ` Sean Christopherson
  2021-02-11 17:33           ` Ben Gardon
  0 siblings, 1 reply; 35+ messages in thread
From: Sean Christopherson @ 2021-02-11 15:57 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Ben Gardon, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu, Aaron Lewis

On Thu, Feb 11, 2021, Paolo Bonzini wrote:
> On 11/02/21 02:56, Sean Christopherson wrote:
> > > > +       pta->gpa = (vm_get_max_gfn(vm) - guest_num_pages) * pta->guest_page_size;
> > > > +       pta->gpa &= ~(pta->host_page_size - 1);
> > > Also not related to this patch, but another case for align.
> > > 
> > > >          if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
> > > >              backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
> > > > -               guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
> > > > -
> > > > +               pta->gpa &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
> > > also align
> > > 
> > > >   #ifdef __s390x__
> > > >          /* Align to 1M (segment size) */
> > > > -       guest_test_phys_mem &= ~((1 << 20) - 1);
> > > > +       pta->gpa &= ~((1 << 20) - 1);
> > > And here again (oof)
> > 
> > Yep, I'll fix all these and the align() comment in v2.
> 
> This is not exactly align in fact; it is x & ~y rather than (x + y) & ~y.
> Are you going to introduce a round-down macro or is it a bug?  (I am
> lazy...).

Good question.  I, too, was lazy.  I didn't look at the guts of align() when I
moved it, and I didn't look closely at Ben's suggestion.  I'll take a closer
look today and make sure everything is doing what it's supposed to do.

* Re: [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args
  2021-02-11 15:57         ` Sean Christopherson
@ 2021-02-11 17:33           ` Ben Gardon
  0 siblings, 0 replies; 35+ messages in thread
From: Ben Gardon @ 2021-02-11 17:33 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Paolo Bonzini, kvm, LKML, Yanan Wang, Andrew Jones, Peter Xu,
	Aaron Lewis

On Thu, Feb 11, 2021 at 7:58 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Feb 11, 2021, Paolo Bonzini wrote:
> > On 11/02/21 02:56, Sean Christopherson wrote:
> > > > > +       pta->gpa = (vm_get_max_gfn(vm) - guest_num_pages) * pta->guest_page_size;
> > > > > +       pta->gpa &= ~(pta->host_page_size - 1);
> > > > Also not related to this patch, but another case for align.
> > > >
> > > > >          if (backing_src == VM_MEM_SRC_ANONYMOUS_THP ||
> > > > >              backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB)
> > > > > -               guest_test_phys_mem &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
> > > > > -
> > > > > +               pta->gpa &= ~(KVM_UTIL_HUGEPAGE_ALIGNMENT - 1);
> > > > also align
> > > >
> > > > >   #ifdef __s390x__
> > > > >          /* Align to 1M (segment size) */
> > > > > -       guest_test_phys_mem &= ~((1 << 20) - 1);
> > > > > +       pta->gpa &= ~((1 << 20) - 1);
> > > > And here again (oof)
> > >
> > > Yep, I'll fix all these and the align() comment in v2.
> >
> > This is not exactly align in fact; it is x & ~y rather than (x + y) & ~y.
> > Are you going to introduce a round-down macro or is it a bug?  (I am
> > lazy...).
>
> Good question.  I, too, was lazy.  I didn't look at the guts of align() when I
> moved it, and I didn't look closely at Ben's suggestion.  I'll take a closer
> look today and make sure everything is doing what it's supposed to do.

Ooh, great point Paolo, that helper is indeed rounding up. My comment
in patch #2 was totally wrong. I forgot anyone would ever want to
round up. :/
My misunderstanding and the above use cases are probably good evidence
that it would be helpful to have both align_up and align_down helpers.
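
A minimal sketch of what such helpers could look like (the names align_up()/align_down() are hypothetical until v2 lands; this is only an illustration of the two rounding directions, not the actual selftests code):

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to the next multiple of 'size', which must be a power of 2. */
static inline uint64_t align_up(uint64_t x, uint64_t size)
{
	uint64_t mask = size - 1;

	assert(size && !(size & mask));
	return (x + mask) & ~mask;
}

/* Round x down, i.e. the bare "x & ~y" pattern quoted above. */
static inline uint64_t align_down(uint64_t x, uint64_t size)
{
	uint64_t mask = size - 1;

	assert(size && !(size & mask));
	return x & ~mask;
}
```

With both helpers, e.g. pta->gpa = align_down(pta->gpa, getpagesize()) would replace the open-coded masks and make the intended direction explicit at each call site.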

* Re: [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots
  2021-02-10 23:06 ` [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots Sean Christopherson
  2021-02-11  0:52   ` Ben Gardon
@ 2021-02-25  7:40   ` wangyanan (Y)
  2021-03-13  0:17     ` Sean Christopherson
  1 sibling, 1 reply; 35+ messages in thread
From: wangyanan (Y) @ 2021-02-25  7:40 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini
  Cc: kvm, linux-kernel, Ben Gardon, Andrew Jones, Peter Xu, Aaron Lewis

Hi Sean,

On 2021/2/11 7:06, Sean Christopherson wrote:
> Align the HVA for HugeTLB memslots, not just THP memslots.  Add an
> assert so any future backing types are forced to assess whether or not
> they need to be aligned.
>
> Cc: Ben Gardon <bgardon@google.com>
> Cc: Yanan Wang <wangyanan55@huawei.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 584167c6dbc7..deaeb47b5a6d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -731,8 +731,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>   	alignment = 1;
>   #endif
>   
> -	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> +	if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
> +	    src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
Sorry for the late reply, I just returned from vacation.
I am not sure HVA alignment is really necessary here for hugetlb pages.
Unlike hugetlb pages, THP pages are dynamically allocated by the later
madvise(), so the HVA returned from mmap() is host page size aligned but
not THP page size aligned, and we indeed have to perform alignment.
Hugetlb pages, however, are pre-allocated on the system, and the test
results below indicate that, with the MAP_HUGETLB flag, the HVA returned
from mmap() is already aligned to the corresponding hugetlb page size.
So maybe the HVA of each hugetlb page is aligned during allocation, or
in mmap() itself?  If so, I think we had better not repeat the alignment
here, because the subsequent *region->mmap_size += alignment* will map
one extra hugetlb page that will never be used.

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_1gb
some outputs:
Host  virtual  test memory offset: 0xffff40000000
Host  virtual  test memory offset: 0xffff00000000
Host  virtual  test memory offset: 0x400000000000

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_2mb
some outputs:
Host  virtual  test memory offset: 0xffff48000000
Host  virtual  test memory offset: 0xffff65400000
Host  virtual  test memory offset: 0xffff6ba00000

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_32mb
some outputs:
Host  virtual  test memory offset: 0xffff70000000
Host  virtual  test memory offset: 0xffff4c000000
Host  virtual  test memory offset: 0xffff72000000

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_64kb
some outputs:
Host  virtual  test memory offset: 0xffff58230000
Host  virtual  test memory offset: 0xffff6ef00000
Host  virtual  test memory offset: 0xffff7c150000

Thanks,
Yanan
>   		alignment = max(huge_page_size, alignment);
> +	else
> +		ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
>   
>   	/* Add enough memory to align up if necessary */
>   	if (alignment > 1)

* Re: [PATCH 04/15] KVM: selftests: Force stronger HVA alignment (1gb) for hugepages
  2021-02-10 23:06 ` [PATCH 04/15] KVM: selftests: Force stronger HVA alignment (1gb) for hugepages Sean Christopherson
@ 2021-02-25  7:57   ` wangyanan (Y)
  2021-03-13  0:26     ` Sean Christopherson
  0 siblings, 1 reply; 35+ messages in thread
From: wangyanan (Y) @ 2021-02-25  7:57 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, Ben Gardon, Andrew Jones, Peter Xu,
	Aaron Lewis, Paolo Bonzini


On 2021/2/11 7:06, Sean Christopherson wrote:
> Align the HVA for hugepage memslots to 1gb, as opposed to incorrectly
> assuming all architectures' hugepages are 512*page_size.
>
> For x86, multiplying by 512 is correct, but only for 2mb pages, e.g.
> systems that support 1gb pages will never be able to use them for mapping
> guest memory, and thus those flows will not be exercised.
>
> For arm64, powerpc, and s390 (and mips?), hardcoding the multiplier to
> 512 is either flat out wrong, or at best correct only in certain
> configurations.
>
> Hardcoding the _alignment_ to 1gb is a compromise between correctness and
> simplicity.  Due to the myriad flavors of hugepages across architectures,
> attempting to enumerate the exact hugepage size is difficult, and likely
> requires probing the kernel.
>
> But, there is no need for precision since a stronger alignment will not
> prevent creating a smaller hugepage.  For all but the most extreme cases,
> e.g. arm64's 16gb contiguous PMDs, aligning to 1gb is sufficient to allow
> KVM to back the guest with hugepages.
I have implemented a helper, get_backing_src_pagesz(), to get the
granularity of the different backing src types (anonymous/thp/hugetlb)
in a way that works across architectures.
See:
https://lore.kernel.org/lkml/20210225055940.18748-6-wangyanan55@huawei.com/
If it looks fine to you, maybe we can use the accurate page sizes for
GPA/HVA alignment :).

Thanks,

Yanan
> Add the new alignment in kvm_util.h so that it can be used by callers of
> vm_userspace_mem_region_add(), e.g. to also ensure GPAs are aligned.
>
> Cc: Ben Gardon <bgardon@google.com>
> Cc: Yanan Wang <wangyanan55@huawei.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   tools/testing/selftests/kvm/include/kvm_util.h | 13 +++++++++++++
>   tools/testing/selftests/kvm/lib/kvm_util.c     |  4 +---
>   2 files changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 4b5d2362a68a..a7dbdf46aa51 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -68,6 +68,19 @@ enum vm_guest_mode {
>   #define MIN_PAGE_SIZE		(1U << MIN_PAGE_SHIFT)
>   #define PTES_PER_MIN_PAGE	ptes_per_page(MIN_PAGE_SIZE)
>   
> +/*
> + * KVM_UTIL_HUGEPAGE_ALIGNMENT is selftest's required alignment for both host
> + * and guest addresses when backing guest memory with hugepages.  This is not
> + * the exact size of hugepages, rather it's a size that should allow backing
> + * the guest with hugepages on all architectures.  Precisely tracking the exact
> + * sizes across all architectures is more pain than gain, e.g. x86 supports 2mb
> + * and 1gb hugepages, arm64 supports 2mb and 1gb hugepages when using 4kb pages
> + * and 512mb hugepages when using 64kb pages (ignoring contiguous TLB entries),
> + * powerpc radix supports 1gb hugepages when using 64kb pages, s390 supports 1mb
> + * hugepages, and so on and so forth.
> + */
> +#define KVM_UTIL_HUGEPAGE_ALIGNMENT	(1ULL << 30)
> +
>   #define vm_guest_mode_string(m) vm_guest_mode_string[m]
>   extern const char * const vm_guest_mode_string[];
>   
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index deaeb47b5a6d..2e497fbab6ae 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -18,7 +18,6 @@
>   #include <unistd.h>
>   #include <linux/kernel.h>
>   
> -#define KVM_UTIL_PGS_PER_HUGEPG 512
>   #define KVM_UTIL_MIN_PFN	2
>   
>   /*
> @@ -670,7 +669,6 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>   {
>   	int ret;
>   	struct userspace_mem_region *region;
> -	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
>   	size_t alignment;
>   
>   	TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages,
> @@ -733,7 +731,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>   
>   	if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
>   	    src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
> -		alignment = max(huge_page_size, alignment);
> +		alignment = max((size_t)KVM_UTIL_HUGEPAGE_ALIGNMENT, alignment);
>   	else
>   		ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
>   

* Re: [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots
  2021-02-25  7:40   ` wangyanan (Y)
@ 2021-03-13  0:17     ` Sean Christopherson
  0 siblings, 0 replies; 35+ messages in thread
From: Sean Christopherson @ 2021-03-13  0:17 UTC (permalink / raw)
  To: wangyanan (Y)
  Cc: Paolo Bonzini, kvm, linux-kernel, Ben Gardon, Andrew Jones,
	Peter Xu, Aaron Lewis

On Thu, Feb 25, 2021, wangyanan (Y) wrote:
> Hi Sean,
> 
> On 2021/2/11 7:06, Sean Christopherson wrote:
> > Align the HVA for HugeTLB memslots, not just THP memslots.  Add an
> > assert so any future backing types are forced to assess whether or not
> > they need to be aligned.
> > 
> > Cc: Ben Gardon <bgardon@google.com>
> > Cc: Yanan Wang <wangyanan55@huawei.com>
> > Cc: Andrew Jones <drjones@redhat.com>
> > Cc: Peter Xu <peterx@redhat.com>
> > Cc: Aaron Lewis <aaronlewis@google.com>
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> >   tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++++-
> >   1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > index 584167c6dbc7..deaeb47b5a6d 100644
> > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > @@ -731,8 +731,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
> >   	alignment = 1;
> >   #endif
> > -	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> > +	if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
> > +	    src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
> Sorry for the late reply, I just returned from vacation.

At least you had an excuse :-)

> I am not sure HVA alignment is really necessary here for hugetlb pages.
> Different from hugetlb pages, the THP pages are dynamically allocated by
> later madvise(), so the value of HVA returned from mmap() is host page size
> aligned but not THP page size aligned, so we indeed have to perform
> alignment.  But hugetlb pages are pre-allocated on systems. The following
> test results also indicate that, with MAP_HUGETLB flag, the HVA returned from
> mmap() is already aligned to the corresponding hugetlb page size. So maybe
> HVAs of each hugetlb pages are aligned during allocation of them or in mmap()?

Yep, I verified the generic and arch version of hugetlb_get_unmapped_area() that
KVM supports all align the address.  PARISC64 is the only one that looks suspect,
but it doesn't support KVM.

> If so, I think we better not do this again here, because the later
> *region->mmap_size += alignment* will cause one more hugetlb page mapped but
> will not be used.

Agreed.  I think I'll still add the assert, along with a comment calling out
that HugetlB automatically handles alignment.

Nice catch, thanks!

* Re: [PATCH 04/15] KVM: selftests: Force stronger HVA alignment (1gb) for hugepages
  2021-02-25  7:57   ` wangyanan (Y)
@ 2021-03-13  0:26     ` Sean Christopherson
  0 siblings, 0 replies; 35+ messages in thread
From: Sean Christopherson @ 2021-03-13  0:26 UTC (permalink / raw)
  To: wangyanan (Y)
  Cc: kvm, linux-kernel, Ben Gardon, Andrew Jones, Peter Xu,
	Aaron Lewis, Paolo Bonzini

On Thu, Feb 25, 2021, wangyanan (Y) wrote:
> 
> On 2021/2/11 7:06, Sean Christopherson wrote:
> > Align the HVA for hugepage memslots to 1gb, as opposed to incorrectly
> > assuming all architectures' hugepages are 512*page_size.
> > 
> > For x86, multiplying by 512 is correct, but only for 2mb pages, e.g.
> > systems that support 1gb pages will never be able to use them for mapping
> > guest memory, and thus those flows will not be exercised.
> > 
> > For arm64, powerpc, and s390 (and mips?), hardcoding the multiplier to
> > 512 is either flat out wrong, or at best correct only in certain
> > configurations.
> > 
> > Hardcoding the _alignment_ to 1gb is a compromise between correctness and
> > simplicity.  Due to the myriad flavors of hugepages across architectures,
> > attempting to enumerate the exact hugepage size is difficult, and likely
> > requires probing the kernel.
> > 
> > But, there is no need for precision since a stronger alignment will not
> > prevent creating a smaller hugepage.  For all but the most extreme cases,
> > e.g. arm64's 16gb contiguous PMDs, aligning to 1gb is sufficient to allow
> > KVM to back the guest with hugepages.
> I have implemented a helper, get_backing_src_pagesz(), to get the
> granularity of different backing src types (anonymous/thp/hugetlb), which is
> suitable for different architectures.
> See:
> https://lore.kernel.org/lkml/20210225055940.18748-6-wangyanan55@huawei.com/
> If it looks fine to you, maybe we can use the accurate page sizes for
> GPA/HVA alignment :).

Works for me.  I'll probably just wait until your series is queued to send v2.

Thanks again!


end of thread, other threads:[~2021-03-13  0:30 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-10 23:06 [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Sean Christopherson
2021-02-10 23:06 ` [PATCH 01/15] KVM: selftests: Explicitly state indicies for vm_guest_mode_params array Sean Christopherson
2021-02-11  0:50   ` Ben Gardon
2021-02-10 23:06 ` [PATCH 02/15] KVM: selftests: Expose align() helpers to tests Sean Christopherson
2021-02-11  0:49   ` Ben Gardon
2021-02-10 23:06 ` [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots Sean Christopherson
2021-02-11  0:52   ` Ben Gardon
2021-02-25  7:40   ` wangyanan (Y)
2021-03-13  0:17     ` Sean Christopherson
2021-02-10 23:06 ` [PATCH 04/15] KVM: selftests: Force stronger HVA alignment (1gb) for hugepages Sean Christopherson
2021-02-25  7:57   ` wangyanan (Y)
2021-03-13  0:26     ` Sean Christopherson
2021-02-10 23:06 ` [PATCH 05/15] KVM: selftests: Require GPA to be aligned when backed by hugepages Sean Christopherson
2021-02-11  1:01   ` Ben Gardon
2021-02-10 23:06 ` [PATCH 06/15] KVM: selftests: Use shorthand local var to access struct perf_tests_args Sean Christopherson
2021-02-11  1:09   ` Ben Gardon
2021-02-10 23:06 ` [PATCH 07/15] KVM: selftests: Capture per-vCPU GPA in perf_test_vcpu_args Sean Christopherson
2021-02-11  1:24   ` Ben Gardon
2021-02-10 23:06 ` [PATCH 08/15] KVM: selftests: Use perf util's per-vCPU GPA/pages in demand paging test Sean Christopherson
2021-02-11  1:23   ` Ben Gardon
2021-02-10 23:06 ` [PATCH 09/15] KVM: selftests: Move per-VM GPA into perf_test_args Sean Christopherson
2021-02-11  1:22   ` Ben Gardon
2021-02-11  1:56     ` Sean Christopherson
2021-02-11 13:12       ` Paolo Bonzini
2021-02-11 15:57         ` Sean Christopherson
2021-02-11 17:33           ` Ben Gardon
2021-02-10 23:06 ` [PATCH 10/15] KVM: selftests: Remove perf_test_args.host_page_size Sean Christopherson
2021-02-11  1:26   ` Ben Gardon
2021-02-10 23:06 ` [PATCH 11/15] KVM: selftests: Create VM with adjusted number of guest pages for perf tests Sean Christopherson
2021-02-11  1:32   ` Ben Gardon
2021-02-10 23:06 ` [PATCH 12/15] KVM: selftests: Fill per-vCPU struct during "perf_test" VM creation Sean Christopherson
2021-02-10 23:06 ` [PATCH 13/15] KVM: selftests: Sync perf_test_args to guest during " Sean Christopherson
2021-02-10 23:06 ` [PATCH 14/15] KVM: selftests: Track size of per-VM memslot in perf_test_args Sean Christopherson
2021-02-10 23:06 ` [PATCH 15/15] KVM: selftests: Get rid of gorilla math in memslots modification test Sean Christopherson
2021-02-11 11:58 ` [PATCH 00/15] VM: selftests: Hugepage fixes and cleanups Andrew Jones
