* [PATCH v3 0/2] selftests: KVM: verify page stats in kvm x86/mmu
From: Mingwei Zhang @ 2021-08-30  4:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, David Matlack,
	Jing Zhang, Peter Xu, Ben Gardon, Mingwei Zhang

This patch set leverages the existing dirty logging performance selftest to
verify that the page stats in the KVM x86 MMU work correctly, especially in
the presence of dirty logging.

v2 -> v3:
 - fix the build error. [mizhang]

v1 -> v2:
 - split the page alignment fix into a different commit [bgardon]
 - update the TEST_ASSERT conditions [bgardon]

Mingwei Zhang (2):
  selftests: KVM: align guest physical memory base address to 1GB
  selftests: KVM: use dirty logging to check if page stats work
    correctly

 .../selftests/kvm/dirty_log_perf_test.c       | 44 +++++++++++++++++++
 .../testing/selftests/kvm/include/test_util.h |  1 +
 .../selftests/kvm/include/x86_64/processor.h  |  7 +++
 .../selftests/kvm/lib/perf_test_util.c        |  8 ++--
 tools/testing/selftests/kvm/lib/test_util.c   | 29 ++++++++++++
 5 files changed, 85 insertions(+), 4 deletions(-)

--
2.33.0.259.gc128427fd7-goog



* [PATCH v3 1/2] selftests: KVM: align guest physical memory base address to 1GB
From: Mingwei Zhang @ 2021-08-30  4:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, David Matlack,
	Jing Zhang, Peter Xu, Ben Gardon, Mingwei Zhang

The existing selftest library function always allocates a GPA range that
aligns to the end of the GPA address space, i.e., the allocated GPA range is
guaranteed to end at the last available GPA. As a result, selftest programs
cannot control the alignment of the base GPA. Depending on the size of the
allocation, the base GPA may be aligned only on a 4K boundary.

The alignment of the base GPA sometimes creates problems for the dirty
logging selftest, where a 2MB-aligned or 1GB-aligned base GPA is needed to
create NPT/EPT mappings for hugepages.

So fix this issue and ensure all GPA allocations start on a 1GB boundary
on all architectures.

Cc: Sean Christopherson <seanjc@google.com>
Cc: David Matlack <dmatlack@google.com>
Cc: Jing Zhang <jingzhangos@google.com>
Cc: Peter Xu <peterx@redhat.com>

Suggested-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
---
 tools/testing/selftests/kvm/lib/perf_test_util.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 0ef80dbdc116..96c30b8d6593 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -93,10 +93,10 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
 	guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
 			      perf_test_args.guest_page_size;
 	guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
-#ifdef __s390x__
-	/* Align to 1M (segment size) */
-	guest_test_phys_mem &= ~((1 << 20) - 1);
-#endif
+
+	/* Align to 1G for all architectures */
+	guest_test_phys_mem &= ~((1 << 30) - 1);
+
 	pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
 
 	/* Add extra memory slots for testing */
-- 
2.33.0.259.gc128427fd7-goog



* [PATCH v3 2/2] selftests: KVM: use dirty logging to check if page stats work correctly
From: Mingwei Zhang @ 2021-08-30  4:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kvm, linux-kernel, Sean Christopherson, David Matlack,
	Jing Zhang, Peter Xu, Ben Gardon, Mingwei Zhang

When dirty logging is enabled, KVM splits hugepage mappings in the NPT/EPT
into 4K pages after the guest writes to them. This property can be used to
check that the page stats metrics work properly in the KVM x86 MMU. The
same logic can also be used the other way around: page stats can verify
that dirty logging really splits all huge pages after the guest writes to
all of its memory.

So add page stats checks to the dirty logging performance selftest. In
particular, add checks in three locations:
 - just after the VM is created;
 - after populating memory into the VM, without dirty logging enabled;
 - after the guest writes to all memory again, with dirty logging enabled.

Tested using commands:
 - ./dirty_log_perf_test -s anonymous_hugetlb_1gb
 - ./dirty_log_perf_test -s anonymous_hugetlb_2mb
 - ./dirty_log_perf_test -s anonymous_thp

Cc: Sean Christopherson <seanjc@google.com>
Cc: David Matlack <dmatlack@google.com>
Cc: Jing Zhang <jingzhangos@google.com>
Cc: Peter Xu <peterx@redhat.com>

Suggested-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
---
 .../selftests/kvm/dirty_log_perf_test.c       | 44 +++++++++++++++++++
 .../testing/selftests/kvm/include/test_util.h |  1 +
 .../selftests/kvm/include/x86_64/processor.h  |  7 +++
 tools/testing/selftests/kvm/lib/test_util.c   | 29 ++++++++++++
 4 files changed, 81 insertions(+)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 3c30d0045d8d..bc598e07b295 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -19,6 +19,10 @@
 #include "perf_test_util.h"
 #include "guest_modes.h"
 
+#ifdef __x86_64__
+#include "processor.h"
+#endif
+
 /* How many host loops to run by default (one KVM_GET_DIRTY_LOG for each loop)*/
 #define TEST_HOST_LOOP_N		2UL
 
@@ -166,6 +170,18 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
 				 p->slots, p->backing_src);
 
+#ifdef __x86_64__
+	/*
+	 * No vCPUs have been started yet, so KVM should not have created any
+	 * mapping at this moment.
+	 */
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_4K) == 0,
+		    "4K page is non zero");
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) == 0,
+		    "2M page is non zero");
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_1G) == 0,
+		    "1G page is non zero");
+#endif
 	perf_test_args.wr_fract = p->wr_fract;
 
 	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm_get_page_shift(vm);
@@ -211,6 +227,22 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("Populate memory time: %ld.%.9lds\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);
 
+#ifdef __x86_64__
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_4K) != 0,
+		    "4K page is zero");
 +	/* Ensure the THP page stat is non-zero to minimize flakiness. */
+	if (p->backing_src == VM_MEM_SRC_ANONYMOUS_THP)
+		TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) > 0,
+			"2M page number is zero");
+	else if (p->backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB_2MB)
+		TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) ==
+			(guest_percpu_mem_size * nr_vcpus) >> X86_PAGE_2M_SHIFT,
+			"2M page number does not match");
+	else if (p->backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB_1GB)
+		TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_1G) ==
+			(guest_percpu_mem_size * nr_vcpus) >> X86_PAGE_1G_SHIFT,
+			"1G page number does not match");
+#endif
 	/* Enable dirty logging */
 	clock_gettime(CLOCK_MONOTONIC, &start);
 	enable_dirty_logging(vm, p->slots);
@@ -256,6 +288,18 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 				iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
 		}
 	}
+#ifdef __x86_64__
+	/*
 +	 * When vCPUs write to all memory again with dirty logging enabled, we
+	 * should see only 4K page mappings exist in KVM mmu.
+	 */
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_4K) != 0,
+		    "4K page is zero after dirtying memory");
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) == 0,
+		    "2M page is non-zero after dirtying memory");
+	TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_1G) == 0,
 +		    "1G page is non-zero after dirtying memory");
+#endif
 
 	/* Disable dirty logging */
 	clock_gettime(CLOCK_MONOTONIC, &start);
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index d79be15dd3d2..dca5fcf7aa87 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -102,6 +102,7 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i);
 size_t get_backing_src_pagesz(uint32_t i);
 void backing_src_help(void);
 enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
+size_t get_page_stats(uint32_t page_level);
 
 /*
  * Whether or not the given source type is shared memory (as opposed to
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 242ae8e09a65..9749319821a3 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -39,6 +39,13 @@
 #define X86_CR4_SMAP		(1ul << 21)
 #define X86_CR4_PKE		(1ul << 22)
 
+#define X86_PAGE_4K_SHIFT	12
+#define X86_PAGE_4K		(1ul << X86_PAGE_4K_SHIFT)
+#define X86_PAGE_2M_SHIFT	21
+#define X86_PAGE_2M		(1ul << X86_PAGE_2M_SHIFT)
+#define X86_PAGE_1G_SHIFT	30
+#define X86_PAGE_1G		(1ul << X86_PAGE_1G_SHIFT)
+
 /* CPUID.1.ECX */
 #define CPUID_VMX		(1ul << 5)
 #define CPUID_SMX		(1ul << 6)
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index af1031fed97f..07eb6b5c125e 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -15,6 +15,13 @@
 #include "linux/kernel.h"
 
 #include "test_util.h"
+#include "processor.h"
+
+static const char * const pagestat_filepaths[] = {
+	"/sys/kernel/debug/kvm/pages_4k",
+	"/sys/kernel/debug/kvm/pages_2m",
+	"/sys/kernel/debug/kvm/pages_1g",
+};
 
 /*
  * Parses "[0-9]+[kmgt]?".
@@ -141,6 +148,28 @@ size_t get_trans_hugepagesz(void)
 	return size;
 }
 
+#ifdef __x86_64__
+size_t get_stats_from_file(const char *path)
+{
+	size_t value;
+	FILE *f;
+
+	f = fopen(path, "r");
+	TEST_ASSERT(f != NULL, "Error in opening file: %s\n", path);
+
+	fscanf(f, "%ld", &value);
+	fclose(f);
+
+	return value;
+}
+
+size_t get_page_stats(uint32_t page_level)
+{
+	TEST_ASSERT(page_level <= X86_PAGE_SIZE_1G, "page type error.");
+	return get_stats_from_file(pagestat_filepaths[page_level]);
+}
+#endif
+
 size_t get_def_hugetlb_pagesz(void)
 {
 	char buf[64];
-- 
2.33.0.259.gc128427fd7-goog



* Re: [PATCH v3 1/2] selftests: KVM: align guest physical memory base address to 1GB
From: Ben Gardon @ 2021-08-30 20:58 UTC (permalink / raw)
  To: Mingwei Zhang
  Cc: Paolo Bonzini, kvm, LKML, Sean Christopherson, David Matlack,
	Jing Zhang, Peter Xu

On Sun, Aug 29, 2021 at 9:44 PM Mingwei Zhang <mizhang@google.com> wrote:
>
> The existing selftest library function always allocates a GPA range that
> aligns to the end of the GPA address space, i.e., the allocated GPA range is
> guaranteed to end at the last available GPA. As a result, selftest programs
> cannot control the alignment of the base GPA. Depending on the size of the
> allocation, the base GPA may be aligned only on a 4K boundary.
>
> The alignment of the base GPA sometimes creates problems for the dirty
> logging selftest, where a 2MB-aligned or 1GB-aligned base GPA is needed to
> create NPT/EPT mappings for hugepages.
>
> So fix this issue and ensure all GPA allocations start on a 1GB boundary
> on all architectures.
>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: David Matlack <dmatlack@google.com>
> Cc: Jing Zhang <jingzhangos@google.com>
> Cc: Peter Xu <peterx@redhat.com>
>
> Suggested-by: Ben Gardon <bgardon@google.com>
> Signed-off-by: Mingwei Zhang <mizhang@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  tools/testing/selftests/kvm/lib/perf_test_util.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 0ef80dbdc116..96c30b8d6593 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -93,10 +93,10 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
>         guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) *
>                               perf_test_args.guest_page_size;
>         guest_test_phys_mem &= ~(perf_test_args.host_page_size - 1);
> -#ifdef __s390x__
> -       /* Align to 1M (segment size) */
> -       guest_test_phys_mem &= ~((1 << 20) - 1);
> -#endif
> +
> +       /* Align to 1G for all architectures */
> +       guest_test_phys_mem &= ~((1 << 30) - 1);
> +
>         pr_info("guest physical test memory offset: 0x%lx\n", guest_test_phys_mem);
>
>         /* Add extra memory slots for testing */
> --
> 2.33.0.259.gc128427fd7-goog
>


* Re: [PATCH v3 2/2] selftests: KVM: use dirty logging to check if page stats work correctly
From: Ben Gardon @ 2021-08-30 21:10 UTC (permalink / raw)
  To: Mingwei Zhang
  Cc: Paolo Bonzini, kvm, LKML, Sean Christopherson, David Matlack,
	Jing Zhang, Peter Xu

On Sun, Aug 29, 2021 at 9:44 PM Mingwei Zhang <mizhang@google.com> wrote:
>
> When dirty logging is enabled, KVM splits hugepage mappings in the NPT/EPT
> into 4K pages after the guest writes to them. This property can be used to
> check that the page stats metrics work properly in the KVM x86 MMU. The
> same logic can also be used the other way around: page stats can verify
> that dirty logging really splits all huge pages after the guest writes to
> all of its memory.
>
> So add page stats checks to the dirty logging performance selftest. In
> particular, add checks in three locations:
>  - just after the VM is created;
>  - after populating memory into the VM, without dirty logging enabled;
>  - after the guest writes to all memory again, with dirty logging enabled.
>
> Tested using commands:
>  - ./dirty_log_perf_test -s anonymous_hugetlb_1gb
>  - ./dirty_log_perf_test -s anonymous_hugetlb_2mb
>  - ./dirty_log_perf_test -s anonymous_thp
>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: David Matlack <dmatlack@google.com>
> Cc: Jing Zhang <jingzhangos@google.com>
> Cc: Peter Xu <peterx@redhat.com>
>
> Suggested-by: Ben Gardon <bgardon@google.com>
> Signed-off-by: Mingwei Zhang <mizhang@google.com>
> ---
>  .../selftests/kvm/dirty_log_perf_test.c       | 44 +++++++++++++++++++
>  .../testing/selftests/kvm/include/test_util.h |  1 +
>  .../selftests/kvm/include/x86_64/processor.h  |  7 +++
>  tools/testing/selftests/kvm/lib/test_util.c   | 29 ++++++++++++
>  4 files changed, 81 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> index 3c30d0045d8d..bc598e07b295 100644
> --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> @@ -19,6 +19,10 @@
>  #include "perf_test_util.h"
>  #include "guest_modes.h"
>
> +#ifdef __x86_64__
> +#include "processor.h"
> +#endif
> +

I know this is only needed for x86_64, but I don't think it needs to be
behind an #ifdef for everything to work.

>  /* How many host loops to run by default (one KVM_GET_DIRTY_LOG for each loop)*/
>  #define TEST_HOST_LOOP_N               2UL
>
> @@ -166,6 +170,18 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>         vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
>                                  p->slots, p->backing_src);
>
> +#ifdef __x86_64__
> +       /*
> +        * No vCPUs have been started yet, so KVM should not have created any
> +        * mapping at this moment.
> +        */
> +       TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_4K) == 0,
> +                   "4K page is non zero");
> +       TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) == 0,
> +                   "2M page is non zero");
> +       TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_1G) == 0,
> +                   "1G page is non zero");
> +#endif
>         perf_test_args.wr_fract = p->wr_fract;
>
>         guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm_get_page_shift(vm);
> @@ -211,6 +227,22 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>         pr_info("Populate memory time: %ld.%.9lds\n",
>                 ts_diff.tv_sec, ts_diff.tv_nsec);
>
> +#ifdef __x86_64__
> +       TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_4K) != 0,
> +                   "4K page is zero");
> +       /* Ensure the THP page stat is non-zero to minimize flakiness. */
> +       if (p->backing_src == VM_MEM_SRC_ANONYMOUS_THP)
> +               TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) > 0,
> +                       "2M page number is zero");

Nice. Thanks for handling this case. It's very frustrating when THP
makes test assertions flaky.

> +       else if (p->backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB_2MB)
> +               TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) ==
> +                       (guest_percpu_mem_size * nr_vcpus) >> X86_PAGE_2M_SHIFT,
> +                       "2M page number does not match");
> +       else if (p->backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB_1GB)
> +               TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_1G) ==
> +                       (guest_percpu_mem_size * nr_vcpus) >> X86_PAGE_1G_SHIFT,
> +                       "1G page number does not match");
> +#endif
>         /* Enable dirty logging */
>         clock_gettime(CLOCK_MONOTONIC, &start);
>         enable_dirty_logging(vm, p->slots);
> @@ -256,6 +288,18 @@ static void run_test(enum vm_guest_mode mode, void *arg)
>                                 iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
>                 }
>         }
> +#ifdef __x86_64__
> +       /*
> +        * When vCPUs write to all memory again with dirty logging enabled, we
> +        * should see only 4K page mappings exist in KVM mmu.
> +        */
> +       TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_4K) != 0,
> +                   "4K page is zero after dirtying memory");
> +       TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_2M) == 0,
> +                   "2M page is non-zero after dirtying memory");
> +       TEST_ASSERT(get_page_stats(X86_PAGE_SIZE_1G) == 0,
> +                   "1G page is non-zero after dirtying memory");
> +#endif
>
>         /* Disable dirty logging */
>         clock_gettime(CLOCK_MONOTONIC, &start);
> diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
> index d79be15dd3d2..dca5fcf7aa87 100644
> --- a/tools/testing/selftests/kvm/include/test_util.h
> +++ b/tools/testing/selftests/kvm/include/test_util.h
> @@ -102,6 +102,7 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i);
>  size_t get_backing_src_pagesz(uint32_t i);
>  void backing_src_help(void);
>  enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
> +size_t get_page_stats(uint32_t page_level);
>
>  /*
>   * Whether or not the given source type is shared memory (as opposed to
> diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> index 242ae8e09a65..9749319821a3 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> @@ -39,6 +39,13 @@
>  #define X86_CR4_SMAP           (1ul << 21)
>  #define X86_CR4_PKE            (1ul << 22)
>
> +#define X86_PAGE_4K_SHIFT      12
> +#define X86_PAGE_4K            (1ul << X86_PAGE_4K_SHIFT)
> +#define X86_PAGE_2M_SHIFT      21
> +#define X86_PAGE_2M            (1ul << X86_PAGE_2M_SHIFT)
> +#define X86_PAGE_1G_SHIFT      30
> +#define X86_PAGE_1G            (1ul << X86_PAGE_1G_SHIFT)
> +

It would be a nice follow up to use these to clean up the magic
numbers in __virt_pg_map:

const uint64_t pg_size = 1ull << ((page_size * 9) + 12);

:(

>  /* CPUID.1.ECX */
>  #define CPUID_VMX              (1ul << 5)
>  #define CPUID_SMX              (1ul << 6)
> diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
> index af1031fed97f..07eb6b5c125e 100644
> --- a/tools/testing/selftests/kvm/lib/test_util.c
> +++ b/tools/testing/selftests/kvm/lib/test_util.c
> @@ -15,6 +15,13 @@
>  #include "linux/kernel.h"
>
>  #include "test_util.h"
> +#include "processor.h"
> +
> +static const char * const pagestat_filepaths[] = {
> +       "/sys/kernel/debug/kvm/pages_4k",
> +       "/sys/kernel/debug/kvm/pages_2m",
> +       "/sys/kernel/debug/kvm/pages_1g",
> +};

I think these should only be defined for x86_64 too. Is this the right
file for these definitions or is there an arch specific file they
should go in?

>
>  /*
>   * Parses "[0-9]+[kmgt]?".
> @@ -141,6 +148,28 @@ size_t get_trans_hugepagesz(void)
>         return size;
>  }
>
> +#ifdef __x86_64__
> +size_t get_stats_from_file(const char *path)
> +{
> +       size_t value;
> +       FILE *f;
> +
> +       f = fopen(path, "r");
> +       TEST_ASSERT(f != NULL, "Error in opening file: %s\n", path);
> +
> +       fscanf(f, "%ld", &value);
> +       fclose(f);
> +
> +       return value;
> +}
> +
> +size_t get_page_stats(uint32_t page_level)
> +{
> +       TEST_ASSERT(page_level <= X86_PAGE_SIZE_1G, "page type error.");
> +       return get_stats_from_file(pagestat_filepaths[page_level]);
> +}
> +#endif
> +
>  size_t get_def_hugetlb_pagesz(void)
>  {
>         char buf[64];
> --
> 2.33.0.259.gc128427fd7-goog
>


* Re: [PATCH v3 2/2] selftests: KVM: use dirty logging to check if page stats work correctly
From: Sean Christopherson @ 2021-08-31 16:58 UTC (permalink / raw)
  To: Ben Gardon
  Cc: Mingwei Zhang, Paolo Bonzini, kvm, LKML, David Matlack,
	Jing Zhang, Peter Xu

On Mon, Aug 30, 2021, Ben Gardon wrote:
> On Sun, Aug 29, 2021 at 9:44 PM Mingwei Zhang <mizhang@google.com> wrote:
> > diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
> > index af1031fed97f..07eb6b5c125e 100644
> > --- a/tools/testing/selftests/kvm/lib/test_util.c
> > +++ b/tools/testing/selftests/kvm/lib/test_util.c
> > @@ -15,6 +15,13 @@
> >  #include "linux/kernel.h"
> >
> >  #include "test_util.h"
> > +#include "processor.h"
> > +
> > +static const char * const pagestat_filepaths[] = {
> > +       "/sys/kernel/debug/kvm/pages_4k",
> > +       "/sys/kernel/debug/kvm/pages_2m",
> > +       "/sys/kernel/debug/kvm/pages_1g",
> > +};
> 
> I think these should only be defined for x86_64 too. Is this the right
> file for these definitions or is there an arch specific file they
> should go in?

The stats also need to be pulled from the selftest's VM, not from the overall KVM
stats; otherwise the test will fail if there are any other active VMs on the host,
e.g. I like to run selftests and kvm-unit-tests in parallel.


* Re: [PATCH v3 2/2] selftests: KVM: use dirty logging to check if page stats work correctly
From: Mingwei Zhang @ 2021-09-06 20:05 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Ben Gardon, Paolo Bonzini, kvm, LKML, David Matlack, Jing Zhang,
	Peter Xu

On Tue, Aug 31, 2021 at 9:58 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Aug 30, 2021, Ben Gardon wrote:
> > On Sun, Aug 29, 2021 at 9:44 PM Mingwei Zhang <mizhang@google.com> wrote:
> > > diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
> > > index af1031fed97f..07eb6b5c125e 100644
> > > --- a/tools/testing/selftests/kvm/lib/test_util.c
> > > +++ b/tools/testing/selftests/kvm/lib/test_util.c
> > > @@ -15,6 +15,13 @@
> > >  #include "linux/kernel.h"
> > >
> > >  #include "test_util.h"
> > > +#include "processor.h"
> > > +
> > > +static const char * const pagestat_filepaths[] = {
> > > +       "/sys/kernel/debug/kvm/pages_4k",
> > > +       "/sys/kernel/debug/kvm/pages_2m",
> > > +       "/sys/kernel/debug/kvm/pages_1g",
> > > +};
> >
> > I think these should only be defined for x86_64 too. Is this the right
> > file for these definitions or is there an arch specific file they
> > should go in?
>
> The stats also need to be pulled from the selftest's VM, not from the overall KVM
> stats; otherwise the test will fail if there are any other active VMs on the host,
> e.g. I like to run selftests and kvm-unit-tests in parallel.

That is correct. But since this selftest is not the 'default' selftest
that people normally run, can we make an assumption on running these
tests at this moment? I am planning to submit this test and improve it
in the next series by using Jing's fd based KVM stats interface to
eliminate the assumption of the existence of a single running VM.
Right now, this interface still needs some work, so I am taking a
shortcut that directly uses the whole-system metricfs based interface.

But I can choose to do that and submit the fd-based API together with
this series. What do you suggest?


* Re: [PATCH v3 2/2] selftests: KVM: use dirty logging to check if page stats work correctly
From: Mingwei Zhang @ 2021-09-08 16:07 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Ben Gardon, Paolo Bonzini, kvm, LKML, David Matlack, Jing Zhang,
	Peter Xu

On Mon, Sep 6, 2021 at 1:05 PM Mingwei Zhang <mizhang@google.com> wrote:
>
> On Tue, Aug 31, 2021 at 9:58 AM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Mon, Aug 30, 2021, Ben Gardon wrote:
> > > On Sun, Aug 29, 2021 at 9:44 PM Mingwei Zhang <mizhang@google.com> wrote:
> > > > diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
> > > > index af1031fed97f..07eb6b5c125e 100644
> > > > --- a/tools/testing/selftests/kvm/lib/test_util.c
> > > > +++ b/tools/testing/selftests/kvm/lib/test_util.c
> > > > @@ -15,6 +15,13 @@
> > > >  #include "linux/kernel.h"
> > > >
> > > >  #include "test_util.h"
> > > > +#include "processor.h"
> > > > +
> > > > +static const char * const pagestat_filepaths[] = {
> > > > +       "/sys/kernel/debug/kvm/pages_4k",
> > > > +       "/sys/kernel/debug/kvm/pages_2m",
> > > > +       "/sys/kernel/debug/kvm/pages_1g",
> > > > +};
> > >
> > > I think these should only be defined for x86_64 too. Is this the right
> > > file for these definitions or is there an arch specific file they
> > > should go in?
> >
> > The stats also need to be pulled from the selftest's VM, not from the overall KVM
> > stats; otherwise the test will fail if there are any other active VMs on the host,
> > e.g. I like to run selftests and kvm-unit-tests in parallel.
>
> That is correct. But since this selftest is not the 'default' selftest
> that people normally run, can we make an assumption on running these
> tests at this moment? I am planning to submit this test and improve it
> in the next series by using Jing's fd based KVM stats interface to
> eliminate the assumption of the existence of a single running VM.
> Right now, this interface still needs some work, so I am taking a
> shortcut that directly uses the whole-system metricfs based interface.
>
> But I can choose to do that and submit the fd-based API together with
> this series. What do you suggest?


I will take my point back: some of the TEST_ASSERT checks in this
selftest do assume that there is no other VM running, even in the
'default' case (i.e., running ./dirty_log_perf_test without arguments).
Therefore, this patch would make the test flaky.

I will go back to implement the fd based API and submit the code along
with this selftest.

Thanks.
-Mingwei

