From: Gavin Shan <gshan@redhat.com>
To: kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, maz@kernel.org,
	linux-kernel@vger.kernel.org, zhenyzha@redhat.com,
	shan.gavin@gmail.com, pbonzini@redhat.com,
	maciej.szmigiero@oracle.com, shuah@kernel.org,
	kvmarm@lists.cs.columbia.edu, ajones@ventanamicro.com
Subject: [PATCH 6/6] KVM: selftests: memslot_perf_test: Report optimal memory slots
Date: Fri, 14 Oct 2022 15:19:14 +0800
Message-ID: <20221014071914.227134-7-gshan@redhat.com>
In-Reply-To: <20221014071914.227134-1-gshan@redhat.com>

The memory area in each slot should be aligned to the host page size.
Otherwise, the test fails. For example, the following command fails with
the messages below on a host with 64KB pages and a guest with 4KB pages,
which isn't user friendly. Let's report the optimal number of memory
slots instead of failing the test.

  # ./memslot_perf_test -v -s 1000
  Number of memory slots: 999
  Testing map performance with 1 runs, 5 seconds each
  Adding slots 1..999, each slot with 8 pages + 216 extra pages last
  ==== Test Assertion Failure ====
    lib/kvm_util.c:824: vm_adjust_num_guest_pages(vm->mode, npages) == npages
    pid=19872 tid=19872 errno=0 - Success
       1  0x00000000004065b3: vm_userspace_mem_region_add at kvm_util.c:822
       2  0x0000000000401d6b: prepare_vm at memslot_perf_test.c:273
       3  (inlined by) test_execute at memslot_perf_test.c:756
       4  (inlined by) test_loop at memslot_perf_test.c:994
       5  (inlined by) main at memslot_perf_test.c:1073
       6  0x0000ffff7ebb4383: ?? ??:0
       7  0x00000000004021ff: _start at :?
    Number of guest pages is not compatible with the host. Try npages=16
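
For reference, here is a minimal standalone sketch of the alignment rule the
assertion above trips over, using the sizes from this example run (64KB host
pages, 4KB guest pages). The helper name is made up for illustration and is
not part of the patch:

  /* Illustrative only; not part of the patch. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  static bool slot_fits_host_pages(uint64_t pages, uint32_t guest_page_size,
                                   uint32_t host_page_size)
  {
          /* The slot's byte size must be a whole number of host pages. */
          return !((pages * guest_page_size) % host_page_size);
  }

  int main(void)
  {
          /* 8 x 4KB = 32KB is not a multiple of 64KB, hence the assertion. */
          printf("8 pages:  %d\n", slot_fits_host_pages(8, 4096, 65536));
          /* 16 x 4KB = 64KB is, matching the suggested npages=16. */
          printf("16 pages: %d\n", slot_fits_host_pages(16, 4096, 65536));
          return 0;
  }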

Report the optimal number of memory slots instead of failing the test
when the memory area in each slot isn't aligned to the host page size.
With this applied, the optimal number of memory slots is reported.

  # ./memslot_perf_test -v -s 1000
  Number of memory slots: 999
  Testing map performance with 1 runs, 5 seconds each
  Memslot count too high for this test, decrease the cap (max is 514)
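
For context, a rough standalone illustration (not the patch code itself) of
the walk get_max_slots() performs in the diff below: step the slot count down
until both the per-slot area and the leftover pages are whole host pages. The
numbers are illustrative and the remainder handling is simplified relative to
the patch:

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t max_aligned_slots(uint64_t mempages, uint64_t nslots,
                                    uint32_t guest_page_size,
                                    uint32_t host_page_size)
  {
          uint64_t slots, pages_per_slot, rempages;

          for (slots = nslots; slots > 1; slots--) {
                  pages_per_slot = mempages / slots;
                  if (!pages_per_slot)
                          continue;

                  /* Pages left over after an even split across the slots. */
                  rempages = mempages % slots;
                  if (!((pages_per_slot * guest_page_size) % host_page_size) &&
                      !((rempages * guest_page_size) % host_page_size))
                          return slots + 1;       /* slot 0 is reserved */
          }

          return 0;
  }

  int main(void)
  {
          /* 8192 4KB guest pages, up to 999 slots, on a 64KB-page host. */
          printf("max slots: %llu\n",
                 (unsigned long long)max_aligned_slots(8192, 999, 4096, 65536));
          return 0;
  }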

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 .../testing/selftests/kvm/memslot_perf_test.c | 45 +++++++++++++++++--
 1 file changed, 41 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index e6d34744b45d..bec65803f220 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -230,16 +230,52 @@ static struct vm_data *alloc_vm(void)
 	return data;
 }
 
+static bool check_slot_pages(uint32_t host_page_size, uint32_t guest_page_size,
+			     uint64_t pages_per_slot, uint64_t rempages)
+{
+	if (!pages_per_slot)
+		return false;
+
+	if ((pages_per_slot * guest_page_size) % host_page_size)
+		return false;
+
+	if ((rempages * guest_page_size) % host_page_size)
+		return false;
+
+	return true;
+}
+
+
+static uint64_t get_max_slots(struct vm_data *data, uint32_t host_page_size)
+{
+	uint32_t guest_page_size = data->vm->page_size;
+	uint64_t mempages, pages_per_slot, rempages;
+	uint64_t slots;
+
+	mempages = data->npages;
+	slots = data->nslots;
+	while (--slots > 1) {
+		pages_per_slot = mempages / slots;
+		rempages = mempages % pages_per_slot;
+		if (check_slot_pages(host_page_size, guest_page_size,
+				     pages_per_slot, rempages))
+			return slots + 1;	/* slot 0 is reserved */
+	}
+
+	return 0;
+}
+
 static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 		       void *guest_code, uint64_t mem_size,
 		       struct timespec *slot_runtime)
 {
 	uint64_t mempages, rempages;
 	uint64_t guest_addr;
-	uint32_t slot, guest_page_size;
+	uint32_t slot, host_page_size, guest_page_size;
 	struct timespec tstart;
 	struct sync_area *sync;
 
+	host_page_size = getpagesize();
 	guest_page_size = vm_guest_mode_params[VM_MODE_DEFAULT].page_size;
 	mempages = mem_size / guest_page_size;
 
@@ -250,12 +286,13 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 	TEST_ASSERT(data->npages > 1, "Can't test without any memory");
 	data->nslots = nslots;
 	data->pages_per_slot = data->npages / data->nslots;
-	if (!data->pages_per_slot) {
-		*maxslots = data->npages + 1;
+	rempages = data->npages % data->nslots;
+	if (!check_slot_pages(host_page_size, guest_page_size,
+			      data->pages_per_slot, rempages)) {
+		*maxslots = get_max_slots(data, host_page_size);
 		return false;
 	}
 
-	rempages = data->npages % data->nslots;
 	data->hva_slots = malloc(sizeof(*data->hva_slots) * data->nslots);
 	TEST_ASSERT(data->hva_slots, "malloc() fail");
 
-- 
2.23.0
