From: Ben Gardon <bgardon@google.com>
To: David Matlack <dmatlack@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
kvm@vger.kernel.org, Sean Christopherson <seanjc@google.com>,
Andrew Jones <drjones@redhat.com>,
Jim Mattson <jmattson@google.com>,
Yanan Wang <wangyanan55@huawei.com>, Peter Xu <peterx@redhat.com>,
Aaron Lewis <aaronlewis@google.com>
Subject: Re: [PATCH v2 11/12] KVM: selftests: Fill per-vCPU struct during "perf_test" VM creation
Date: Thu, 11 Nov 2021 09:53:57 -0800 [thread overview]
Message-ID: <CANgfPd-mEk4Q5uEXCz+J27kKCrNL=-YMNsK0X3C9p8LtT5NmLw@mail.gmail.com> (raw)
In-Reply-To: <20211111000310.1435032-12-dmatlack@google.com>

On Wed, Nov 10, 2021 at 4:03 PM David Matlack <dmatlack@google.com> wrote:
>
> From: Sean Christopherson <seanjc@google.com>
>
> Fill the per-vCPU args when creating the perf_test VM instead of having
> the caller do so. This helps ensure that any adjustments to the number
> of pages (and thus vcpu_memory_bytes) are reflected in the per-VM args.
> Automatically filling the per-vCPU args will also allow a future patch
> to do the sync to the guest during creation.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> [Updated access_tracking_perf_test as well.]
> Signed-off-by: David Matlack <dmatlack@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
> ---
> .../selftests/kvm/access_tracking_perf_test.c | 5 +-
> .../selftests/kvm/demand_paging_test.c | 5 +-
> .../selftests/kvm/dirty_log_perf_test.c | 6 +-
> .../selftests/kvm/include/perf_test_util.h | 6 +-
> .../selftests/kvm/lib/perf_test_util.c | 71 ++++++++++---------
> .../kvm/memslot_modification_stress_test.c | 6 +-
> 6 files changed, 45 insertions(+), 54 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
> index 5d95113c7b7c..fdef6c906388 100644
> --- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
> +++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
> @@ -332,10 +332,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> int vcpus = params->vcpus;
>
> vm = perf_test_create_vm(mode, vcpus, params->vcpu_memory_bytes, 1,
> - params->backing_src);
> -
> - perf_test_setup_vcpus(vm, vcpus, params->vcpu_memory_bytes,
> - !overlap_memory_access);
> + params->backing_src, !overlap_memory_access);
>
> vcpu_threads = create_vcpu_threads(vcpus);
>
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index 3c729a0a1ab1..0fee44f5e5ae 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -293,7 +293,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> int r;
>
> vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
> - p->src_type);
> + p->src_type, p->partition_vcpu_memory_access);
>
> perf_test_args.wr_fract = 1;
>
> @@ -307,9 +307,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
> TEST_ASSERT(vcpu_threads, "Memory allocation failed");
>
> - perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size,
> - p->partition_vcpu_memory_access);
> -
> if (p->uffd_mode) {
> uffd_handler_threads =
> malloc(nr_vcpus * sizeof(*uffd_handler_threads));
> diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> index 7ffab5bd5ce5..62f9cc2a3146 100644
> --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> @@ -186,7 +186,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> struct timespec clear_dirty_log_total = (struct timespec){0};
>
> vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
> - p->slots, p->backing_src);
> + p->slots, p->backing_src,
> + p->partition_vcpu_memory_access);
>
> perf_test_args.wr_fract = p->wr_fract;
>
> @@ -206,9 +207,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
> TEST_ASSERT(vcpu_threads, "Memory allocation failed");
>
> - perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size,
> - p->partition_vcpu_memory_access);
> -
> sync_global_to_guest(vm, perf_test_args);
>
> /* Start the iterations */
> diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
> index 9348580dc5be..91804be1cf53 100644
> --- a/tools/testing/selftests/kvm/include/perf_test_util.h
> +++ b/tools/testing/selftests/kvm/include/perf_test_util.h
> @@ -39,10 +39,8 @@ extern struct perf_test_args perf_test_args;
>
> struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
> uint64_t vcpu_memory_bytes, int slots,
> - enum vm_mem_backing_src_type backing_src);
> + enum vm_mem_backing_src_type backing_src,
> + bool partition_vcpu_memory_access);
> void perf_test_destroy_vm(struct kvm_vm *vm);
> -void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
> - uint64_t vcpu_memory_bytes,
> - bool partition_vcpu_memory_access);
>
> #endif /* SELFTEST_KVM_PERF_TEST_UTIL_H */
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index b3154b5b0cfd..13c8bc22f4e1 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -48,9 +48,43 @@ static void guest_code(uint32_t vcpu_id)
> }
> }
>
> +void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
> + uint64_t vcpu_memory_bytes,
> + bool partition_vcpu_memory_access)
> +{
> + struct perf_test_args *pta = &perf_test_args;
> + struct perf_test_vcpu_args *vcpu_args;
> + int vcpu_id;
> +
> + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> + vcpu_args = &pta->vcpu_args[vcpu_id];
> +
> + vcpu_args->vcpu_id = vcpu_id;
> + if (partition_vcpu_memory_access) {
> + vcpu_args->gva = guest_test_virt_mem +
> + (vcpu_id * vcpu_memory_bytes);
> + vcpu_args->pages = vcpu_memory_bytes /
> + pta->guest_page_size;
> + vcpu_args->gpa = pta->gpa + (vcpu_id * vcpu_memory_bytes);
> + } else {
> + vcpu_args->gva = guest_test_virt_mem;
> + vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
> + pta->guest_page_size;
> + vcpu_args->gpa = pta->gpa;
> + }
> +
> + vcpu_args_set(vm, vcpu_id, 1, vcpu_id);
> +
> + pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
> + vcpu_id, vcpu_args->gpa, vcpu_args->gpa +
> + (vcpu_args->pages * pta->guest_page_size));
> + }
> +}
> +
> struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
> uint64_t vcpu_memory_bytes, int slots,
> - enum vm_mem_backing_src_type backing_src)
> + enum vm_mem_backing_src_type backing_src,
> + bool partition_vcpu_memory_access)
> {
> struct perf_test_args *pta = &perf_test_args;
> struct kvm_vm *vm;
> @@ -119,6 +153,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
> /* Do mapping for the demand paging memory slot */
> virt_map(vm, guest_test_virt_mem, pta->gpa, guest_num_pages);
>
> + perf_test_setup_vcpus(vm, vcpus, vcpu_memory_bytes, partition_vcpu_memory_access);
> +
> ucall_init(vm, NULL);
>
> return vm;
> @@ -129,36 +165,3 @@ void perf_test_destroy_vm(struct kvm_vm *vm)
> ucall_uninit(vm);
> kvm_vm_free(vm);
> }
> -
> -void perf_test_setup_vcpus(struct kvm_vm *vm, int vcpus,
> - uint64_t vcpu_memory_bytes,
> - bool partition_vcpu_memory_access)
> -{
> - struct perf_test_args *pta = &perf_test_args;
> - struct perf_test_vcpu_args *vcpu_args;
> - int vcpu_id;
> -
> - for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) {
> - vcpu_args = &pta->vcpu_args[vcpu_id];
> -
> - vcpu_args->vcpu_id = vcpu_id;
> - if (partition_vcpu_memory_access) {
> - vcpu_args->gva = guest_test_virt_mem +
> - (vcpu_id * vcpu_memory_bytes);
> - vcpu_args->pages = vcpu_memory_bytes /
> - pta->guest_page_size;
> - vcpu_args->gpa = pta->gpa + (vcpu_id * vcpu_memory_bytes);
> - } else {
> - vcpu_args->gva = guest_test_virt_mem;
> - vcpu_args->pages = (vcpus * vcpu_memory_bytes) /
> - pta->guest_page_size;
> - vcpu_args->gpa = pta->gpa;
> - }
> -
> - vcpu_args_set(vm, vcpu_id, 1, vcpu_id);
> -
> - pr_debug("Added VCPU %d with test mem gpa [%lx, %lx)\n",
> - vcpu_id, vcpu_args->gpa, vcpu_args->gpa +
> - (vcpu_args->pages * pta->guest_page_size));
> - }
> -}
> diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> index d105180d5e8c..27af0bb8deb7 100644
> --- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> +++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
> @@ -105,16 +105,14 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> int vcpu_id;
>
> vm = perf_test_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
> - VM_MEM_SRC_ANONYMOUS);
> + VM_MEM_SRC_ANONYMOUS,
> + p->partition_vcpu_memory_access);
>
> perf_test_args.wr_fract = 1;
>
> vcpu_threads = malloc(nr_vcpus * sizeof(*vcpu_threads));
> TEST_ASSERT(vcpu_threads, "Memory allocation failed");
>
> - perf_test_setup_vcpus(vm, nr_vcpus, guest_percpu_mem_size,
> - p->partition_vcpu_memory_access);
> -
> /* Export the shared variables to the guest */
> sync_global_to_guest(vm, perf_test_args);
>
> --
> 2.34.0.rc1.387.gb447b232ab-goog
>