From: Ricardo Koller <ricarkol@google.com>
To: Colton Lewis <coltonlewis@google.com>
Cc: kvm@vger.kernel.org, pbonzini@redhat.com, maz@kernel.org,
dmatlack@google.com, seanjc@google.com, bgardon@google.com,
oupton@google.com
Subject: Re: [PATCH 3/3] KVM: selftests: Print summary stats of memory latency distribution
Date: Tue, 17 Jan 2023 12:45:00 -0800 [thread overview]
Message-ID: <Y8cIzKf52fzf0/d4@google.com> (raw)
In-Reply-To: <20221115173258.2530923-4-coltonlewis@google.com>
On Tue, Nov 15, 2022 at 05:32:58PM +0000, Colton Lewis wrote:
> Print summary stats of the memory latency distribution in
> nanoseconds. For every iteration, this prints the minimum, the
> maximum, and the 50th, 90th, and 99th percentiles.
>
> Stats are calculated by sorting the samples taken from all vcpus and
> picking from the index corresponding with each percentile.
>
> The conversion to nanoseconds needs the frequency of the Intel
> time-stamp counter (TSC), which is estimated by reading the counter
> before and after sleeping for 1 second. This is not a pretty trick,
> but the same trick is already used in vmx_nested_tsc_scaling_test.c.
>
> Signed-off-by: Colton Lewis <coltonlewis@google.com>
> ---
> .../selftests/kvm/dirty_log_perf_test.c | 2 +
> .../selftests/kvm/include/perf_test_util.h | 2 +
> .../selftests/kvm/lib/perf_test_util.c | 62 +++++++++++++++++++
> 3 files changed, 66 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> index 202f38a72851..2bc066bba460 100644
> --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> @@ -274,6 +274,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> ts_diff = timespec_elapsed(start);
> pr_info("Populate memory time: %ld.%.9lds\n",
> ts_diff.tv_sec, ts_diff.tv_nsec);
> + perf_test_print_percentiles(vm, nr_vcpus);
>
> /* Enable dirty logging */
> clock_gettime(CLOCK_MONOTONIC, &start);
> @@ -304,6 +305,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> vcpu_dirty_total = timespec_add(vcpu_dirty_total, ts_diff);
> pr_info("Iteration %d dirty memory time: %ld.%.9lds\n",
> iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
> + perf_test_print_percentiles(vm, nr_vcpus);
>
> clock_gettime(CLOCK_MONOTONIC, &start);
> get_dirty_log(vm, bitmaps, p->slots);
> diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
> index 3d0b75ea866a..ca378c262f12 100644
> --- a/tools/testing/selftests/kvm/include/perf_test_util.h
> +++ b/tools/testing/selftests/kvm/include/perf_test_util.h
> @@ -47,6 +47,8 @@ struct perf_test_args {
>
> extern struct perf_test_args perf_test_args;
>
> +void perf_test_print_percentiles(struct kvm_vm *vm, int nr_vcpus);
> +
> struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int nr_vcpus,
> uint64_t vcpu_memory_bytes, int slots,
> enum vm_mem_backing_src_type backing_src,
> diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
> index 0311da76bae0..927d22421f7c 100644
> --- a/tools/testing/selftests/kvm/lib/perf_test_util.c
> +++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
> @@ -115,6 +115,68 @@ void perf_test_guest_code(uint32_t vcpu_idx)
> }
> }
>
> +#if defined(__x86_64__)
> +/* This could be determined with the right sequence of cpuid
> + * instructions, but that's oddly complicated.
> + */
> +static uint64_t perf_test_intel_timer_frequency(void)
> +{
> + uint64_t count_before;
> + uint64_t count_after;
> + uint64_t measured_freq;
> + uint64_t adjusted_freq;
> +
> + count_before = perf_test_timer_read();
> + sleep(1);
> + count_after = perf_test_timer_read();
> +
> + /* Using 1 second implies our units are in Hz already. */
> + measured_freq = count_after - count_before;
> + /* Round down to a whole MHz; clock frequencies are round numbers. */
> + adjusted_freq = measured_freq / 1000000 * 1000000;
> +
> + return adjusted_freq;
> +}
> +#endif
> +
> +static double perf_test_cycles_to_ns(double cycles)
> +{
> +#if defined(__aarch64__)
> + return cycles * (1e9 / timer_get_cntfrq());
> +#elif defined(__x86_64__)
> + static uint64_t timer_frequency;
> +
> + if (timer_frequency == 0)
> + timer_frequency = perf_test_intel_timer_frequency();
> +
> + return cycles * (1e9 / timer_frequency);
> +#else
> +#warning "perf_test_cycles_to_ns is not implemented for this architecture, will return 0"
> + return 0.0;
> +#endif
> +}
> +
> +/* compare function for qsort; samples are uint64_t, so don't compare as int */
> +static int perf_test_qcmp(const void *a, const void *b)
> +{
> + uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
> + return x < y ? -1 : x > y;
> +}
> +
> +void perf_test_print_percentiles(struct kvm_vm *vm, int nr_vcpus)
> +{
> + uint64_t n_samples = nr_vcpus * SAMPLES_PER_VCPU;
> +
> + sync_global_from_guest(vm, latency_samples);
> + qsort(latency_samples, n_samples, sizeof(uint64_t), &perf_test_qcmp);
> +
> + pr_info("Latency distribution (ns) = min:%6.0lf, 50th:%6.0lf, 90th:%6.0lf, 99th:%6.0lf, max:%6.0lf\n",
> + perf_test_cycles_to_ns((double)latency_samples[0]),
> + perf_test_cycles_to_ns((double)latency_samples[n_samples / 2]),
> + perf_test_cycles_to_ns((double)latency_samples[n_samples * 9 / 10]),
> + perf_test_cycles_to_ns((double)latency_samples[n_samples * 99 / 100]),
> + perf_test_cycles_to_ns((double)latency_samples[n_samples - 1]));
> +}
Latency distribution (ns) = min:   732, 50th:   792, 90th:   901, 99th: ...
                                ^^^
nit: would prefer to avoid the spaces
> +
> void perf_test_setup_vcpus(struct kvm_vm *vm, int nr_vcpus,
> struct kvm_vcpu *vcpus[],
> uint64_t vcpu_memory_bytes,
> --
> 2.38.1.431.g37b22c650d-goog
>
Thread overview: 19+ messages
2022-11-15 17:32 [PATCH 0/3] Calculate memory access latency stats Colton Lewis
2022-11-15 17:32 ` [PATCH 1/3] KVM: selftests: Allocate additional space for latency samples Colton Lewis
2023-01-17 20:32 ` Ricardo Koller
2023-01-18 16:49 ` Sean Christopherson
2023-01-26 18:00 ` Colton Lewis
2022-11-15 17:32 ` [PATCH 2/3] KVM: selftests: Collect memory access " Colton Lewis
2023-01-17 20:43 ` Ricardo Koller
2023-01-18 16:32 ` Sean Christopherson
2023-01-26 18:00 ` Colton Lewis
2023-01-26 19:07 ` Sean Christopherson
2023-01-26 17:58 ` Colton Lewis
2023-01-26 18:30 ` Ricardo Koller
2023-01-17 20:48 ` Ricardo Koller
2023-01-26 17:59 ` Colton Lewis
2022-11-15 17:32 ` [PATCH 3/3] KVM: selftests: Print summary stats of memory latency distribution Colton Lewis
2023-01-17 20:45 ` Ricardo Koller [this message]
2023-01-26 17:58 ` Colton Lewis
2023-01-18 16:43 ` Sean Christopherson
2023-01-26 17:59 ` Colton Lewis