From: Paolo Bonzini <pbonzini@redhat.com>
To: Jim Mattson <jmattson@google.com>, kvm@vger.kernel.org
Cc: Eric Hankland <ehankland@google.com>
Subject: Re: [PATCH 1/2] KVM: x86: Update vPMCs when retiring instructions
Date: Tue, 16 Nov 2021 11:30:02 +0100
Message-ID: <5ce937a9-5c14-189a-2aff-08476fb942f2@redhat.com>
In-Reply-To: <20211112235235.1125060-2-jmattson@google.com>
On 11/13/21 00:52, Jim Mattson wrote:
> When KVM retires a guest instruction through emulation, increment any
> vPMCs that are configured to monitor "instructions retired," and
> update the sample period of those counters so that they will overflow
> at the right time.
>
> Signed-off-by: Eric Hankland <ehankland@google.com>
> [jmattson:
> - Split the code to increment "branch instructions retired" into a
> separate commit.
> - Added 'static' to kvm_pmu_incr_counter() definition.
> - Modified kvm_pmu_incr_counter() to check pmc->perf_event->state ==
> PERF_EVENT_STATE_ACTIVE.
> ]
> Signed-off-by: Jim Mattson <jmattson@google.com>
> Fixes: f5132b01386b ("KVM: Expose a version 2 architectural PMU to a guests")
Queued both, with the addition of an

+	if (!pmu->event_count)
+		return;

check in kvm_pmu_record_event.
Paolo
> ---
> arch/x86/kvm/pmu.c | 31 +++++++++++++++++++++++++++++++
> arch/x86/kvm/pmu.h | 1 +
> arch/x86/kvm/x86.c | 3 +++
> 3 files changed, 35 insertions(+)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 09873f6488f7..153c488032a5 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -490,6 +490,37 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
> kvm_pmu_reset(vcpu);
> }
>
> +static void kvm_pmu_incr_counter(struct kvm_pmc *pmc, u64 evt)
> +{
> + u64 counter_value, sample_period;
> +
> + if (pmc->perf_event &&
> + pmc->perf_event->attr.type == PERF_TYPE_HARDWARE &&
> + pmc->perf_event->state == PERF_EVENT_STATE_ACTIVE &&
> + pmc->perf_event->attr.config == evt) {
> + pmc->counter++;
> + counter_value = pmc_read_counter(pmc);
> + sample_period = get_sample_period(pmc, counter_value);
> + if (!counter_value)
> + perf_event_overflow(pmc->perf_event, NULL, NULL);
> + if (local64_read(&pmc->perf_event->hw.period_left) >
> + sample_period)
> + perf_event_period(pmc->perf_event, sample_period);
> + }
> +}
> +
> +void kvm_pmu_record_event(struct kvm_vcpu *vcpu, u64 evt)
> +{
> + struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> + int i;
> +
> + for (i = 0; i < pmu->nr_arch_gp_counters; i++)
> + kvm_pmu_incr_counter(&pmu->gp_counters[i], evt);
> + for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
> + kvm_pmu_incr_counter(&pmu->fixed_counters[i], evt);
> +}
> +EXPORT_SYMBOL_GPL(kvm_pmu_record_event);
> +
> int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
> {
> struct kvm_pmu_event_filter tmp, *filter;
> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> index 59d6b76203d5..d1dd2294f8fb 100644
> --- a/arch/x86/kvm/pmu.h
> +++ b/arch/x86/kvm/pmu.h
> @@ -159,6 +159,7 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu);
> void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
> void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
> int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
> +void kvm_pmu_record_event(struct kvm_vcpu *vcpu, u64 evt);
>
> bool is_vmware_backdoor_pmc(u32 pmc_idx);
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d7def720227d..bd49e2a204d5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7854,6 +7854,8 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
> if (unlikely(!r))
> return 0;
>
> + kvm_pmu_record_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
> +
> /*
> * rflags is the old, "raw" value of the flags. The new value has
> * not been saved yet.
> @@ -8101,6 +8103,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
> if (!ctxt->have_exception ||
> exception_type(ctxt->exception.vector) == EXCPT_TRAP) {
> + kvm_pmu_record_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
> kvm_rip_write(vcpu, ctxt->eip);
> if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
> r = kvm_vcpu_do_singlestep(vcpu);
>