From: Auger Eric <eric.auger@redhat.com>
To: Marc Zyngier <maz@kernel.org>
Cc: linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
eric.auger.pro@gmail.com
Subject: Re: [RFC 2/3] KVM: arm64: pmu: Fix chained SW_INCR counters
Date: Thu, 5 Dec 2019 15:06:54 +0100 [thread overview]
Message-ID: <2b30c1ca-3bc0-9f73-4bea-ee42bb74cbac@redhat.com> (raw)
In-Reply-To: <561ac6df385e977cc51d51a8ab28ee49@www.loen.fr>
Hi Marc,
On 12/5/19 10:43 AM, Marc Zyngier wrote:
> Hi Eric,
>
> On 2019-12-04 20:44, Eric Auger wrote:
>> At the moment a SW_INCR counter always overflows on a 32-bit
>> boundary, regardless of whether the (n+1)th counter is
>> programmed as CHAIN.
>>
>> Check whether the SW_INCR counter is a 64-bit counter and if so,
>> implement the 64-bit logic.
>>
>> Fixes: 80f393a23be6 ("KVM: arm/arm64: Support chained PMU counters")
>> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>> ---
>> virt/kvm/arm/pmu.c | 16 +++++++++++++++-
>> 1 file changed, 15 insertions(+), 1 deletion(-)
>>
>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>> index c3f8b059881e..7ab477db2f75 100644
>> --- a/virt/kvm/arm/pmu.c
>> +++ b/virt/kvm/arm/pmu.c
>> @@ -491,6 +491,8 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>>
>> enable = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>> for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
>> + bool chained = test_bit(i >> 1, vcpu->arch.pmu.chained);
>> +
>
> I'd rather you use kvm_pmu_pmc_is_chained() rather than open-coding
> this. But see below:
>
>> if (!(val & BIT(i)))
>> continue;
>> type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
>> @@ -500,8 +502,20 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>> reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
>> reg = lower_32_bits(reg);
>> __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
>> - if (!reg)
>> + if (reg) /* no overflow */
>> + continue;
>> + if (chained) {
>> + reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1;
>> + reg = lower_32_bits(reg);
>> + __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
>> + if (reg)
>> + continue;
>> + /* mark an overflow on high counter */
>> + __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
>> + } else {
>> + /* mark an overflow */
>> __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
>> + }
>> }
>> }
>> }
>
> I think the whole function is a bit of a mess, and could be better
> structured to treat 64bit counters as a first class citizen.
>
> I'm suggesting something along those lines, which tries to
> streamline things a bit and keep the flow uniform between the
> two word sizes. IMHO, it helps reasoning about it and gives
> scope to the ARMv8.5 full 64bit counters... It is of course
> completely untested.
Looks OK to me as well. One remark though: don't we need to test whether
the (n+1)th reg is enabled before incrementing it?
Thanks
Eric
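
The check being asked about can be sketched in plain C. This is a
model with hypothetical state arrays standing in for the system
registers, not the actual KVM code; whether the architecture requires
this gating (or whether a disabled high half should surface the
overflow elsewhere) is exactly the open question here:

```c
#include <stdint.h>

/* Minimal model of a chained counter pair -- hypothetical state,
 * not the actual KVM structures.  Counter i holds the low 32 bits,
 * counter i+1 the high 32 bits. */
static uint32_t pmevcntr[32];   /* models PMEVCNTR<n>_EL0 */
static uint32_t pmcntenset;     /* models PMCNTENSET_EL0  */
static uint32_t pmovsset;       /* models PMOVSSET_EL0    */

/* SW_INCR on the low counter of a chained pair, with the extra
 * enable test suggested above: only propagate the carry into the
 * high counter if that counter is itself enabled. */
static void sw_incr_chained(int i)
{
	if (++pmevcntr[i])                   /* low half did not wrap */
		return;
	if (!(pmcntenset & (1u << (i + 1)))) /* the proposed check */
		return;
	if (!++pmevcntr[i + 1])              /* high half wrapped */
		pmovsset |= 1u << (i + 1);   /* 64-bit overflow */
}
```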
>
> Thoughts?
>
> M.
>
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 8731dfeced8b..cf371f643ade 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -480,26 +480,43 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
> */
> void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
> {
> + struct kvm_pmu *pmu = &vcpu->arch.pmu;
> int i;
> - u64 type, enable, reg;
>
> - if (val == 0)
> - return;
> + /* Weed out disabled counters */
> + val &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>
> - enable = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
> for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
> + u64 type, reg;
> + int ovs = i;
> +
> if (!(val & BIT(i)))
> continue;
> - type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
> - & ARMV8_PMU_EVTYPE_EVENT;
> - if ((type == ARMV8_PMUV3_PERFCTR_SW_INCR)
> - && (enable & BIT(i))) {
> - reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
> - reg = lower_32_bits(reg);
> - __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
> - if (!reg)
> - __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
> +
> + /* PMSWINC only applies to ... SW_INC! */
> + type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
> + type &= ARMV8_PMU_EVTYPE_EVENT;
> + if (type != ARMV8_PMUV3_PERFCTR_SW_INCR)
> + continue;
> +
> + /* Potential 64bit value */
> + reg = kvm_pmu_get_counter_value(vcpu, i) + 1;
> +
> + /* Start by writing back the low 32bits */
> + __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = lower_32_bits(reg);
> +
> + /*
> + * 64bit counter? Write back the upper bits and target
> + * the overflow bit at the next counter
> + */
> + if (kvm_pmu_pmc_is_chained(&pmu->pmc[i])) {
> + reg = upper_32_bits(reg);
> + __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
> + ovs++;
> }
> +
> + if (!lower_32_bits(reg))
> + __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(ovs);
> }
> }
>
>
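
The streamlined flow above can be modeled outside the kernel as
follows. The state arrays are again hypothetical stand-ins for the
system registers, lower_32_bits()/upper_32_bits() mirror the kernel
helpers, and the wrap test is written out explicitly for both widths:

```c
#include <stdint.h>

static inline uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static inline uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

static uint32_t pmevcntr[32];   /* models PMEVCNTR<n>_EL0 */
static uint32_t pmovsset;       /* models PMOVSSET_EL0    */

/* Uniform SW_INCR flow: build the (potentially 64-bit) value,
 * increment once, write the halves back, and target the overflow
 * bit at the high counter when the pair is chained. */
static void sw_incr(int i, int chained)
{
	uint64_t reg = pmevcntr[i];
	int ovs = i;

	if (chained)
		reg |= (uint64_t)pmevcntr[i + 1] << 32;
	reg++;

	pmevcntr[i] = lower_32_bits(reg);
	if (chained) {
		pmevcntr[i + 1] = upper_32_bits(reg);
		ovs++;
	}

	/* Overflow when the counter, at its programmed width, wraps. */
	if (chained ? reg == 0 : lower_32_bits(reg) == 0)
		pmovsset |= 1u << ovs;
}
```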