From: Marc Zyngier <maz@kernel.org>
To: Auger Eric <eric.auger@redhat.com>
Cc: <eric.auger.pro@gmail.com>, <linux-kernel@vger.kernel.org>,
	<kvmarm@lists.cs.columbia.edu>, <james.morse@arm.com>,
	<andrew.murray@arm.com>, <suzuki.poulose@arm.com>,
	<drjones@redhat.com>
Subject: Re: [RFC 2/3] KVM: arm64: pmu: Fix chained SW_INCR counters
Date: Thu, 05 Dec 2019 14:52:26 +0000	[thread overview]
Message-ID: <15507faca89a980056df7119e105e82a@www.loen.fr> (raw)
In-Reply-To: <2b30c1ca-3bc0-9f73-4bea-ee42bb74cbac@redhat.com>

On 2019-12-05 14:06, Auger Eric wrote:
> Hi Marc,
>
> On 12/5/19 10:43 AM, Marc Zyngier wrote:
>> Hi Eric,
>>
>> On 2019-12-04 20:44, Eric Auger wrote:
>>> At the moment a SW_INCR counter always overflows on a 32-bit
>>> boundary, independently of whether the n+1th counter is
>>> programmed as CHAIN.
>>>
>>> Check whether the SW_INCR counter is a 64b counter and if so,
>>> implement the 64b logic.
>>>
>>> Fixes: 80f393a23be6 ("KVM: arm/arm64: Support chained PMU counters")
>>> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>>> ---
>>>  virt/kvm/arm/pmu.c | 16 +++++++++++++++-
>>>  1 file changed, 15 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>>> index c3f8b059881e..7ab477db2f75 100644
>>> --- a/virt/kvm/arm/pmu.c
>>> +++ b/virt/kvm/arm/pmu.c
>>> @@ -491,6 +491,8 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>>>
>>>      enable = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>>>      for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
>>> +        bool chained = test_bit(i >> 1, vcpu->arch.pmu.chained);
>>> +
>>
>> I'd rather you use kvm_pmu_pmc_is_chained() rather than open-coding
>> this. But see below:
>>
>>>          if (!(val & BIT(i)))
>>>              continue;
>>>          type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
>>> @@ -500,8 +502,20 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>>>              reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
>>>              reg = lower_32_bits(reg);
>>>              __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
>>> -            if (!reg)
>>> +            if (reg) /* no overflow */
>>> +                continue;
>>> +            if (chained) {
>>> +                reg = __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1;
>>> +                reg = lower_32_bits(reg);
>>> +                __vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
>>> +                if (reg)
>>> +                    continue;
>>> +                /* mark an overflow on high counter */
>>> +                __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
>>> +            } else {
>>> +                /* mark an overflow */
>>>                  __vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
>>> +            }
>>>          }
>>>      }
>>>  }
>>
>> I think the whole function is a bit of a mess, and could be better
>> structured to treat 64bit counters as a first class citizen.
>>
>> I'm suggesting something along those lines, which tries to
>> streamline things a bit and keep the flow uniform between the
>> two word sizes. IMHO, it helps reasoning about it and gives
>> scope to the ARMv8.5 full 64bit counters... It is of course
>> completely untested.
>
> Looks OK to me as well. One remark though: don't we need to test if
> the n+1th reg is enabled before incrementing it?

Hmmm. I'm not sure. I think we should make sure that we don't flag
a counter as being chained if the odd counter is disabled, rather
than checking it here. Unless the odd counter is both chained
*and* enabled, we shouldn't touch it.
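
To make the earlier suggestion concrete, the increment path would then
look something like this (a rough sketch only, reusing
kvm_pmu_pmc_is_chained() as suggested above, and just as untested):

void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
{
	struct kvm_pmu *pmu = &vcpu->arch.pmu;
	int i;

	/* Nothing to do if the PMU is globally disabled (PMCR_EL0.E clear) */
	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
		return;

	/* Weed out the disabled counters */
	val &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);

	for (i = 0; i < ARMV8_PMU_CYCLE_IDX; i++) {
		u64 type, reg;

		if (!(val & BIT(i)))
			continue;

		/* PMSWINC only applies to SW_INCR events */
		type = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
		type &= ARMV8_PMU_EVTYPE_EVENT;
		if (type != ARMV8_PMUV3_PERFCTR_SW_INCR)
			continue;

		/* Increment the low 32bit half and detect a wrap */
		reg = lower_32_bits(__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1);
		__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
		if (reg) /* no overflow */
			continue;

		if (kvm_pmu_pmc_is_chained(&pmu->pmc[i])) {
			/* Carry the wrap into the high half of the pair */
			reg = lower_32_bits(__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) + 1);
			__vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i + 1) = reg;
			if (!reg) /* 64bit overflow: flag the high counter */
				__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i + 1);
		} else {
			/* 32bit overflow: flag this counter */
			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(i);
		}
	}
}

Note that this never looks at the odd counter's enable bit directly:
it relies on the chained flag only being set while the pair is
enabled, which is what the change below enforces.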

Again, untested:

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index cf371f643ade..47366817cd2a 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -15,6 +15,7 @@
  #include <kvm/arm_vgic.h>

  static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
+static void kvm_pmu_update_pmc_chained(struct kvm_vcpu *vcpu, u64 select_idx);

  #define PERF_ATTR_CFG1_KVM_PMU_CHAINED 0x1

@@ -298,6 +299,7 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
  		 * For high counters of chained events we must recreate the
  		 * perf event with the long (64bit) attribute set.
  		 */
+		kvm_pmu_update_pmc_chained(vcpu, i);
  		if (kvm_pmu_pmc_is_chained(pmc) &&
  		    kvm_pmu_idx_is_high_counter(i)) {
  			kvm_pmu_create_perf_event(vcpu, i);
@@ -645,7 +647,8 @@ static void kvm_pmu_update_pmc_chained(struct kvm_vcpu *vcpu, u64 select_idx)
  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];

-	if (kvm_pmu_idx_has_chain_evtype(vcpu, pmc->idx)) {
+	if (kvm_pmu_idx_has_chain_evtype(vcpu, pmc->idx) &&
+	    kvm_pmu_counter_is_enabled(vcpu, pmc->idx)) {
  		/*
  		 * During promotion from !chained to chained we must ensure
  		 * the adjacent counter is stopped and its event destroyed
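
(Here kvm_pmu_counter_is_enabled() is assumed to check both the global
PMCR_EL0.E bit and the counter's own PMCNTENSET_EL0 bit, i.e. something
along the lines of:

static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
{
	/* Require both the global and the per-counter enable bits */
	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
}

so that a counter pair is only ever promoted to chained while it is
actually counting.)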

What do you think?

         M.
-- 
Jazz is not dead. It just smells funny...

