From: Marc Zyngier <maz@kernel.org>
To: Alexandre Chartre <alexandre.chartre@oracle.com>
Cc: will@kernel.org, catalin.marinas@arm.com,
	alexandru.elisei@arm.com, james.morse@arm.com,
	suzuki.poulose@arm.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
	konrad.wilk@oracle.com
Subject: Re: [PATCH] KVM: arm64: Disabling disabled PMU counters wastes a lot of time
Date: Tue, 29 Jun 2021 15:25:13 +0100	[thread overview]
Message-ID: <62e6fa4693c87e7233642e7192344562@kernel.org> (raw)
In-Reply-To: <abcbd6db-da75-a6ad-01f3-7c614172ebd4@oracle.com>

On 2021-06-29 15:17, Alexandre Chartre wrote:
> On 6/29/21 3:47 PM, Marc Zyngier wrote:
>> On Tue, 29 Jun 2021 14:16:55 +0100,
>> Alexandre Chartre <alexandre.chartre@oracle.com> wrote:
>>> 
>>> 
>>> Hi Marc,
>>> 
>>> On 6/29/21 11:06 AM, Marc Zyngier wrote:
>>>> Hi Alexandre,
>> 
>> [...]
>> 
>>>> So the sysreg is the only thing we should consider, and I think we
>>>> should drop the useless masking. There is at least another instance of
>>>> this in the PMU code (kvm_pmu_overflow_status()), and apart from
>>>> kvm_pmu_vcpu_reset(), only the sysreg accessors should care about the
>>>> masking to sanitise accesses.
>>>> 
>>>> What do you think?
>>>> 
>>> 
>>> I think you are right. PMCNTENSET_EL0 is already masked with
>>> kvm_pmu_valid_counter_mask() so there's effectively no need to mask
>>> it again when we use it. I will send an additional patch (on top of
>>> this one) to remove useless masking. Basically, changes would be:
>>> 
>>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
>>> index bab4b735a0cf..e0dfd7ce4ba0 100644
>>> --- a/arch/arm64/kvm/pmu-emul.c
>>> +++ b/arch/arm64/kvm/pmu-emul.c
>>> @@ -373,7 +373,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
>>>                  reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>>>                  reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>>>                  reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
>>> -               reg &= kvm_pmu_valid_counter_mask(vcpu);
>>>          }
>>>           return reg;
>>> @@ -564,21 +563,22 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>>>    */
>>>   void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>>>   {
>>> -       unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
>>> +       unsigned long mask;
>>>          int i;
>>>           if (val & ARMV8_PMU_PMCR_E) {
>>>                  kvm_pmu_enable_counter_mask(vcpu,
>>> -                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
>>> +                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
>>>          } else {
>>>                  kvm_pmu_disable_counter_mask(vcpu,
>>> -                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
>>> +                      __vcpu_sys_reg(vcpu, PMCNTENSET_EL0));
>>>          }
>>>           if (val & ARMV8_PMU_PMCR_C)
>>>                  kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
>>>           if (val & ARMV8_PMU_PMCR_P) {
>>> +               mask = kvm_pmu_valid_counter_mask(vcpu);
>> 
>> Careful here, this clashes with a fix from Alexandru that is currently
>> in -next (PMCR_EL0.P shouldn't reset the cycle counter) and aimed at
>> 5.14. And whilst you're at it, consider moving the 'mask' declaration
>> here too.
>> 
>>>                  for_each_set_bit(i, &mask, 32)
>>>                          kvm_pmu_set_counter_value(vcpu, i, 0);
>>>          }
>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>> index 1a7968ad078c..2e406905760e 100644
>>> --- a/arch/arm64/kvm/sys_regs.c
>>> +++ b/arch/arm64/kvm/sys_regs.c
>>> @@ -845,7 +845,7 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>>                          kvm_pmu_disable_counter_mask(vcpu, val);
>>>                  }
>>>          } else {
>>> -               p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
>>> +               p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
>>>          }
>>>           return true;
>> 
>> If you are cleaning up the read-side of sysregs, access_pminten() and
>> access_pmovs() could have some of your attention too.
>> 
> 
> Ok, so for now, I will just resubmit the initial patch with the commit
> comment fixes. Then, look at all the mask cleanup on top of Alexandru
> changes and prepare another patch.

Please send this as a series rather than individual patches. I'm only
queuing critical fixes at the moment (this is the merge window).
If you post the series after -rc1, I'll queue it and let it simmer
in -next.

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...

  reply	other threads:[~2021-06-29 14:25 UTC|newest]

Thread overview: 12 messages
2021-06-28 16:19 [PATCH] KVM: arm64: Disabling disabled PMU counters wastes a lot of time Alexandre Chartre
2021-06-29  9:06 ` Marc Zyngier
2021-06-29 13:16   ` Alexandre Chartre
2021-06-29 13:47     ` Marc Zyngier
2021-06-29 14:17       ` Alexandre Chartre
2021-06-29 14:25         ` Marc Zyngier [this message]
2021-06-29 14:40           ` Alexandre Chartre
2021-07-06 13:50     ` Alexandre Chartre
2021-07-06 14:52       ` Marc Zyngier
2021-07-06 15:35         ` Alexandre Chartre
2021-07-06 17:36         ` Marc Zyngier
2021-07-07 12:48           ` Alexandre Chartre
