From: Lukasz Luba <lukasz.luba@arm.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	linux-kernel@vger.kernel.org, rafael@kernel.org,
	linux-pm@vger.kernel.org, Dietmar.Eggemann@arm.com,
	peterz@infradead.org
Subject: Re: [PATCH 2/2] cpufreq: Update CPU capacity reduction in store_scaling_max_freq()
Date: Mon, 10 Oct 2022 10:30:49 +0100	[thread overview]
Message-ID: <8a7968c2-dbf7-5316-ef36-6d45143e0605@arm.com> (raw)
In-Reply-To: <CAKfTPtBPqcTm5_-M_Ka3y46yQ2322TmH8KS-QyDbAiKk5B6hEQ@mail.gmail.com>



On 10/10/22 10:15, Vincent Guittot wrote:
> On Mon, 10 Oct 2022 at 11:02, Lukasz Luba <lukasz.luba@arm.com> wrote:
>>
>>
>>
>> On 10/10/22 06:39, Viresh Kumar wrote:
>>> Would be good to always CC Scheduler maintainers for such a patch.
>>
>> Agree, I'll do that.
>>
>>>
>>> On 30-09-22, 10:48, Lukasz Luba wrote:
>>>> When the new max frequency value is stored, the task scheduler must
>>>> know about it. The scheduler uses the CPUs' capacity information for
>>>> task placement. Use the existing mechanism which informs the
>>>> scheduler about CPU capacity reduced due to thermal capping.
>>>>
>>>> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
>>>> ---
>>>>    drivers/cpufreq/cpufreq.c | 18 +++++++++++++++++-
>>>>    1 file changed, 17 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
>>>> index 1f8b93f42c76..205d9ea9c023 100644
>>>> --- a/drivers/cpufreq/cpufreq.c
>>>> +++ b/drivers/cpufreq/cpufreq.c
>>>> @@ -27,6 +27,7 @@
>>>>    #include <linux/slab.h>
>>>>    #include <linux/suspend.h>
>>>>    #include <linux/syscore_ops.h>
>>>> +#include <linux/thermal.h>
>>>>    #include <linux/tick.h>
>>>>    #include <linux/units.h>
>>>>    #include <trace/events/power.h>
>>>> @@ -718,6 +719,8 @@ static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy, char *buf)
>>>>    static ssize_t store_scaling_max_freq
>>>>    (struct cpufreq_policy *policy, const char *buf, size_t count)
>>>>    {
>>>> +    unsigned int frequency;
>>>> +    struct cpumask *cpus;
>>>>       unsigned long val;
>>>>       int ret;
>>>>
>>>> @@ -726,7 +729,20 @@ static ssize_t store_scaling_max_freq
>>>>               return -EINVAL;
>>>>
>>>>       ret = freq_qos_update_request(policy->max_freq_req, val);
>>>> -    return ret >= 0 ? count : ret;
>>>> +    if (ret >= 0) {
>>>> +            /*
>>>> +             * Make sure that the task scheduler sees the capacity
>>>> +             * reduction of these CPUs. Use the thermal pressure
>>>> +             * mechanism to propagate this information to the
>>>> +             * scheduler.
>>>> +             */
>>>> +            cpus = policy->related_cpus;
>>>
>>> No need for this, just use related_cpus directly.
>>>
>>>> +            frequency = __resolve_freq(policy, val, CPUFREQ_RELATION_HE);
>>>> +            arch_update_thermal_pressure(cpus, frequency);
>>>
>>> I wonder if using the thermal-pressure API here is the right thing
>>> to do. It is a change coming from user space, which may or may not
>>> be thermal-related.
>>
>> Yes, I thought the same. The thermal-pressure name might not be the
>> best fit for this use case. I have been thinking about this
>> thermal pressure mechanism for a while, since there are other
>> use cases, like PowerCap DTPM, which also reduce CPU capacity
>> because of a power policy set from user-space. We don't notify
>> the scheduler about it. There might also be an issue with a virtual
>> guest OS and how that kernel 'sees' the capacity of its CPUs.
>> We might try to use this 'thermal-pressure' in the guest kernel
>> to notify it about the available CPU capacity (just a proposal, not
>> even an RFC, since we are missing requirements, but the issues were
>> discussed at LPC 2022 for ChromeOS+Android_guest).
> 
> The user-space setting of scaling_max_freq is a long-timescale event
> and it should be considered as a new running environment instead of
> a transient event. I would suggest updating the EM and the capacity
> orig of the system in this case. Similarly, we rebuild the
> sched_domains on a cpu hotplug. The scaling_max_freq interface
> should not be used to do any kind of dynamic scaling.

I tend to agree, but the EM capacity would only be used in part of the
EAS code. The whole fair.c view of capacity_of() (max capacity minus
the RT, DL, irq and thermal_pressure contributions) would still be
wrong in other parts, e.g. select_idle_sibling() and load balancing.
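
To make that concrete, this is roughly how fair.c derives the per-CPU
capacity that capacity_of() returns (a simplified paraphrase of
scale_rt_capacity() from around v6.0, not the verbatim kernel code):

/*
 * Simplified paraphrase of kernel/sched/fair.c:scale_rt_capacity().
 * The result feeds capacity_of(), which load balancing and
 * select_idle_sibling() consume; thermal pressure is subtracted
 * here, which is why the patch reuses that signal.
 */
static unsigned long scale_rt_capacity(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long max = arch_scale_cpu_capacity(cpu);
	unsigned long used, free, irq;

	irq = cpu_util_irq(rq);
	if (unlikely(irq >= max))
		return 1;

	used = READ_ONCE(rq->avg_rt.util_avg);	/* RT pressure */
	used += READ_ONCE(rq->avg_dl.util_avg);	/* DL pressure */
	used += thermal_load_avg(rq);		/* thermal pressure */
	if (unlikely(used >= max))
		return 1;

	free = max - used;

	/* scale what remains by the time stolen by irqs */
	return scale_irq_capacity(free, irq, max);
}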

When we get this power hint we might already be in an overutilized
state, so EAS is disabled. IMO other mechanisms in the task scheduler
should also be aware of that capacity reduction.
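
For illustration only, a minimal sketch of what your suggestion could
look like; update_capacity_orig() and the orig_cap per-CPU variable
are hypothetical, and a real implementation would also have to update
the EM and rebuild the sched domains:

/* hypothetical cache of the un-capped per-CPU capacity */
static DEFINE_PER_CPU(unsigned long, orig_cap);

/*
 * Hypothetical sketch, not an actual kernel API: rescale the CPUs'
 * original capacity when user space lowers scaling_max_freq. The
 * pristine (un-capped) capacity is cached per CPU so that repeated
 * sysfs writes do not compound.
 */
static void update_capacity_orig(struct cpufreq_policy *policy,
				 unsigned int new_max_khz)
{
	unsigned long cap;
	int cpu;

	for_each_cpu(cpu, policy->related_cpus) {
		cap = per_cpu(orig_cap, cpu) * new_max_khz /
		      policy->cpuinfo.max_freq;
		topology_set_cpu_scale(cpu, cap);
	}

	/* An EM update and sched domain rebuild would follow here. */
}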
