From: Lukasz Luba <lukasz.luba@arm.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: linux-kernel@vger.kernel.org, rafael@kernel.org,
	linux-pm@vger.kernel.org, Dietmar.Eggemann@arm.com,
	peterz@infradead.org,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: Re: [PATCH 2/2] cpufreq: Update CPU capacity reduction in store_scaling_max_freq()
Date: Mon, 10 Oct 2022 10:02:27 +0100	[thread overview]
Message-ID: <3f9a4123-171b-5fa7-f506-341355f71483@arm.com> (raw)
In-Reply-To: <20221010053902.5rofnpzvyynumw3e@vireshk-i7>



On 10/10/22 06:39, Viresh Kumar wrote:
> Would be good to always CC Scheduler maintainers for such a patch.

Agree, I'll do that.

> 
> On 30-09-22, 10:48, Lukasz Luba wrote:
>> When the new max frequency value is stored, the task scheduler must
>> know about it. The scheduler uses the CPUs capacity information in the
>> task placement. Use the existing mechanism which provides information
>> about reduced CPU capacity to the scheduler due to thermal capping.
>>
>> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
>> ---
>>   drivers/cpufreq/cpufreq.c | 18 +++++++++++++++++-
>>   1 file changed, 17 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
>> index 1f8b93f42c76..205d9ea9c023 100644
>> --- a/drivers/cpufreq/cpufreq.c
>> +++ b/drivers/cpufreq/cpufreq.c
>> @@ -27,6 +27,7 @@
>>   #include <linux/slab.h>
>>   #include <linux/suspend.h>
>>   #include <linux/syscore_ops.h>
>> +#include <linux/thermal.h>
>>   #include <linux/tick.h>
>>   #include <linux/units.h>
>>   #include <trace/events/power.h>
>> @@ -718,6 +719,8 @@ static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy, char *buf)
>>   static ssize_t store_scaling_max_freq
>>   (struct cpufreq_policy *policy, const char *buf, size_t count)
>>   {
>> +	unsigned int frequency;
>> +	struct cpumask *cpus;
>>   	unsigned long val;
>>   	int ret;
>>   
>> @@ -726,7 +729,20 @@ static ssize_t store_scaling_max_freq
>>   		return -EINVAL;
>>   
>>   	ret = freq_qos_update_request(policy->max_freq_req, val);
>> -	return ret >= 0 ? count : ret;
>> +	if (ret >= 0) {
>> +		/*
>> +		 * Make sure that the task scheduler sees these CPUs
>> +		 * capacity reduction. Use the thermal pressure mechanism
>> +		 * to propagate this information to the scheduler.
>> +		 */
>> +		cpus = policy->related_cpus;
> 
> No need of this, just use related_cpus directly.
> 
>> +		frequency = __resolve_freq(policy, val, CPUFREQ_RELATION_HE);
>> +		arch_update_thermal_pressure(cpus, frequency);
> 
> I wonder if using the thermal-pressure API here is the right thing to
> do. It is a change coming from User, which may or may not be
> thermal-related.

Yes, I thought the same. The thermal-pressure name might not be the
best fit for this use case. I have been thinking about the thermal
pressure mechanism for a while, since there are other use cases,
like PowerCap DTPM, which also reduce CPU capacity because of a
power policy coming from user-space, and we don't notify the
scheduler about it. There might also be an issue with a virtualized
guest OS and how that kernel 'sees' the capacity of its CPUs. We
might try to use this 'thermal pressure' in the guest kernel to
report the available CPU capacity (just a proposal, not even an RFC,
since we are missing requirements, but the issues were discussed at
LPC 2022 in the ChromeOS+Android guest context).
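
(For context, below is roughly what the generic arch_topology helper
behind arch_update_thermal_pressure() does with the capped frequency.
This is a simplified sketch, not the verbatim kernel code, and
cpu_max_frequency() is a made-up placeholder for however the CPU's
maximum frequency is looked up:)

/*
 * Simplified sketch: scale the original CPU capacity by the
 * capped/max frequency ratio and publish the difference as
 * 'thermal pressure', which the scheduler then subtracts from
 * the capacity it uses for task placement.
 */
void update_thermal_pressure_sketch(const struct cpumask *cpus,
				    unsigned long capped_freq)
{
	unsigned long max_capacity, capacity, pressure;
	unsigned long max_freq;
	int cpu;

	cpu = cpumask_first(cpus);
	max_capacity = arch_scale_cpu_capacity(cpu);
	/* Placeholder: max frequency in the same unit as capped_freq */
	max_freq = cpu_max_frequency(cpu);

	if (capped_freq >= max_freq)
		capacity = max_capacity;
	else
		capacity = mult_frac(max_capacity, capped_freq, max_freq);

	pressure = max_capacity - capacity;

	/* Make the value visible to the scheduler for each capped CPU */
	for_each_cpu(cpu, cpus)
		WRITE_ONCE(per_cpu(thermal_pressure, cpu), pressure);
}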

Android middleware has had power hints (IIRC for ~4-5 releases now),
but the CPU capacity seen by the task scheduler is not aware of those
reductions either.

IMO the thermal-pressure mechanism itself is good, but its name might
need to be a bit more 'generic' to cover those other users as well.

Some proposals for a better name:
1. Performance capping
2. Capacity capping
3. Performance reduction

What do you think about changing the name so that it also covers
those two users: PowerCap DTPM and this user-space cpufreq capping?
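
Regarding the first comment about related_cpus: just to illustrate,
the updated hunk in a v2 could roughly look like the sketch below
(untested, with the local cpumask variable dropped):

static ssize_t store_scaling_max_freq
(struct cpufreq_policy *policy, const char *buf, size_t count)
{
	unsigned int frequency;
	unsigned long val;
	int ret;

	ret = sscanf(buf, "%lu", &val);
	if (ret != 1)
		return -EINVAL;

	ret = freq_qos_update_request(policy->max_freq_req, val);
	if (ret >= 0) {
		/*
		 * Make sure the task scheduler sees the reduced capacity
		 * of these CPUs. Reuse the thermal pressure mechanism to
		 * propagate the information to the scheduler.
		 */
		frequency = __resolve_freq(policy, val, CPUFREQ_RELATION_HE);
		arch_update_thermal_pressure(policy->related_cpus, frequency);

		return count;
	}

	return ret;
}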
