From: Kajetan Puchalski <kajetan.puchalski@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Jian-Min Liu <jian-min.liu@mediatek.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ingo Molnar <mingo@kernel.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Vincent Donnefort <vdonnefort@google.com>,
	Quentin Perret <qperret@google.com>,
	Patrick Bellasi <patrick.bellasi@matbug.net>,
	Abhijeet Dharmapurikar <adharmap@quicinc.com>,
	Qais Yousef <qais.yousef@arm.com>,
	linux-kernel@vger.kernel.org,
	Jonathan JMChen <jonathan.jmchen@mediatek.com>
Subject: Re: [RFC PATCH 0/1] sched/pelt: Change PELT halflife at runtime
Date: Thu, 29 Sep 2022 12:10:17 +0100	[thread overview]
Message-ID: <YzV9Gejo/+DL3UjK@e126311.manchester.arm.com> (raw)
In-Reply-To: <YzVpqweg21yIn30A@hirez.programming.kicks-ass.net>

On Thu, Sep 29, 2022 at 11:47:23AM +0200, Peter Zijlstra wrote:

[...]

> Mostly I think you've demonstrated that none of this is worth it.
> 
> > -----------------------------------------------------------------------
> > 
> > HOK ... Honour Of Kings, Video game
> > FHD ... Full High Definition
> > fps ... frame per second
> > pwr ... power consumption
> > 
> > table values are in %
> 
> Oh... that's bloody insane; that's why none of it makes sense.

Hi,

We have seen similar results to the ones provided by MTK while running
Jankbench, a UI performance benchmark.

For the following tables, the pelt numbers refer to the multiplier value,
i.e. pelt_1 -> 32ms half life (the default), pelt_2 -> 16ms, pelt_4 -> 8ms.
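
For reference, here is a rough userspace sketch (not kernel code) of what
those half lives mean for ramp-up speed, using the usual closed form
util(t) ~= 1024 * (1 - 0.5^(t/halflife)) for a task running flat out from
idle; it ignores the kernel's partial-period handling:

/*
 * Approximate PELT ramp-up for the three half lives above.
 * Build with: cc pelt_ramp.c -lm
 */
#include <math.h>
#include <stdio.h>

static double pelt_util(double t_ms, double halflife_ms)
{
	return 1024.0 * (1.0 - pow(0.5, t_ms / halflife_ms));
}

int main(void)
{
	const double halflife[] = { 32.0, 16.0, 8.0 };	/* pelt_1/2/4 */

	for (int i = 0; i < 3; i++)
		printf("halflife %2.0fms: util after 8ms = %3.0f, 16ms = %3.0f, 32ms = %3.0f\n",
		       halflife[i], pelt_util(8, halflife[i]),
		       pelt_util(16, halflife[i]), pelt_util(32, halflife[i]));

	return 0;
}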

We can see the max frame durations decrease significantly as the pelt
multiplier is increased. A faster-responding pelt improves the worst
case by a large margin, which is why it can be useful in the cases where
that worst-case behaviour matters.

Max frame duration (ms)

+------------------+----------+
| kernel           |   value  |
|------------------+----------|
| pelt_1           | 157.426  |
| pelt_2           | 111.975  |
| pelt_4           | 85.2713  |
+------------------+----------+

However, this comes with a very noticeable increase in power usage, and
we have seen even bigger increases for other workloads. This is why we
think it makes much more sense as something that can be changed at
runtime - if it were set at boot time, the increase in energy
consumption would nullify any of the potential benefits. For limited
workloads or scenarios, though, the tradeoff can be worth it.

Power usage [mW]

+------------------+---------+-------------+
| kernel           |   value | perc_diff   |
|------------------+---------+-------------|
| pelt_1           |   139.9 | 0.0%        |
| pelt_2           |   146.4 | 4.62%       |
| pelt_4           |   158.5 | 13.25%      |
+------------------+---------+-------------+
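
On the runtime aspect, the general mechanism behind the multiplier is
simply to make the PELT clock advance faster than task time, so the
geometric series ramps up and decays proportionally quicker. The sketch
below only illustrates that idea; the identifiers (pelt_multiplier,
update_pelt_clock, struct pelt_clock) are made up for the example and
are not the RFC patch's actual code:

#include <stdint.h>

/* 1 -> 32ms, 2 -> 16ms, 4 -> 8ms effective half life */
static unsigned int pelt_multiplier = 1;

struct pelt_clock {
	uint64_t last_task_ns;	/* last task clock we accounted */
	uint64_t pelt_ns;	/* accelerated clock fed to the PELT decay */
};

static void update_pelt_clock(struct pelt_clock *pc, uint64_t task_clock_ns)
{
	uint64_t delta = task_clock_ns - pc->last_task_ns;

	pc->last_task_ns = task_clock_ns;
	/* Doubling the multiplier halves the effective half life. */
	pc->pelt_ns += delta * pelt_multiplier;
}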

At the same time, the average case can improve slightly as well, and the
consistency either doesn't get worse or improves a bit too.

Mean frame duration (ms)

+---------------+------------------+---------+-------------+
| variable      | kernel           |   value | perc_diff   |
|---------------+------------------+---------+-------------|
| mean_duration | pelt_1           |    14.6 | 0.0%        |
| mean_duration | pelt_2           |    13.8 | -5.43%      |
| mean_duration | pelt_4           |    14.5 | -0.58%      |
+---------------+------------------+---------+-------------+

Jank percentage

+------------+------------------+---------+-------------+
| variable   | kernel           |   value | perc_diff   |
|------------+------------------+---------+-------------|
| jank_perc  | pelt_1           |     2.1 | 0.0%        |
| jank_perc  | pelt_2           |     2.1 | 0.11%       |
| jank_perc  | pelt_4           |     2   | -3.46%      |
+------------+------------------+---------+-------------+

> How is any of that an answer to:
>
>   "They want; I want an explanation of what exact problem is fixed how ;-)"
>
> This is just random numbers showing poking the number has some effect;
> it has zero explaination of why poking the number changes the workload
> and if that is in fact the right way to go about solving that particular
> issue.

Overall, the problem being solved here is that, based on our testing,
the default PELT half life can occasionally be too slow to keep up in
scenarios where many frames need to be rendered quickly, especially on
high-refresh-rate phones and similar devices. It's not a problem most of
the time, so it doesn't warrant changing the default or fixing the value
at boot time, but the pelt multiplier would be a very useful tool for
avoiding the worst case in those limited scenarios.
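
As a rough illustration of the timescales involved (approximate numbers,
using the same closed form as the sketch earlier in this mail): at 120Hz
a frame has ~8.3ms of budget, and a task starting from idle reaches a
utilisation of only ~168 out of 1024 within one frame with the default
32ms half life, versus ~525 with an 8ms half life, so with the default
it can take several frames before the scheduler and schedutil see the
task as big enough to react.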

----
Kajetan
