From: Kajetan Puchalski <kajetan.puchalski@arm.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: mingo@kernel.org, peterz@infradead.org, dietmar.eggemann@arm.com,
	qyousef@layalina.io, rafael@kernel.org, viresh.kumar@linaro.org,
	vschneid@redhat.com, linux-pm@vger.kernel.org,
	linux-kernel@vger.kernel.org, lukasz.luba@arm.com,
	wvw@google.com, xuewen.yan94@gmail.com, han.lin@mediatek.com,
	Jonathan.JMChen@mediatek.com
Subject: Re: [PATCH v2] sched/fair: unlink misfit task from cpu overutilized
Date: Fri, 13 Jan 2023 14:50:44 +0000	[thread overview]
Message-ID: <Y8FvtLGdRK8ZdOvd@e126311.manchester.arm.com> (raw)
In-Reply-To: <CAKfTPtCmDA8WPrhFc8YxFXSOPOKasvvNWA3iOmRYcC2VSyMMrw@mail.gmail.com>

> > I was testing this on a Pixel 6 with a 5.18 android-mainline kernel with

> Do you have more details to share on your setup?
> The Android kernel has some hacks on top of mainline. Do you use any?
> Also, perf and power can be heavily impacted by the cgroup
> configuration. Do you have details on your setup?

The kernel I use has all the vendor hooks and hacks switched off to
keep it as close to mainline as possible. Unfortunately 5.18 was the
last mainline version that worked on this device due to some driver
issues, so we just backport mainline scheduling patches as they come
out to at least keep the scheduler itself up to date.

> I just sent a v3 which fixes a condition. I wonder if this could have
> an impact on the results, both perf and power

I don't think it'll fix the GB5 score side of it, as that's clearly
related to overutilization, while the condition changed in v3 is inside
the non-OU section of feec(). I'll still test v3 over the weekend
if I have some free time.

The power usage issue was already introduced by the uclamp-fits-capacity
patchset that has been merged, so I doubt this change will be enough to
account for it, but I'll give it a try regardless.

> > The most likely cause of the regression seen above is the decrease in
> > the amount of time spent overutilized with these patches. Maximising
> > overutilization is the desired outcome for GB5, as the benchmark keeps
> > either 1 core or all of the cores completely saturated for almost its
> > entire duration, so EAS cannot be effective. These patches have the
> > opposite of the desired effect in this area.
> >
> > +----------------------------+--------------------+--------------------+------------+
> > |          kernel            |        time        |     total_time     | percentage |
> > +----------------------------+--------------------+--------------------+------------+
> > |          baseline          |      121.979       |      181.065       |   67.46    |
> > |        baseline_ufc        |      120.355       |      184.255       |   65.32    |
> > |        ufc_patched         |       60.715       |      196.135       |   30.98    | <-- !!!
> > +----------------------------+--------------------+--------------------+------------+
>
> I'm not surprised, because some use cases which were not overutilized
> were wrongly triggered as overutilized, switching the system back to
> performance mode. You might have to tune the uclamp value

But they'd be wrongly triggered with the 'baseline_ufc' variant, not
with the 'baseline' variant. The baseline here predates taking uclamp
into account in cpu_overutilized; all cpu_overutilized did on that
kernel was compare utilization against capacity.
Meaning that the 'real' overutilized figure would be in the ~67%
ballpark, while the patch incorrectly fails to trigger it more than
half the time. I'm not sure we can tweak uclamp enough to fix that.

> >
> > 2. Jankbench (power usage regression)
> >
> > +--------+---------------+---------------------------------+-------+-----------+
> > | metric |   variable    |             kernel              | value | perc_diff |
> > +--------+---------------+---------------------------------+-------+-----------+
> > | gmean  | mean_duration |          baseline_60hz          | 14.6  |   0.0%    |
> > | gmean  | mean_duration |        baseline_ufc_60hz        | 15.2  |   3.83%   |
> > | gmean  | mean_duration |        ufc_patched_60hz         | 14.0  |  -4.12%   |
> > +--------+---------------+---------------------------------+-------+-----------+
> >
> > +--------+-----------+---------------------------------+-------+-----------+
> > | metric | variable  |             kernel              | value | perc_diff |
> > +--------+-----------+---------------------------------+-------+-----------+
> > | gmean  | jank_perc |          baseline_60hz          |  1.9  |   0.0%    |
> > | gmean  | jank_perc |        baseline_ufc_60hz        |  2.2  |  15.39%   |
> > | gmean  | jank_perc |        ufc_patched_60hz         |  2.0  |   3.61%   |
> > +--------+-----------+---------------------------------+-------+-----------+
> >
> > +--------------+--------+---------------------------------+-------+-----------+
> > |  chan_name   | metric |             kernel              | value | perc_diff |
> > +--------------+--------+---------------------------------+-------+-----------+
> > | total_power  | gmean  |          baseline_60hz          | 135.9 |   0.0%    |
> > | total_power  | gmean  |        baseline_ufc_60hz        | 155.7 |  14.61%   | <-- !!!
> > | total_power  | gmean  |        ufc_patched_60hz         | 157.1 |  15.63%   | <-- !!!
> > +--------------+--------+---------------------------------+-------+-----------+
> >
> > With these patches, while running Jankbench we use ~15% more power
> > just to achieve roughly the same results. Here I'm not sure exactly
> > where the issue is coming from, but all the results above are very
> > consistent across different runs.
> >