From: Qais Yousef <qais.yousef@arm.com>
To: YT Chang <yt.chang@mediatek.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Luis Chamberlain <mcgrof@kernel.org>,
	Kees Cook <keescook@chromium.org>,
	Iurii Zaikin <yzaikin@google.com>, Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Matthias Brugger <matthias.bgg@gmail.com>,
	Paul Turner <pjt@google.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-mediatek@lists.infradead.org, wsd_upstream@mediatek.com
Subject: Re: [PATCH 1/1] sched: Add tunable capacity margin for fis_capacity
Date: Fri, 18 Jun 2021 18:14:50 +0100	[thread overview]
Message-ID: <20210618171450.c5tgggydukcmap5v@e107158-lin.cambridge.arm.com> (raw)
In-Reply-To: <1623855954-6970-1-git-send-email-yt.chang@mediatek.com>

Hi YT Chang

Thanks for the patch.

On 06/16/21 23:05, YT Chang wrote:
> Currently, the margins for cpu frequency raising and for marking a cpu
> overutilized are hard-coded as 25% (1280/1024). Make the margin tunable

The way I see cpu overutilized is that we check whether utilization has gone
above 80% of the cpu's capacity.
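
To make that concrete, this is roughly what the check boils down to in
kernel/sched/fair.c around v5.13 (the small wrapper below is only my
illustration, it is not existing kernel code):

	/*
	 * The margin used when comparing utilization with CPU capacity.
	 * util * 1280 < capacity * 1024 is the same as util < 0.8 * capacity,
	 * i.e. utilization still "fits" while it stays below ~80% of the
	 * capacity; the overutilized and misfit-task checks are built on this.
	 */
	#define fits_capacity(cap, max)	((cap) * 1280 < (max) * 1024)

	/* Illustration only: a util of 700 does not fit a capacity-512 cpu. */
	static inline bool util_fits_cpu_example(unsigned long util,
						 unsigned long cpu_capacity)
	{
		return fits_capacity(util, cpu_capacity);
	}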

> to control how aggressive task placement and frequency scaling are. For
> example, a power tuning framework could set a smaller margin to slow down
> frequency ramp-up and let tasks stay on smaller cpus.
> 
> For lightly loaded scenarios, like Beach Buggy Blitz and messaging apps,
> the app threads are moved to big cores because of the 25% margin, causing
> unnecessary power consumption.
> With a 0% capacity margin (1024/1024), the app threads can be kept on the
> little cores and deliver better power results without any fps drop.
> 
> capacity margin       0%            10%            20%            30%
>                   Fps  current  Fps    current  Fps    current  Fps  current
>                        (mA)            (mA)            (mA)           (mA)
> Beach Buggy Blitz  60  198.164  60     203.211  60     209.984  60   213.374
> Yahoo browser      60  232.301  59.97  237.52   59.95  248.213  60   262.809
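
Just to map the margins above to the equivalent thresholds (plain arithmetic,
threshold = 1 / (1 + margin), not something taken from the patch):

	 0% margin -> factor  1024/1024 -> util stops fitting above 100% of capacity
	10% margin -> factor ~1126/1024 -> above ~91%
	20% margin -> factor ~1229/1024 -> above ~83%
	25% margin -> factor  1280/1024 -> above  80%  (the current hard-coded value)
	30% margin -> factor ~1331/1024 -> above ~77%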
> 
> Change-Id: Iba48c556ed1b73c9a2699e9e809bc7d9333dc004
> Signed-off-by: YT Chang <yt.chang@mediatek.com>
> ---

We are aware of the cpu overutilized value not being adequate on some modern
platforms, but I haven't considered or seen any issues with the frequency-side
margin. So the latter is an interesting one.
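
For reference, the frequency-side margin is the 1.25 factor schedutil applies
when mapping utilization to a frequency. At the time of writing it looks
roughly like this (include/linux/sched/cpufreq.h, used by get_next_freq() in
kernel/sched/cpufreq_schedutil.c):

	/*
	 * next_freq ~= 1.25 * max_freq * util / max_capacity
	 * (freq >> 2 is freq / 4, i.e. the extra 25% headroom -- the frequency
	 * twin of the 1280/1024 placement factor).
	 */
	static inline unsigned long map_util_freq(unsigned long util,
						  unsigned long freq,
						  unsigned long cap)
	{
		return (freq + (freq >> 2)) * util / cap;
	}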

I like your patch, but sadly I can't agree with it either.

The dilemma is that there are several options forward based on what we've seen
vendors do/want:

	1. Modify the margin to be smaller for high end SoCs and larger for
	   lower end ones, which is what your patch allows.
	2. Some vendors have a per cluster (perf domain) value, so within the
	   same SoC a different margin is used for each capacity level.
	3. Some vendors have asymmetric margins: one margin to move a task up
	   and a different one to move it back down (a rough sketch of what
	   2 and 3 could look like follows below).
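
To be explicit about what 2 and 3 would mean, here is a purely hypothetical,
kernel-style sketch -- none of these names exist anywhere, it is only meant to
show a per perf-domain value with separate up/down margins:

	/* Hypothetical: one instance per perf domain, fixed point, 1024 == 0%. */
	struct pd_capacity_margin {
		unsigned int up;	/* e.g. 1280: 25% headroom before moving a task up */
		unsigned int down;	/* e.g. 1138: ~11% headroom before moving it down  */
	};

	/* Would replace the single global 1280 factor in fits_capacity(). */
	static bool fits_capacity_pd(unsigned long util, unsigned long cap,
				     const struct pd_capacity_margin *m,
				     bool migrating_up)
	{
		unsigned int margin = migrating_up ? m->up : m->down;

		return util * margin < cap * 1024;
	}

Where such values would come from (and who maintains them) is exactly the kind
of question that makes me hesitant to commit to the simplest ABI now.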

We're still not sure which approach is the best way forward.

Your patch allows 1, but if it turns out that option 2 or 3 is better, the ABI
will make it hard to change.

Have you considered all these options? Do you have any data to help show that
option 1 is enough, at least for the range of platforms you work with?

We were also considering whether we could have smarter logic to automagically
set a better value for the platform, but there are no concrete suggestions yet.

So while I agree that the current one-size-fits-all margin value is no longer
suitable, the variation in hardware and the possible approaches we could take
need more careful thinking and consideration before committing to an ABI.

This patch is a good start for this discussion :)


Thanks

--
Qais Yousef

Thread overview: 10+ messages

2021-06-16 15:05 [PATCH 1/1] sched: Add tunable capacity margin for fis_capacity YT Chang
2021-06-16 15:22 ` Vincent Guittot
2021-06-17 17:56 ` kernel test robot
2021-06-18 17:14 ` Qais Yousef [this message]
