linux-kernel.vger.kernel.org archive mirror
From: Thara Gopinath <thara.gopinath@linaro.org>
To: Ingo Molnar <mingo@kernel.org>
Cc: mingo@redhat.com, peterz@infradead.org, rui.zhang@intel.com,
	linux-kernel@vger.kernel.org, amit.kachhap@gmail.com,
	viresh.kumar@linaro.org, javi.merino@kernel.org,
	edubezval@gmail.com, daniel.lezcano@linaro.org,
	vincent.guittot@linaro.org, nicolas.dechesne@linaro.org,
	bjorn.andersson@linaro.org, dietmar.eggemann@arm.com
Subject: Re: [PATCH V2 0/3] Introduce Thermal Pressure
Date: Wed, 17 Apr 2019 13:18:17 -0400	[thread overview]
Message-ID: <5CB75FD9.3070207@linaro.org> (raw)
In-Reply-To: <20190417053626.GA47282@gmail.com>


On 04/17/2019 01:36 AM, Ingo Molnar wrote:
> 
> * Thara Gopinath <thara.gopinath@linaro.org> wrote:
> 
>> The test results below show a 3-5% improvement in performance when
>> using the third solution compared to the default system today, where
>> the scheduler is unaware of cpu capacity limitations due to thermal events.
> 
> The numbers look very promising!

Hello Ingo,
Thank you for the review.
> 
> I've rearranged the results to make the performance properties of the 
> various approaches and parameters easier to see:
> 
>                                          (seconds, lower is better)
> 
> 			                 Hackbench   Aobench   Dhrystone
>                                          =========   =======   =========
> Vanilla kernel (No Thermal Pressure)         10.21    141.58        1.14
> Instantaneous thermal pressure               10.16    141.63        1.15
> Thermal Pressure Averaging:
>       - PELT fmwk                             9.88    134.48        1.19
>       - non-PELT Algo. Decay : 500 ms         9.94    133.62        1.09
>       - non-PELT Algo. Decay : 250 ms         7.52    137.22        1.012
>       - non-PELT Algo. Decay : 125 ms         9.87    137.55        1.12
> 
> 
> Firstly, a couple of questions about the numbers:
> 
>    1)
> 
>       Is the 1.012 result for "non-PELT 250 msecs Dhrystone" really 1.012?
>       You reported it as:
> 
>              non-PELT Algo. Decay : 250 ms   1.012                   7.02%

It is indeed 1.012. I ran the "non-PELT Algo 250 ms" benchmarks
multiple times because of the anomalies noticed. The three-digit
formatting is an error on my part from when I copy-pasted the results
into a Google sheet I maintain to capture the test results. Sorry about
the confusion.
> 
>       But the formatting is three significant digits versus only two for 
>       all the other results.
> 
>    2)
> 
>       You reported the hackbench numbers with "10 runs" - did the other 
>       benchmarks use 10 runs as well? Maybe you used fewer runs for the 
>       longest benchmark, Aobench?
Hackbench and dhrystone are 10 runs each. Aobench is part of the
Phoronix test suite; the suite runs it six times and reports the
per-run results, mean, and stddev. On my part, I ran aobench just once
per configuration.

> 
> Secondly, it appears the non-PELT decaying average is the best approach, 
> but the results are a bit coarse around the ~250 msecs peak. Maybe it 
> would be good to measure it in 50 msecs steps between 50 msecs and 1000 
> msecs - but only if it can be scripted sanely:

non-PELT looks better overall because the test results are quite
comparable (if not better) between the two solutions, and it addresses
the concern people raised when I posted V1 using the PELT-fmwk algo
regarding reusing the utilization signal to track thermal pressure.

Regarding the decay period, I agree that more testing can be done. I
like your suggestions below and I am going to try implementing them
sometime next week. Once I have some solid results, I will send them out.

My concern with getting hung up too much on the decay period is that I
think it could vary from SoC to SoC depending on the type and number of
cores and the thermal characteristics. So I was thinking the decay
period should eventually be configurable via a config option or by some
other means. Testing on different systems will definitely help; maybe I
am wrong and there is not much variation between systems.
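To make the tunable concrete, here is a minimal model of the idea (in
Python, purely illustrative; the kernel code is not structured like
this) of averaging capped capacity with an exponential decay whose
half-life is the configurable decay period:

```python
# Illustrative model (not the kernel implementation): thermal "pressure"
# is the cpu capacity lost to a thermal cap, folded into an average that
# decays with a configurable half-life (e.g. 125/250/500 ms).

def decayed_pressure(samples, decay_ms, tick_ms=1):
    """Fold per-tick capped-capacity samples into one decayed average.

    samples  -- capacity lost at each tick (0 when uncapped)
    decay_ms -- half-life of the decay in milliseconds
    tick_ms  -- sampling period in milliseconds
    """
    factor = 0.5 ** (tick_ms / decay_ms)   # per-tick decay multiplier
    avg = 0.0
    for lost in samples:
        avg = avg * factor + lost * (1.0 - factor)
    return avg
```

With a short period the signal tracks instantaneous capping closely but
forgets past throttling quickly; with a long period it reacts slowly
but remembers longer, which is why the sweet spot can plausibly differ
between SoCs.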

Regards
Thara

> 
> A possible approach would be to add a debug sysctl for the tuning period, 
> and script all these benchmark runs and the printing of the results. You 
> could add another (debug) sysctl to turn the 'instant' logic on, and to 
> restore vanilla kernel behavior as well - this makes it all much easier 
> to script and measure with a single kernel image, without having to 
> reboot the kernel. The sysctl overhead will not be measurable for 
> workloads like this.
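Such a sweep is easy to script once that sysctl exists. A sketch of the
50 ms-step sweep (the sysctl path below is hypothetical, made up purely
for illustration):

```python
# Sketch of the suggested scripted sweep: step the decay period from
# 50 ms to 1000 ms in 50 ms steps via a debug sysctl, running each
# benchmark under "perf stat --null --repeat".
SYSCTL = "/proc/sys/kernel/sched_thermal_decay_ms"   # hypothetical path

def sweep_commands(benchmark="./hackbench 20", repeat=10):
    """Yield (decay_ms, shell command) pairs covering the whole sweep."""
    for decay_ms in range(50, 1001, 50):
        cmd = (f"echo {decay_ms} > {SYSCTL} && "
               f"perf stat --null --sync --repeat {repeat} --table "
               f"{benchmark}")
        yield decay_ms, cmd
```

Each generated command line can then be run (as root) and its output
collected, all on a single kernel image without rebooting.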
> 
> Then you can use "perf stat --null --table" to measure runtime and stddev 
> easily and with a single tool, for example:
> 
>   dagon:~> perf stat --null --sync --repeat 10 --table ./hackbench 20 >benchmark.out
> 
>   Performance counter stats for './hackbench 20' (10 runs):
> 
>            # Table of individual measurements:
>            0.15246 (-0.03960) ######
>            0.20832 (+0.01627) ##
>            0.17895 (-0.01310) ##
>            0.19791 (+0.00585) #
>            0.19209 (+0.00004) #
>            0.19406 (+0.00201) #
>            0.22484 (+0.03278) ###
>            0.18695 (-0.00511) #
>            0.19032 (-0.00174) #
>            0.19464 (+0.00259) #
> 
>            # Final result:
>            0.19205 +- 0.00592 seconds time elapsed  ( +-  3.08% )
> 
> Note how all the individual measurements can be captured this way, 
> without seeing the benchmark output itself. So different benchmarks can 
> be measured this way, assuming they don't have too long setup time.
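For reference, perf's summary line can be reproduced from the
individual measurements above; the "+-" it prints is the standard error
of the mean (sample stddev / sqrt(n)), not the raw stddev:

```python
# Recompute perf stat's "Final result" line from the ten individual
# hackbench measurements listed in the perf output above.
import math
import statistics

runs = [0.15246, 0.20832, 0.17895, 0.19791, 0.19209,
        0.19406, 0.22484, 0.18695, 0.19032, 0.19464]

mean = statistics.mean(runs)
sem = statistics.stdev(runs) / math.sqrt(len(runs))   # stdev() uses n-1

print(f"{mean:.5f} +- {sem:.5f} seconds time elapsed  ( +- {100 * sem / mean:.2f}% )")
```

This reproduces the "0.19205 +- 0.00592 ( +- 3.08% )" summary shown in
the quoted perf output.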
> 
> Thanks,
> 
> 	Ingo
> 


-- 
Regards
Thara


Thread overview: 43+ messages
2019-04-16 19:38 [PATCH V2 0/3] Introduce Thermal Pressure Thara Gopinath
2019-04-16 19:38 ` [PATCH V2 1/3] Calculate " Thara Gopinath
2019-04-18 10:14   ` Quentin Perret
2019-04-24  4:13     ` Thara Gopinath
2019-04-24 16:38   ` Peter Zijlstra
2019-04-24 16:45   ` Peter Zijlstra
2019-04-25 10:57   ` Quentin Perret
2019-04-25 12:45     ` Vincent Guittot
2019-04-25 12:47       ` Quentin Perret
2019-04-26 14:17       ` Thara Gopinath
2019-05-08 12:41         ` Quentin Perret
2019-04-16 19:38 ` [PATCH V2 2/3] sched/fair: update cpu_capcity to reflect thermal pressure Thara Gopinath
2019-04-16 19:38 ` [PATCH V3 3/3] thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping Thara Gopinath
2019-04-18  9:48   ` Quentin Perret
2019-04-23 22:38     ` Thara Gopinath
2019-04-24 15:56       ` Ionela Voinescu
2019-04-26 10:24         ` Thara Gopinath
2019-04-25 10:45       ` Quentin Perret
2019-04-25 12:04         ` Vincent Guittot
2019-04-25 12:50           ` Quentin Perret
2019-04-26 13:47         ` Thara Gopinath
2019-04-24 16:47   ` Peter Zijlstra
2019-04-17  5:36 ` [PATCH V2 0/3] Introduce Thermal Pressure Ingo Molnar
2019-04-17  5:55   ` Ingo Molnar
2019-04-17 17:28     ` Thara Gopinath
2019-04-17 17:18   ` Thara Gopinath [this message]
2019-04-17 18:29     ` Ingo Molnar
2019-04-18  0:07       ` Thara Gopinath
2019-04-18  9:22       ` Quentin Perret
2019-04-24 16:34       ` Peter Zijlstra
2019-04-25 17:33         ` Ingo Molnar
2019-04-25 17:44           ` Ingo Molnar
2019-04-26  7:08             ` Vincent Guittot
2019-04-26  8:35               ` Ingo Molnar
2019-04-24 15:57 ` Ionela Voinescu
2019-04-26 11:50   ` Thara Gopinath
2019-04-26 14:46     ` Ionela Voinescu
2019-04-29 13:29 ` Ionela Voinescu
2019-04-30 14:39   ` Ionela Voinescu
2019-04-30 16:10     ` Thara Gopinath
2019-05-02 10:44       ` Ionela Voinescu
2019-04-30 15:57   ` Thara Gopinath
2019-04-30 16:02     ` Thara Gopinath
