From: Ionela Voinescu <ionela.voinescu@arm.com>
To: Thara Gopinath <thara.gopinath@linaro.org>,
	mingo@redhat.com, peterz@infradead.org, rui.zhang@intel.com
Cc: linux-kernel@vger.kernel.org, amit.kachhap@gmail.com,
	viresh.kumar@linaro.org, javi.merino@kernel.org,
	edubezval@gmail.com, daniel.lezcano@linaro.org,
	vincent.guittot@linaro.org, nicolas.dechesne@linaro.org,
	bjorn.andersson@linaro.org, dietmar.eggemann@arm.com
Subject: Re: [PATCH V2 0/3] Introduce Thermal Pressure
Date: Thu, 2 May 2019 11:44:57 +0100
Message-ID: <632321a8-d7f0-49a6-9577-95fac4c87b1c@arm.com>
In-Reply-To: <5CC87362.6080307@linaro.org>

Hi Thara,

>> After cleaning it up I'm getting results around 5.6s for this test case.
>> I've run 50 iterations for each test, with 90s cool down period between
>> them.
>>
>>
>>  			Hackbench: (1 group , 30000 loops, 50 runs)
>>  				Result            Standard Deviation
>>  				(Time Secs)        (% of mean)
>>
>>  No Thermal Pressure(step_wise)  5.644                   7.760%
>>  No Thermal Pressure(IPA)        5.677                   9.062%
>>
>>  Thermal Pressure Averaging
>>  non-PELT Algo. Decay : 250 ms   5.627                   5.593%
>>  (step-wise, bigs capped only)
>>
>>  Thermal Pressure Averaging
>>  non-PELT Algo. Decay : 250 ms   5.690                   3.738%
>>  (IPA)
>>
>> All of the results above are within 1.1% of each other, with standard
>> deviations significantly larger than that difference.
> 
> Hi Ionela,
> 
> I have replied to your original emails without seeing this one. So,
> interesting results. I see IPA is (slightly) worse off than step-wise in
> both the thermal pressure and non-thermal pressure scenarios. Did you try
> a 500 ms decay period by any chance?
>

I don't think we can draw a conclusion on that given how close the
results are and given the high standard deviation. Probably if I run
them again the tables will be turned :).

I have not run experiments with different decay periods yet, as I first
want to put together a list of experiments that are relevant for thermal
pressure and that can later help refine the solution, whether that means
settling on a decay period or possibly going with instantaneous thermal
pressure. Please find more details below.
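Just to make the comparison concrete, below is a small stand-alone sketch
of the two alternatives I mean. It is not the patch's code: the averaging
is a plain first-order low-pass filter standing in for the non-PELT
algorithm, and the capacity value, 10ms update period and 250ms decay
period in it are made-up numbers used only for illustration.

#include <stdio.h>

#define MAX_CAPACITY	1024	/* assumed maximum capacity of a big CPU */

/* instantaneous: pressure is simply the capacity lost to capping */
static int thermal_pressure_inst(int capped_capacity)
{
	return MAX_CAPACITY - capped_capacity;
}

/*
 * averaged: first-order low-pass of the instantaneous value, with old
 * history fading over roughly 'decay_ms'
 */
static double thermal_pressure_avg(double avg, int sample, int delta_ms,
				   int decay_ms)
{
	return avg + (sample - avg) * delta_ms / decay_ms;
}

int main(void)
{
	int capped_capacity = 400;	/* bigs capped, e.g. to a low OPP */
	double avg = 0.0;
	int t;

	for (t = 10; t <= 1000; t += 10) {
		int inst = thermal_pressure_inst(capped_capacity);

		avg = thermal_pressure_avg(avg, inst, 10, 250);
		if (t % 250 == 0)
			printf("t=%4dms inst=%d avg=%.0f\n", t, inst, avg);
	}
	return 0;
}

The averaged value only ramps towards the instantaneous one over a few
decay periods, and it is exactly that lag whose usefulness (and time
scale) I'd like the test cases to expose.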

>>
>> I wanted to run this initially to validate my setup and understand
>> whether there is any conclusion we can draw from a test like this, which
>> floods the CPUs with tasks. Looking over the traces, the tasks are
>> running almost back to back, trying to use all available resources,
>> on all the CPUs.
>> Therefore, I doubt that better decisions could be made for this use case,
>> even with knowledge of thermal pressure.
>>
>> I'll try next some capacity inversion usecase and post the results when
>> they are ready.
> 

I've started looking into this, starting from the most obvious case of
capacity inversion: using the user-space thermal governor and capping
the bigs to their lowest OPP. The LITTLEs are left uncapped.

This was not enough on the Hikey960, as the bigs at their lowest OPP were
still within the capacity margin of the LITTLEs at their highest OPP. That
meant the LITTLEs would not pull tasks from the bigs, even though the
LITTLEs now had higher capacity, because the difference was within the 25%
margin. So the other change I made was to set the capacity margin in
fair.c to 10%.
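To illustrate that second change: assuming the margin in this tree still
comes from the capacity_margin variable in kernel/sched/fair.c
(1280/1024), the tweak amounts to the sketch below; the replacement value
is only the rough figure I mean by 10%:

/* kernel/sched/fair.c, sketch of the margin change only */

/*
 * The margin used when comparing utilization with CPU capacity.
 * Mainline has 1280 (1280/1024); roughly 10% means something in the
 * region of 1126 (1024 * 1.1).
 */
static unsigned int capacity_margin = 1126;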

I've run both sysbench and dhrystone. I'll only include the sysbench
results here, with and without thermal pressure (TP and !TP) interleaved.
As before, the TP solution uses averaging with a 250ms decay period.

               			Sysbench: (500000 req, 4 runs)
  				Result            Standard Deviation
  				(Time Secs)        (% of mean)

  !TP/4 threads                   146.46          0.063%
  TP/4 threads                    136.36          0.002%

  !TP/5 threads                   115.38          0.028%
  TP/5 threads                    110.62          0.006%

  !TP/6 threads                   95.38           0.051%
  TP/6 threads                    93.07           0.054%

  !TP/7 threads                   81.19           0.012%
  TP/7 threads                    80.32           0.028%

  !TP/8 threads                   72.58           2.295%
  TP/8 threads                    71.37           0.044%

As expected, the results improve significantly when the scheduler is made
aware of the reduced capacity of the bigs: tasks are then placed on, or
migrated to, the LITTLEs, which can provide better performance here. The
traces confirm this nicely.
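For scale, the 4-thread case improves by (146.46 - 136.36) / 146.46, i.e.
roughly 6.9%, narrowing to about 1.7% at 8 threads, presumably because at
that point all eight CPUs are kept busy regardless of placement.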

Note that these results only show that reflecting thermal pressure in the
capacity of the CPUs is useful and that the scheduler is equipped to make
proper use of this information.
Whether or not to reduce the default capacity margin is possibly also
worth considering, but that's a discussion for another time.

These results do not reflect the benefits of averaging, however: with the
bigs always capped to their lowest OPP, the thermal pressure value is
constant for the duration of the workload, so the same results would have
been obtained with instantaneous thermal pressure.
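For instance (numbers purely illustrative), if capping pins the bigs at
40% of their maximum capacity, the pressure signal is a flat
1024 - 410 = 614 for the whole run, and any decayed average of a flat
signal simply settles at that same value after a few decay periods.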


Secondly, I've tried using the step-wise governor, modified to only cap
the big CPUs, with the intention of obtaining shorter periods of capacity
inversion, for which a thermal pressure solution should show its benefits.

Unfortunately, dhrystone was misbehaving for some reason and was giving
me a high variation between results for the same test case. Also,
sysbench, run with the same arguments as above, did not create enough
load, and therefore enough thermal capping, to show the benefits of
considering thermal pressure.

So my recommendation is to continue exploring more test cases like these.
I would stick with sysbench, as it looks more stable, but modify the
temperature threshold so as to trigger periods of drastic capping of the
bigs. Once a dynamic test case and setup like this (no fixed frequencies)
is identified, it can be used to understand whether averaging is needed,
to refine the decay period, and to establish a good default.

What do you think? Does this make sense as a direction for obtaining test
cases? In my opinion the previous test cases were not triggering the
behaviors that would help prove the need for thermal pressure, or help
refine it.

I will try to continue in this direction, but I won't be able to get to
it for a few days.

You'll find more results at: 
https://docs.google.com/spreadsheets/d/1ibxDSSSLTodLzihNAw6jM36eVZABuPMMnjvV-Xh4NEo/edit?usp=sharing


> Sure. let me know if I can help.

Any test results or recommendations for test cases would be helpful.
The need for thermal pressure is obvious, but the way thermal pressure is
reflected in the capacity of the CPUs would benefit from more thorough
testing.

Regards,
Ionela.

> 
> Regards
> Thara
> 
>>
>> Hope it helps,
>> Ionela.
>>
>>
>>> Thank you,
>>> Ionela.
>>>
> 
> 
