* RE: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
@ 2018-10-27  6:37 Doug Smythies
  2018-10-30  7:19 ` Rafael J. Wysocki
  0 siblings, 1 reply; 11+ messages in thread
From: Doug Smythies @ 2018-10-27  6:37 UTC (permalink / raw)
  To: 'Rafael J. Wysocki'
  Cc: 'Srinivas Pandruvada', 'Peter Zijlstra',
	'LKML', 'Frederic Weisbecker',
	'Mel Gorman', 'Giovanni Gherdovich',
	'Daniel Lezcano', 'Linux PM',
	Doug Smythies

This is just for anybody else trying to compile:

On 2018.10.26 02:12 Rafael J. Wysocki wrote:

> The venerable menu governor does some thigns that are quite

Typo: thigns -> things

...[snip]...

> The patch should apply on top of 4.19, although I'm running it on
> top of my linux-next branch.

No, it uses "poll_time_limit" which was introduced in patch 1 of 6
[1] in that group of menu changes from October 2nd.

"[PATCH 1/6] cpuidle: menu: Fix wakeup statistics updates for polling state"

... Doug

[1] https://lkml.org/lkml/2018/10/3/42




* Re: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
  2018-10-27  6:37 [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems Doug Smythies
@ 2018-10-30  7:19 ` Rafael J. Wysocki
  0 siblings, 0 replies; 11+ messages in thread
From: Rafael J. Wysocki @ 2018-10-30  7:19 UTC (permalink / raw)
  To: Doug Smythies
  Cc: 'Srinivas Pandruvada', 'Peter Zijlstra',
	'LKML', 'Frederic Weisbecker',
	'Mel Gorman', 'Giovanni Gherdovich',
	'Daniel Lezcano', 'Linux PM'

On Saturday, October 27, 2018 8:37:24 AM CET Doug Smythies wrote:
> This is just for anybody else trying to compile:
> 
> On 2018.10.26 02:12 Rafael J. Wysocki wrote:
> 
> > The venerable menu governor does some thigns that are quite
> 
> Typo: thigns -> things
> 
> ...[snip]...
> 
> > The patch should apply on top of 4.19, although I'm running it on
> > top of my linux-next branch.
> 
> No, it uses "poll_time_limit" which was introduced in patch 1 of 6
> [1] in that group of menu changes from October 2nd.
> 
> "[PATCH 1/6] cpuidle: menu: Fix wakeup statistics updates for polling state"

Right, sorry for missing that.

Thanks,
Rafael



* RE: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
  2018-11-04 10:06   ` Rafael J. Wysocki
  2018-11-05 19:14     ` Giovanni Gherdovich
@ 2018-11-05 22:09     ` Doug Smythies
  1 sibling, 0 replies; 11+ messages in thread
From: Doug Smythies @ 2018-11-05 22:09 UTC (permalink / raw)
  To: 'Giovanni Gherdovich'
  Cc: 'Linux PM', 'Srinivas Pandruvada',
	'Peter Zijlstra', 'LKML',
	'Frederic Weisbecker', 'Mel Gorman',
	'Daniel Lezcano', 'Rafael J. Wysocki',
	Doug Smythies

On 2018.11.05 11:14 Giovanni Gherdovich wrote:
> On Sun, 2018-11-04 at 11:06 +0100, Rafael J. Wysocki wrote:
>>
>> You can use the cpu_idle trace point to correlate the selected state index
>> with the observed idle duration (that's what Doug did IIUC).
>
> True, that works; although I ended up slapping a tracepoint right at the
> beginning of the teo_update() and capturing the variables
> cpu_data->last_state, dev->last_residency and dev->cpu.
>
> I should have some plots to share soon. I really wanted to do in-kernel
> histograms with systemtap as opposed to collecting data with ftrace and doing
> post-processing, because I noticed that the latter approach generates lots of
> events and wakeups from idle on the cpu that handles the ftrace data. It's
> kind of a workload in itself and spoils the results.

I agree that we need to be careful that the act of acquiring diagnostic
data does not influence the system we are trying to measure.

I did not find much, if any, effect from acquiring trace data during the
12 client dbench test. Regardless, I run the exact same test in the exact
same way on both the baseline reference kernel and the test kernel. To be
clear, I mean no effect while the trace samples are actually being acquired.
Obviously there is a significant effect when the samples are eventually
written out to disk, but at that point I don't care.

For tests where I am also acquiring long term idle statistics over many
hours, I never run a trace at the same time, and I only sample the system
once per minute. For those test scenarios, when a trace is required for
greater detail, it is done as an independent step. But yes, for my tests
with a very high rate of idle state 0 entries/exits per unit time, enabling
trace has a very significant effect on the system under test, and I haven't
figured out a way around that. For example, in the test where ~6 gigabytes
of trace data were collected in 2 minutes, the cost was a ~25% performance
drop (https://marc.info/?l=linux-pm&m=153897853630373&w=2).
For comparison, the 12 client Phoronix dbench test trace on kernel 4.20-rc1
(baseline reference for TEO V3 tests) was only 199 Megabytes in 10 minutes.

... Doug




* RE: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
  2018-11-02 15:39 Doug Smythies
  2018-11-04 10:06 ` Rafael J. Wysocki
  2018-11-05 19:11 ` Giovanni Gherdovich
@ 2018-11-05 21:28 ` Doug Smythies
  2 siblings, 0 replies; 11+ messages in thread
From: Doug Smythies @ 2018-11-05 21:28 UTC (permalink / raw)
  To: 'Giovanni Gherdovich'
  Cc: 'Srinivas Pandruvada', 'Peter Zijlstra',
	'LKML', 'Frederic Weisbecker',
	'Mel Gorman', 'Daniel Lezcano',
	'Linux PM', 'Rafael J. Wysocki',
	Doug Smythies

On 2018.11.05 11:12 Giovanni Gherdovich wrote:
> On Fri, 2018-11-02 at 08:39 -0700, Doug Smythies wrote:
> ...[snip]...
>> 
>> After reading Giovanni's reply the other day, I tried the
>> Phoronix dbench test: 12 clients resulted in similar performance,
>> But TEOv2 used a little less processor package power; 256 clients
>> had about -7% performance using TEOv2, but (my numbers are not
>> exact) also used less processor package power.
>
> Uhm, I see. The results I've got vary between machines; that could
> depend on the CPU type.

Agreed.

> What is your machine processor model, 
> and how many logical cores does it have?

Sorry, I had meant to include that in my original e-mail.
My test server has an older i7-2600K processor.
It has 4 cores, and 8 CPUs.

> For the record, in my previous email I wrote that my script runs dbench with
> up to NUMCPUS*8 clients, but that's misleading; indeed for the 48-cores
> machines I had runs with 1, 2, 4, 8, 16, 32 and 64 clients.
> https://lore.kernel.org/lkml/1541010981.3423.2.camel@suse.cz/
>
> The sequence is generated with
>
>    CLIENT=1
>    DBENCH_MAX_CLIENTS=$((NUMCPUS*8))
>
>    while [ $CLIENT -le $DBENCH_MAX_CLIENTS ]; do
>
>            ./bin/dbench [...] $CLIENT
>
>            if [ $CLIENT -lt $NUMCPUS ]; then
>                    CLIENT=$((CLIENT*2))
>            else
>                    CLIENT=$((CLIENT*8))
>            fi
>    done
>
> In practice the max number of clients I get is slightly below NUMCPUS*2 to
> reach saturation. I write this as I read you ran it with 256 clients but I
> never went that high.

I agree that my system is extremely overloaded and unresponsive while
running the Phoronix dbench test with 256 clients. However, I did it
because it gives a rather high number of idle state 0 entries/exits
per unit time.
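
(A crude way to watch that rate directly is the cumulative "usage" counter
of state0 in sysfs; a sketch, with an arbitrary one-second interval:)

    # state0 is the POLL state; "usage" is a cumulative entry count, so the
    # per-interval delta highlighted by "watch -d" reflects the entry rate.
    watch -d -n 1 'cat /sys/devices/system/cpu/cpu*/cpuidle/state0/usage'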

>> 
>> On 2018.10.31 11:36 Giovanni Gherdovich wrote:
>> 
>>> Something I'd like to do now is verify that "teo"'s predictions
>>> are better than "menu"'s; I'll probably use systemtap to make
>>> some histograms of idle times versus what idle state was chosen
>>> -- that'd be enough to compare the two.
>> 
>> I don't know what a "systemtap" is, but I have (crude) tools to
>> post process trace data into histograms data. I did 5 minute
>> traces during the 12 client Phoronix dbench test and plotted
>> the results, [1]. Sometimes, to the right of the autoscaled
>> graph is another with fixed scaling. Better grouping of idle
>> durations with TEOv2 is clearly visible.
>> 
>> ... Doug
>> 
>> [1] http://fast.smythies.com/linux-pm/k419p/histo_compare.htm
>
> Oh, that's interesting, thanks. Can you post the break-even residency times and
> exit latencies for your CPUs? On my Skylake test machine I get this from sysfs:
>
> $ cd /sys/devices/system/cpu/cpu0/cpuidle
> $ for state in * ; do
> echo -e \
> "STATE: $state\t\
> DESC: $(cat $state/desc)\t\
> NAME: $(cat $state/name)\t\
> LATENCY: $(cat $state/latency)\t\
> RESIDENCY: $(cat $state/residency)"
> done
>
> STATE: state0   DESC: CPUIDLE CORE POLL IDLE    NAME: POLL      LATENCY: 0      RESIDENCY: 0
> STATE: state1   DESC: MWAIT 0x00        NAME: C1        LATENCY: 2      RESIDENCY: 2
> STATE: state2   DESC: MWAIT 0x01        NAME: C1E       LATENCY: 10     RESIDENCY: 20
> STATE: state3   DESC: MWAIT 0x10        NAME: C3        LATENCY: 70     RESIDENCY: 100
> STATE: state4   DESC: MWAIT 0x20        NAME: C6        LATENCY: 85     RESIDENCY: 200
> STATE: state5   DESC: MWAIT 0x33        NAME: C7s       LATENCY: 124    RESIDENCY: 800
> STATE: state6   DESC: MWAIT 0x40        NAME: C8        LATENCY: 200    RESIDENCY: 800

Sorry again, I had meant to include that in my original e-mail also,
along with the fact that it is a 1000 Hz kernel (which should be evident
from looking at the graphs). Anyway, using your above command on my system:

STATE: state0   DESC: CPUIDLE CORE POLL IDLE    NAME: POLL      LATENCY: 0      RESIDENCY: 0
STATE: state1   DESC: MWAIT 0x00        NAME: C1        LATENCY: 2      RESIDENCY: 2
STATE: state2   DESC: MWAIT 0x01        NAME: C1E       LATENCY: 10     RESIDENCY: 20
STATE: state3   DESC: MWAIT 0x10        NAME: C3        LATENCY: 80     RESIDENCY: 211
STATE: state4   DESC: MWAIT 0x20        NAME: C6        LATENCY: 104    RESIDENCY: 345

... Doug




* Re: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
  2018-11-04 10:06   ` Rafael J. Wysocki
@ 2018-11-05 19:14     ` Giovanni Gherdovich
  2018-11-05 22:09     ` Doug Smythies
  1 sibling, 0 replies; 11+ messages in thread
From: Giovanni Gherdovich @ 2018-11-05 19:14 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux PM, Srinivas Pandruvada, Peter Zijlstra, LKML,
	Frederic Weisbecker, Mel Gorman, Doug Smythies, Daniel Lezcano

On Sun, 2018-11-04 at 11:06 +0100, Rafael J. Wysocki wrote:
> On Wednesday, October 31, 2018 7:36:21 PM CET Giovanni Gherdovich wrote:
>
> [...]
> You can use the cpu_idle trace point to correlate the selected state index
> with the observed idle duration (that's what Doug did IIUC).

True, that works; although I ended up slapping a tracepoint right at the
beginning of the teo_update() and capturing the variables
cpu_data->last_state, dev->last_residency and dev->cpu.

I should have some plots to share soon. I really wanted to do in-kernel
histograms with systemtap as opposed to collecting data with ftrace and doing
post-processing, because I noticed that the latter approach generates lots of
events and wakeups from idle on the cpu that handles the ftrace data. It's
kind of a workload in itself and spoils the results.

> 
> Then, if the observed idle duration is between the target residency of the
> selected state and the target residency of the next one, the selected state
> is adequate and that's what we care about really.
> 
> If the observed idle duration is below the target residency of the selected
> state, the selected state is too deep and if it is above (or equal to) the
> target residency of the next state, it is too shallow.

Thanks for explaining this.

> 
> > After that it would be nice to somehow know where timers came from; i.e. if
> > I see that residences in a given state are consistently shorter than
> > they're supposed to be, it would be interesting to see who set the timer
> > that causes the wakeup. But... I'm not sure to know how to do that :) Do
> > you have a strategy to track down the origin of timers/interrupts? Is there
> > any script you're using to evaluate teo that you can share?
> 
> I need to think about that TBH.
> 
> The information that we can get readily should give us quite a good idea of
> what happens on average, though, so let's first do that and then try to dig
> deeper if need be.
> 
> I think that the difference between the v1 and v2 of the TEO governor comes
> mostly from the way in which they handle patterns of "early" wakeups.  The
> method used in v1 is very crude (and arguably invalid in general) and it
> will cause shallow states to be selected more often, while the v2 tries to
> be more "intelligent", but it may be overly conservative with that.
> 
> I'm working on a v3 that will try to address the above ATM, but I'd like to run
> it on my systems first (I'm going back home from a conference right now).
>

I've seen v3, I'll send you the test results ASAP.

Giovanni


* Re: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
  2018-11-02 15:39 Doug Smythies
  2018-11-04 10:06 ` Rafael J. Wysocki
@ 2018-11-05 19:11 ` Giovanni Gherdovich
  2018-11-05 21:28 ` Doug Smythies
  2 siblings, 0 replies; 11+ messages in thread
From: Giovanni Gherdovich @ 2018-11-05 19:11 UTC (permalink / raw)
  To: Doug Smythies, 'Rafael J. Wysocki'
  Cc: 'Srinivas Pandruvada', 'Peter Zijlstra',
	'LKML', 'Frederic Weisbecker',
	'Mel Gorman', 'Daniel Lezcano',
	'Linux PM'

On Fri, 2018-11-02 at 08:39 -0700, Doug Smythies wrote:
> 
> I have been testing this V2 against a baseline that includes all
> of the pending menu patches. My baseline kernel is somewhere
> after 4.19, at 345671e.
> 
> A side note:
> Recall that with the menu patch set tests, I found that the baseline
> reference performance for the pipe test on one core had changed
> significantly (worse - Kernel 4.19-rc1). Well, now it has changed
> significantly again (better, and even significantly better than it
> was for 4.18). 4.18 ~4.8 uSec/loop; 4.19 ~5.2 uSec/loop; 4.19+
> (345671e) 4.2 uSec/loop.
> 
> This V2 is pretty good. All of the tests that I run gave similar
> performance and power use between the baseline reference and V2.
> I couldn't find any issues with the decay stuff, and I tried.
> (sorry, I didn't do pretty graphs.)
> 
> After reading Giovanni's reply the other day, I tried the
> Phoronix dbench test: 12 clients resulted in similar performance,
> But TEOv2 used a little less processor package power; 256 clients
> had about -7% performance using TEOv2, but (my numbers are not
> exact) also used less processor package power.

Uhm, I see. The results I've got vary between machines; that could
depend on the CPU type. What is your machine processor model (or
microarchitecture, see the search box at the website https://ark.intel.com ),
and how many logical cores does it have?

For the record, in my previous email I wrote that my script runs dbench with
up to NUMCPUS*8 clients, but that's misleading; indeed for the 48-cores
machines I had runs with 1, 2, 4, 8, 16, 32 and 64 clients.
https://lore.kernel.org/lkml/1541010981.3423.2.camel@suse.cz/

The sequence is generated with

    CLIENT=1
    DBENCH_MAX_CLIENTS=$((NUMCPUS*8))

    while [ $CLIENT -le $DBENCH_MAX_CLIENTS ]; do

            ./bin/dbench [...] $CLIENT

            if [ $CLIENT -lt $NUMCPUS ]; then
                    CLIENT=$((CLIENT*2))
            else
                    CLIENT=$((CLIENT*8))
            fi
    done

In practice the max number of clients I get is slightly below NUMCPUS*2 to
reach saturation. I write this as I read you ran it with 256 clients but I
never went that high.

> 
> On 2018.10.31 11:36 Giovanni Gherdovich wrote:
> 
> > Something I'd like to do now is verify that "teo"'s predictions
> > are better than "menu"'s; I'll probably use systemtap to make
> > some histograms of idle times versus what idle state was chosen
> > -- that'd be enough to compare the two.
> 
> I don't know what a "systemtap" is, but I have (crude) tools to
> post process trace data into histograms data. I did 5 minute
> traces during the 12 client Phoronix dbench test and plotted
> the results, [1]. Sometimes, to the right of the autoscaled
> graph is another with fixed scaling. Better grouping of idle
> durations with TEOv2 is clearly visible.
> 
> ... Doug
> 
> [1] http://fast.smythies.com/linux-pm/k419p/histo_compare.htm

Oh, that's interesting, thanks. Can you post the break-even residency times and
exit latencies for your CPUs? On my Skylake test machine I get this from sysfs:

$ cd /sys/devices/system/cpu/cpu0/cpuidle
$ for state in * ; do
echo -e \
"STATE: $state\t\
DESC: $(cat $state/desc)\t\
NAME: $(cat $state/name)\t\
LATENCY: $(cat $state/latency)\t\
RESIDENCY: $(cat $state/residency)"
done

STATE: state0   DESC: CPUIDLE CORE POLL IDLE    NAME: POLL      LATENCY: 0      RESIDENCY: 0
STATE: state1   DESC: MWAIT 0x00        NAME: C1        LATENCY: 2      RESIDENCY: 2
STATE: state2   DESC: MWAIT 0x01        NAME: C1E       LATENCY: 10     RESIDENCY: 20
STATE: state3   DESC: MWAIT 0x10        NAME: C3        LATENCY: 70     RESIDENCY: 100
STATE: state4   DESC: MWAIT 0x20        NAME: C6        LATENCY: 85     RESIDENCY: 200
STATE: state5   DESC: MWAIT 0x33        NAME: C7s       LATENCY: 124    RESIDENCY: 800
STATE: state6   DESC: MWAIT 0x40        NAME: C8        LATENCY: 200    RESIDENCY: 800

At the bottom of the email at
https://lore.kernel.org/lkml/4168371.zz0pVZtGOY@aspire.rjw.lan/
Rafael explains how the sysfs residencies are important to understand the
histograms.

Thanks,
Giovanni


* Re: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
  2018-11-02 15:39 Doug Smythies
@ 2018-11-04 10:06 ` Rafael J. Wysocki
  2018-11-05 19:11 ` Giovanni Gherdovich
  2018-11-05 21:28 ` Doug Smythies
  2 siblings, 0 replies; 11+ messages in thread
From: Rafael J. Wysocki @ 2018-11-04 10:06 UTC (permalink / raw)
  To: Doug Smythies
  Cc: 'Giovanni Gherdovich', 'Srinivas Pandruvada',
	'Peter Zijlstra', 'LKML',
	'Frederic Weisbecker', 'Mel Gorman',
	'Daniel Lezcano', 'Linux PM'

On Friday, November 2, 2018 4:39:42 PM CET Doug Smythies wrote:
> On 2018.10.26 02:12 Rafael J. Wysocki wrote:
> 
> ...[snip]...

Again, thanks a lot for the feedback, it is appreciated very much!

> > The v2 is a re-write of major parts of the original patch.
> >
> > The approach is the same in general, but the details have changed significantly
> > with respect to the previous version.  In particular:
> > * The decay of the idle state metrics is implemented differently.
> > * There is a more "clever" pattern detection (sort of along the lines
> >   of what the menu does, but simplified quite a bit and trying to avoid
> >   including timer wakeups).
> > * The "promotion" from the "polling" state is gone.
> > * The "safety net" wakeups are treated as the CPU might have been idle
> >   until the closest timer.
> 
> ...[snip]...
> 
> I have been testing this V2 against a baseline that includes all
> of the pending menu patches. My baseline kernel is somewhere
> after 4.19, at 345671e.
> 
> A side note:
> Recall that with the menu patch set tests, I found that the baseline
> reference performance for the pipe test on one core had changed
> significantly (worse - Kernel 4.19-rc1). Well, now it has changed
> significantly again (better, and even significantly better than it
> was for 4.18). 4.18 ~4.8 uSec/loop; 4.19 ~5.2 uSec/loop; 4.19+
> (345671e) 4.2 uSec/loop.
> 
> This V2 is pretty good.

That's awesome!

> All of the tests that I run gave similar
> performance and power use between the baseline reference and V2.
> I couldn't find any issues with the decay stuff, and I tried.
> (sorry, I didn't do pretty graphs.)
> 
> After reading Giovanni's reply the other day, I tried the
> Phoronix dbench test: 12 clients resulted in similar performance,
> but TEOv2 used a little less processor package power; 256 clients
> had about -7% performance using TEOv2, but (my numbers are not
> exact) also used less processor package power.

Good to know, thank you!

> On 2018.10.31 11:36 Giovanni Gherdovich wrote:
> 
> > Something I'd like to do now is verify that "teo"'s predictions
> > are better than "menu"'s; I'll probably use systemtap to make
> > some histograms of idle times versus what idle state was chosen
> > -- that'd be enough to compare the two.
> 
> I don't know what a "systemtap" is, but I have (crude) tools to
> post process trace data into histograms data. I did 5 minute
> traces during the 12 client Phoronix dbench test and plotted
> the results, [1]. Sometimes, to the right of the autoscaled
> graph is another with fixed scaling. Better grouping of idle
> durations with TEOv2 is clearly visible.
> 
> ... Doug
> 
> [1] http://fast.smythies.com/linux-pm/k419p/histo_compare.htm

Thanks for the graphs.  At least they show the consistent underestimation of
the idle duration in menu if I'm not mistaken.

Cheers,
Rafael



* Re: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
  2018-10-31 18:36 ` Giovanni Gherdovich
@ 2018-11-04 10:06   ` Rafael J. Wysocki
  2018-11-05 19:14     ` Giovanni Gherdovich
  2018-11-05 22:09     ` Doug Smythies
  0 siblings, 2 replies; 11+ messages in thread
From: Rafael J. Wysocki @ 2018-11-04 10:06 UTC (permalink / raw)
  To: Giovanni Gherdovich
  Cc: Linux PM, Srinivas Pandruvada, Peter Zijlstra, LKML,
	Frederic Weisbecker, Mel Gorman, Doug Smythies, Daniel Lezcano

On Wednesday, October 31, 2018 7:36:21 PM CET Giovanni Gherdovich wrote:
> On Fri, 2018-10-26 at 11:12 +0200, Rafael J. Wysocki wrote:
> > From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

[cut]

> 
> Hello Rafael,

Hi Giovanni,

First off, many thanks for doing this work, it is very very much appreciated!

> your new governor has a neutral impact on performance, as you expected. This is
> a positive result, since the purpose of "teo" is to give improved
> predictions on idle times without regressing on the performance side.

Right.

> There are swings here and there but nothing looks extremely bad. v2 is largely
> equivalent to v1 in my tests, except for sockperf and netperf on the
> Haswell machine (v2 slightly worse) and tbench on the Skylake machine
> (again v2 slightly worse).

Thanks for the data.

I have some ideas on what may be the difference between the v1 and the v2 on
these machines, more about that below.

> I've tested your patches applying them on v4.18 (plus the backport
> necessary for v2 as Doug helpfully noted), just because it was the latest
> release when I started preparing this.
> 
> I've tested it on three machines, with different generations of Intel CPUs:
> 
> * single socket E3-1240 v5 (Skylake 8 cores, which I'll call 8x-SKYLAKE-UMA)
> * two sockets E5-2698 v4 (Broadwell 80 cores, 80x-BROADWELL-NUMA from here onwards)
> * two sockets E5-2670 v3 (Haswell 48 cores, 48x-HASWELL-NUMA from here onwards)
> 
> 
> BENCHMARKS WITH NEUTRAL RESULTS
> ===============================
> 
> These are the workloads where no noticeable difference is measured (on both
> v1 and v2, all machines), together with the corresponding MMTests[1]
> configuration file name:
> 
> * pgbench read-only on xfs, pgbench read/write on xfs
> 	* global-dhp__db-pgbench-timed-ro-small-xfs
> 	* global-dhp__db-pgbench-timed-rw-small-xfs
> * siege
> 	* global-dhp__http-siege
> * hackbench, pipetest
> 	* global-dhp__scheduler-unbound
> * Linux kernel compilation
> 	* global-dhp__workload_kerndevel-xfs
> * NASA Parallel Benchmarks, C-Class (linear algebra; run both with OpenMP
>   and OpenMPI, over xfs)
> 	* global-dhp__nas-c-class-mpi-full-xfs
> 	* global-dhp__nas-c-class-omp-full
> * FIO (Flexible IO) in several configurations
> 	* global-dhp__io-fio-randread-async-randwrite-xfs
> 	* global-dhp__io-fio-randread-async-seqwrite-xfs
> 	* global-dhp__io-fio-seqread-doublemem-32k-4t-xfs
> 	* global-dhp__io-fio-seqread-doublemem-4k-4t-xfs
> * netperf on loopback over TCP
> 	* global-dhp__network-netperf-unbound

The above is great to know.

> BENCHMARKS WITH NON-NEUTRAL RESULTS: OVERVIEW
> =============================================
> 
> These are benchmarks which exhibit a variation in their performance;
> you'll see the magnitude of the changes is moderate and it's highly variable
> from machine to machine. All percentages refer to the v4.18 baseline. In
> more than one case the Haswell machine seems to prefer v1 to v2.
> 
> * xfsrepair
> 	* global-dhp__io-xfsrepair-xfs
> 
> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		2% worse	2% worse
> 		80x-BROADWELL-NUMA	1% worse	1% worse
> 		48x-HASWELL-NUMA	1% worse	1% worse
> 
> * sqlite (insert operations on xfs)
> 	* global-dhp__db-sqlite-insert-medium-xfs
> 
> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		no change	no change
> 		80x-BROADWELL-NUMA	2% worse	3% worse
> 		48x-HASWELL-NUMA	no change	no change
> 
> * netperf on loopback over UDP
> 	* global-dhp__network-netperf-unbound
> 
> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		no change	6% worse
> 		80x-BROADWELL-NUMA	1% worse	4% worse
> 		48x-HASWELL-NUMA	3% better	5% worse
> 
> * sockperf on loopback over TCP, mode "under load"
> 	* global-dhp__network-sockperf-unbound
> 
> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		6% worse	no change
> 		80x-BROADWELL-NUMA	7% better	no change
> 		48x-HASWELL-NUMA	3% better	2% worse
> 
> * sockperf on loopback over UDP, mode "throughput"
> 	* global-dhp__network-sockperf-unbound

Generally speaking, I'm not worried about single-digit percent differences,
because overall they tend to fall into the noise range in the grand picture.

> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		1% worse	1% worse
> 		80x-BROADWELL-NUMA	3% better	2% better
> 		48x-HASWELL-NUMA	4% better	12% worse

But the 12% difference here is slightly worrisome.

> * sockperf on loopback over UDP, mode "under load"
> 	* global-dhp__network-sockperf-unbound
> 
> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		3% worse	1% worse
> 		80x-BROADWELL-NUMA	10% better	8% better
> 		48x-HASWELL-NUMA	1% better	no change
> 
> * dbench on xfs
>         * global-dhp__io-dbench4-async-xfs
> 
> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		3% better	4% better
> 		80x-BROADWELL-NUMA	no change	no change
> 		48x-HASWELL-NUMA	6% worse	16% worse

And same here.

> * tbench on loopback
> 	* global-dhp__network-tbench
> 
> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		1% worse	10% worse
> 		80x-BROADWELL-NUMA	1% worse	1% worse
> 		48x-HASWELL-NUMA	1% worse	2% worse
> 
> * schbench
> 	* global-dhp__workload_schbench
> 
> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		1% better	no change
> 		80x-BROADWELL-NUMA	2% worse	1% worse
> 		48x-HASWELL-NUMA	2% worse	3% worse
> 
> * gitsource on xfs (git unit tests, shell intensive)
> 	* global-dhp__workload_shellscripts-xfs
> 
> 					teo-v1		teo-v2
> 		-------------------------------------------------
> 		8x-SKYLAKE-UMA		no change	no change
> 		80x-BROADWELL-NUMA	no change	1% better
> 		48x-HASWELL-NUMA	no change	1% better
> 
> 
> BENCHMARKS WITH NON-NEUTRAL RESULTS: DETAIL
> ===========================================
> 
> Now some more detail. Each benchmark is run in a variety of configurations
> (eg. number of threads, number of concurrent connections and so forth) each
> of them giving a result. What you see above is the geometric mean of
> "sub-results"; below is the detailed view where there was a regression
> larger than 5% (either in v1 or v2, on any of the machines). That means
> I'll exclude xfsrepair, sqlite, schbench and the git unit tests "gitsource"
> that have negligible swings from the baseline.
> 
> In all tables asterisks indicate a statement about statistical
> significance: the difference with baseline has a p-value smaller than 0.1
> (small p-values indicate that the difference is real and not just random
> noise).
> 
> NETPERF-UDP
> ===========
> NOTES: Test run in mode "stream" over UDP. The varying parameter is the
>     message size in bytes. Each measurement is taken 5 times and the
>     harmonic mean is reported.
> MEASURES: Throughput in MBits/second, both on the sender and on the receiver end.
> HIGHER is better
> 
> machine: 8x-SKYLAKE-UMA
>                                      4.18.0                 4.18.0                 4.18.0
>                                     vanilla                 teo-v1        teo-v2+backport
> -----------------------------------------------------------------------------------------
> Hmean     send-64         362.27 (   0.00%)      362.87 (   0.16%)      318.85 * -11.99%*
> Hmean     send-128        723.17 (   0.00%)      723.66 (   0.07%)      660.96 *  -8.60%*
> Hmean     send-256       1435.24 (   0.00%)     1427.08 (  -0.57%)     1346.22 *  -6.20%*
> Hmean     send-1024      5563.78 (   0.00%)     5529.90 *  -0.61%*     5228.28 *  -6.03%*
> Hmean     send-2048     10935.42 (   0.00%)    10809.66 *  -1.15%*    10521.14 *  -3.79%*
> Hmean     send-3312     16898.66 (   0.00%)    16539.89 *  -2.12%*    16240.87 *  -3.89%*
> Hmean     send-4096     19354.33 (   0.00%)    19185.43 (  -0.87%)    18600.52 *  -3.89%*
> Hmean     send-8192     32238.80 (   0.00%)    32275.57 (   0.11%)    29850.62 *  -7.41%*
> Hmean     send-16384    48146.75 (   0.00%)    49297.23 *   2.39%*    48295.51 (   0.31%)
> Hmean     recv-64         362.16 (   0.00%)      362.87 (   0.19%)      318.82 * -11.97%*
> Hmean     recv-128        723.01 (   0.00%)      723.66 (   0.09%)      660.89 *  -8.59%*
> Hmean     recv-256       1435.06 (   0.00%)     1426.94 (  -0.57%)     1346.07 *  -6.20%*
> Hmean     recv-1024      5562.68 (   0.00%)     5529.90 *  -0.59%*     5228.28 *  -6.01%*
> Hmean     recv-2048     10934.36 (   0.00%)    10809.66 *  -1.14%*    10519.89 *  -3.79%*
> Hmean     recv-3312     16898.65 (   0.00%)    16538.21 *  -2.13%*    16240.86 *  -3.89%*
> Hmean     recv-4096     19351.99 (   0.00%)    19183.17 (  -0.87%)    18598.33 *  -3.89%*
> Hmean     recv-8192     32238.74 (   0.00%)    32275.13 (   0.11%)    29850.39 *  -7.41%*
> Hmean     recv-16384    48146.59 (   0.00%)    49296.23 *   2.39%*    48295.03 (   0.31%)

That is a bit worse than I would like it to be TBH.

> SOCKPERF-TCP-UNDER-LOAD
> =======================
> NOTES: Test run in mode "under load" over TCP. Parameters are message size
>     and transmission rate.
> MEASURES: Round-trip time in microseconds
> LOWER is better
> 
> machine: 8x-SKYLAKE-UMA
>                                                  4.18.0                 4.18.0                 4.18.0
>                                                 vanilla                 teo-v1        teo-v2+backport
> -----------------------------------------------------------------------------------------------------
> Amean        size-14-rate-10000        36.43 (   0.00%)       36.86 (  -1.17%)       20.24 (  44.44%)
> Amean        size-14-rate-24000        17.78 (   0.00%)       17.71 (   0.36%)       18.54 (  -4.29%)
> Amean        size-14-rate-50000        20.53 (   0.00%)       22.29 (  -8.58%)       16.16 (  21.30%)
> Amean        size-100-rate-10000       21.22 (   0.00%)       23.41 ( -10.35%)       33.04 ( -55.73%)
> Amean        size-100-rate-24000       17.81 (   0.00%)       21.09 ( -18.40%)       14.39 (  19.18%)
> Amean        size-100-rate-50000       12.31 (   0.00%)       19.65 ( -59.64%)       15.11 ( -22.77%)
> Amean        size-300-rate-10000       34.21 (   0.00%)       35.30 (  -3.19%)       34.20 (   0.05%)
> Amean        size-300-rate-24000       24.52 (   0.00%)       26.00 (  -6.04%)       27.42 ( -11.81%)
> Amean        size-300-rate-50000       20.20 (   0.00%)       20.39 (  -0.95%)       17.83 (  11.73%)
> Amean        size-500-rate-10000       21.56 (   0.00%)       21.31 (   1.15%)       29.32 ( -35.98%)
> Amean        size-500-rate-24000       30.58 (   0.00%)       27.41 (  10.38%)       27.21 (  11.03%)
> Amean        size-500-rate-50000       19.46 (   0.00%)       22.48 ( -15.55%)       16.29 (  16.30%)
> Amean        size-850-rate-10000       35.89 (   0.00%)       35.56 (   0.91%)       23.84 (  33.57%)
> Amean        size-850-rate-24000       29.11 (   0.00%)       28.18 (   3.20%)       17.44 (  40.08%)
> Amean        size-850-rate-50000       13.55 (   0.00%)       18.05 ( -33.26%)       21.30 ( -57.20%)

IMO there is too much variation here to draw any meaningful conclusions from it.

> SOCKPERF-UDP-THROUGHPUT
> =======================
> NOTES: Test run in mode "throughput" over UDP. The varying parameter is the
>     message size.
> MEASURES: Throughput, in MBits/second
> HIGHER is better
> 
> machine: 48x-HASWELL-NUMA
>                               4.18.0                 4.18.0                 4.18.0
>                              vanilla                 teo-v1        teo-v2+backport
> ----------------------------------------------------------------------------------
> Hmean     14        48.16 (   0.00%)       50.94 *   5.77%*       42.50 * -11.77%*
> Hmean     100      346.77 (   0.00%)      358.74 *   3.45%*      303.31 * -12.53%*
> Hmean     300     1018.06 (   0.00%)     1053.75 *   3.51%*      895.55 * -12.03%*
> Hmean     500     1693.07 (   0.00%)     1754.62 *   3.64%*     1489.61 * -12.02%*
> Hmean     850     2853.04 (   0.00%)     2948.73 *   3.35%*     2473.50 * -13.30%*

Well, in this case the consistent improvement in v1 turned into a consistent decline
in the v2, and over 10% for that matter.  Needs improvement IMO.

> DBENCH4
> =======
> NOTES: asynchronous IO; varies the number of clients up to NUMCPUS*8.
> MEASURES: latency (millisecs)
> LOWER is better
> 
> machine: 48x-HASWELL-NUMA
>                               4.18.0                 4.18.0                 4.18.0
>                              vanilla                 teo-v1        teo-v2+backport
> ----------------------------------------------------------------------------------
> Amean      1        37.15 (   0.00%)       50.10 ( -34.86%)       39.02 (  -5.03%)
> Amean      2        43.75 (   0.00%)       45.50 (  -4.01%)       44.36 (  -1.39%)
> Amean      4        54.42 (   0.00%)       58.85 (  -8.15%)       58.17 (  -6.89%)
> Amean      8        75.72 (   0.00%)       74.25 (   1.94%)       82.76 (  -9.30%)
> Amean      16      116.56 (   0.00%)      119.88 (  -2.85%)      164.14 ( -40.82%)
> Amean      32      570.02 (   0.00%)      561.92 (   1.42%)      681.94 ( -19.63%)
> Amean      64     3185.20 (   0.00%)     3291.80 (  -3.35%)     4337.43 ( -36.17%)

This one too.

> TBENCH4
> =======
> NOTES: networking counterpart of dbench. Varies the number of clients up to NUMCPUS*4
> MEASURES: Throughput, MB/sec
> HIGHER is better
> 
> machine: 8x-SKYLAKE-UMA
>                                     4.18.0                 4.18.0                 4.18.0
>                                    vanilla                    teo        teo-v2+backport
> ----------------------------------------------------------------------------------------
> Hmean     mb/sec-1       620.52 (   0.00%)      613.98 *  -1.05%*      502.47 * -19.03%*
> Hmean     mb/sec-2      1179.05 (   0.00%)     1112.84 *  -5.62%*      820.57 * -30.40%*
> Hmean     mb/sec-4      2072.29 (   0.00%)     2040.55 *  -1.53%*     2036.11 *  -1.75%*
> Hmean     mb/sec-8      4238.96 (   0.00%)     4205.01 *  -0.80%*     4124.59 *  -2.70%*
> Hmean     mb/sec-16     3515.96 (   0.00%)     3536.23 *   0.58%*     3500.02 *  -0.45%*
> Hmean     mb/sec-32     3452.92 (   0.00%)     3448.94 *  -0.12%*     3428.08 *  -0.72%*
> 

And same here.

> [1] https://github.com/gormanm/mmtests
> 
> 
> Happy to answer any questions on the benchmarks or the methods used to
> collect/report data.
> 
> Something I'd like to do now is verify that "teo"'s predictions are better
> than "menu"'s; I'll probably use systemtap to make some histograms of idle
> times versus what idle state was chosen -- that'd be enough to compare the
> two.

You can use the cpu_idle trace point to correlate the selected state index
with the observed idle duration (that's what Doug did IIUC).

Then, if the observed idle duration is between the target residency of the
selected state and the target residency of the next one, the selected state
is adequate and that's what we care about really.

If the observed idle duration is below the target residency of the selected
state, the selected state is too deep and if it is above (or equal to) the
target residency of the next state, it is too shallow.
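
(For reference, a minimal sketch of gathering that data with ftrace,
assuming the standard tracefs layout; the output path and the 60 second
window are only examples, and the classification against the target
residencies reported by sysfs is then done in post-processing:)

    cd /sys/kernel/debug/tracing            # or /sys/kernel/tracing
    echo 0 > tracing_on
    echo > trace                            # clear the ring buffer
    echo 1 > events/power/cpu_idle/enable
    echo 1 > tracing_on
    sleep 60                                # run the workload of interest meanwhile
    echo 0 > tracing_on
    cp trace /tmp/cpu_idle.trace            # state=<index> on idle entry,
                                            # state=4294967295 on idle exit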

> After that it would be nice to somehow know where timers came from; i.e. if
> I see that residences in a given state are consistently shorter than
> they're supposed to be, it would be interesting to see who set the timer
> that causes the wakeup. But... I'm not sure to know how to do that :) Do
> you have a strategy to track down the origin of timers/interrupts? Is there
> any script you're using to evaluate teo that you can share?

I need to think about that TBH.

The information that we can get readily should give us quite a good idea of
what happens on average, though, so let's first do that and then try to dig
deeper if need be.
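
(If digging deeper does become necessary, one possible avenue, offered only
as an untested sketch: the timer:timer_start and timer:hrtimer_start
tracepoints record the callback function that armed each timer, which may
help attribute subsequent timer wakeups:)

    cd /sys/kernel/debug/tracing
    echo 1 > events/timer/timer_start/enable     # logs "function=<callback>" per armed timer
    echo 1 > events/timer/hrtimer_start/enable   # same idea for hrtimers
    echo 1 > events/power/cpu_idle/enable        # to correlate with idle entries/exits
    echo 1 > tracing_on; sleep 10; echo 0 > tracing_on
    grep -E 'timer_start|hrtimer_start' trace | head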

I think that the difference between the v1 and v2 of the TEO governor comes
mostly from the way in which they handle patterns of "early" wakeups.  The
method used in v1 is very crude (and arguably invalid in general) and it
will cause shallow states to be selected more often, while the v2 tries to
be more "intelligent", but it may be overly conservative with that.

I'm working on a v3 that will try to address the above ATM, but I'd like to run
it on my systems first (I'm going back home from a conference right now).

Cheers,
Rafael



* RE: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
@ 2018-11-02 15:39 Doug Smythies
  2018-11-04 10:06 ` Rafael J. Wysocki
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Doug Smythies @ 2018-11-02 15:39 UTC (permalink / raw)
  To: 'Rafael J. Wysocki', 'Giovanni Gherdovich'
  Cc: 'Srinivas Pandruvada', 'Peter Zijlstra',
	'LKML', 'Frederic Weisbecker',
	'Mel Gorman', 'Daniel Lezcano',
	Doug Smythies, 'Linux PM'

On 2018.10.26 02:12 Rafael J. Wysocki wrote:

...[snip]...

> The v2 is a re-write of major parts of the original patch.
>
> The approach is the same in general, but the details have changed significantly
> with respect to the previous version.  In particular:
> * The decay of the idle state metrics is implemented differently.
> * There is a more "clever" pattern detection (sort of along the lines
>   of what the menu does, but simplified quite a bit and trying to avoid
>   including timer wakeups).
> * The "promotion" from the "polling" state is gone.
> * The "safety net" wakeups are treated as the CPU might have been idle
>   until the closest timer.

...[snip]...

I have been testing this V2 against a baseline that includes all
of the pending menu patches. My baseline kernel is somewhere
after 4.19, at 345671e.

A side note:
Recall that with the menu patch set tests, I found that the baseline
reference performance for the pipe test on one core had changed
significantly (worse - Kernel 4.19-rc1). Well, now it has changed
significantly again (better, and even significantly better than it
was for 4.18). 4.18 ~4.8 uSec/loop; 4.19 ~5.2 uSec/loop; 4.19+
(345671e) 4.2 uSec/loop.

This V2 is pretty good. All of the tests that I run gave similar
performance and power use between the baseline reference and V2.
I couldn't find any issues with the decay stuff, and I tried.
(sorry, I didn't do pretty graphs.)

After reading Giovanni's reply the other day, I tried the
Phoronix dbench test: 12 clients resulted in similar performance,
but TEOv2 used a little less processor package power; 256 clients
had about -7% performance using TEOv2, but (my numbers are not
exact) also used less processor package power.

On 2018.10.31 11:36 Giovanni Gherdovich wrote:

> Something I'd like to do now is verify that "teo"'s predictions
> are better than "menu"'s; I'll probably use systemtap to make
> some histograms of idle times versus what idle state was chosen
> -- that'd be enough to compare the two.

I don't know what a "systemtap" is, but I have (crude) tools to
post process trace data into histograms data. I did 5 minute
traces during the 12 client Phoronix dbench test and plotted
the results, [1]. Sometimes, to the right of the autoscaled
graph is another with fixed scaling. Better grouping of idle
durations with TEOv2 is clearly visible.

... Doug

[1] http://fast.smythies.com/linux-pm/k419p/histo_compare.htm
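
(A rough sketch of that kind of post-processing, not Doug's actual tool: it
assumes a power:cpu_idle trace in the default ftrace text format saved to
/tmp/cpu_idle.trace, pairs each idle entry with the following exit per CPU,
and buckets the resulting idle durations by powers of two microseconds, per
selected state:)

    awk '
    /cpu_idle:/ {
        # the timestamp is the field that looks like "1234.567890:"
        for (i = 1; i <= NF; i++)
            if ($i ~ /^[0-9]+\.[0-9]+:$/) { ts = $i; sub(/:$/, "", ts); break }
        match($0, /state=[0-9]+/);  state = substr($0, RSTART + 6, RLENGTH - 6)
        match($0, /cpu_id=[0-9]+/); cpu   = substr($0, RSTART + 7, RLENGTH - 7)
        if (state != "4294967295") {          # idle entry: remember time and state
            start[cpu] = ts; st[cpu] = state
        } else if (cpu in start) {            # idle exit: histogram the duration
            us = (ts - start[cpu]) * 1000000
            bucket = 1; while (bucket < us) bucket *= 2
            hist[st[cpu] " " bucket]++
            delete start[cpu]
        }
    }
    END { for (k in hist) print k, hist[k] }   # columns: state, bucket (us), count
    ' /tmp/cpu_idle.trace | sort -k1,1n -k2,2n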




* Re: [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
  2018-10-26  9:12 Rafael J. Wysocki
@ 2018-10-31 18:36 ` Giovanni Gherdovich
  2018-11-04 10:06   ` Rafael J. Wysocki
  0 siblings, 1 reply; 11+ messages in thread
From: Giovanni Gherdovich @ 2018-10-31 18:36 UTC (permalink / raw)
  To: Rafael J. Wysocki, Linux PM
  Cc: Srinivas Pandruvada, Peter Zijlstra, LKML, Frederic Weisbecker,
	Mel Gorman, Doug Smythies, Daniel Lezcano

On Fri, 2018-10-26 at 11:12 +0200, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> 
> [... cut ...]
> 
> The new governor introduced here, the timer events oriented (TEO)
> governor, uses the same basic strategy as menu: it always tries to
> find the deepest idle state that can be used in the given conditions.
> However, it applies a different approach to that problem.  First, it
> doesn't use "correction factors" for the time till the closest timer,
> but instead it tries to correlate the measured idle duration values
> with the available idle states and use that information to pick up
> the idle state that is most likely to "match" the upcoming CPU idle
> interval.  Second, it doesn't take the number of "I/O waiters" into
> account at all and the pattern detection code in it tries to avoid
> taking timer wakeups into account.  It also only uses idle duration
> values less than the current time till the closest timer (with the
> tick excluded) for that purpose.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
> 
> The v2 is a re-write of major parts of the original patch.
> 
> The approach is the same in general, but the details have changed significantly
> with respect to the previous version.  In particular:
> * The decay of the idle state metrics is implemented differently.
> * There is a more "clever" pattern detection (sort of along the lines
>   of what the menu does, but simplified quite a bit and trying to avoid
>   including timer wakeups).
> * The "promotion" from the "polling" state is gone.
> * The "safety net" wakeups are treated as the CPU might have been idle
>   until the closest timer.
> 
> I'm running this governor on all of my systems now without any
> visible adverse effects.
> 
> Overall, it selects deeper idle states more often than menu on average, but
> that doesn't seem to make a significant difference in the majority of cases.
> 
> In this preliminary revision it overtakes menu as the default governor
> for tickless systems (due to the higher rating), but that is likely
> to change going forward.  At this point I'm mostly asking for feedback
> and possibly testing with whatever workloads you can throw at it.
> 
> The patch should apply on top of 4.19, although I'm running it on
> top of my linux-next branch.  This version hasn't been run through
> benchmarks yet and that likely will take some time as I will be
> traveling quite a bit during the next few weeks.
> 
> ---
>  drivers/cpuidle/Kconfig            |   11 
>  drivers/cpuidle/governors/Makefile |    1 
>  drivers/cpuidle/governors/teo.c    |  491 +++++++++++++++++++++++++++++++++++++
>  3 files changed, 503 insertions(+)
>  
> [... cut ...]

Hello Rafael,

your new governor has a neutral impact on performance, as you expected. This is
a positive result, since the purpose of "teo" is to give improved
predictions on idle times without regressing on the performance side. There
are swings here and there but nothing looks extremely bad. v2 is largely
equivalent to v1 in my tests, except for sockperf and netperf on the
Haswell machine (v2 slightly worse) and tbench on the Skylake machine
(again v2 slightly worse).

I've tested your patches applying them on v4.18 (plus the backport
necessary for v2 as Doug helpfully noted), just because it was the latest
release when I started preparing this.

I've tested it on three machines, with different generations of Intel CPUs:

* single socket E3-1240 v5 (Skylake 8 cores, which I'll call 8x-SKYLAKE-UMA)
* two sockets E5-2698 v4 (Broadwell 80 cores, 80x-BROADWELL-NUMA from here onwards)
* two sockets E5-2670 v3 (Haswell 48 cores, 48x-HASWELL-NUMA from here onwards)


BENCHMARKS WITH NEUTRAL RESULTS
===============================

These are the workloads where no noticeable difference is measured (on both
v1 and v2, all machines), together with the corresponding MMTests[1]
configuration file name:

* pgbench read-only on xfs, pgbench read/write on xfs
	* global-dhp__db-pgbench-timed-ro-small-xfs
	* global-dhp__db-pgbench-timed-rw-small-xfs
* siege
	* global-dhp__http-siege
* hackbench, pipetest
	* global-dhp__scheduler-unbound
* Linux kernel compilation
	* global-dhp__workload_kerndevel-xfs
* NASA Parallel Benchmarks, C-Class (linear algebra; run both with OpenMP
  and OpenMPI, over xfs)
	* global-dhp__nas-c-class-mpi-full-xfs
	* global-dhp__nas-c-class-omp-full
* FIO (Flexible IO) in several configurations
	* global-dhp__io-fio-randread-async-randwrite-xfs
	* global-dhp__io-fio-randread-async-seqwrite-xfs
	* global-dhp__io-fio-seqread-doublemem-32k-4t-xfs
	* global-dhp__io-fio-seqread-doublemem-4k-4t-xfs
* netperf on loopback over TCP
	* global-dhp__network-netperf-unbound


BENCHMARKS WITH NON-NEUTRAL RESULTS: OVERVIEW
=============================================

These are benchmarks which exhibit a variation in their performance;
you'll see the magnitude of the changes is moderate and it's highly variable
from machine to machine. All percentages refer to the v4.18 baseline. In
more than one case the Haswell machine seems to prefer v1 to v2.

* xfsrepair
	* global-dhp__io-xfsrepair-xfs

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		2% worse	2% worse
		80x-BROADWELL-NUMA	1% worse	1% worse
		48x-HASWELL-NUMA	1% worse	1% worse

* sqlite (insert operations on xfs)
	* global-dhp__db-sqlite-insert-medium-xfs

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		no change	no change
		80x-BROADWELL-NUMA	2% worse	3% worse
		48x-HASWELL-NUMA	no change	no change

* netperf on loopback over UDP
	* global-dhp__network-netperf-unbound

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		no change	6% worse
		80x-BROADWELL-NUMA	1% worse	4% worse
		48x-HASWELL-NUMA	3% better	5% worse

* sockperf on loopback over TCP, mode "under load"
	* global-dhp__network-sockperf-unbound

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		6% worse	no change
		80x-BROADWELL-NUMA	7% better	no change
		48x-HASWELL-NUMA	3% better	2% worse

* sockperf on loopback over UDP, mode "throughput"
	* global-dhp__network-sockperf-unbound

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		1% worse	1% worse
		80x-BROADWELL-NUMA	3% better	2% better
		48x-HASWELL-NUMA	4% better	12% worse

* sockperf on loopback over UDP, mode "under load"
	* global-dhp__network-sockperf-unbound

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		3% worse	1% worse
		80x-BROADWELL-NUMA	10% better	8% better
		48x-HASWELL-NUMA	1% better	no change

* dbench on xfs
        * global-dhp__io-dbench4-async-xfs

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		3% better	4% better
		80x-BROADWELL-NUMA	no change	no change
		48x-HASWELL-NUMA	6% worse	16% worse

* tbench on loopback
	* global-dhp__network-tbench

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		1% worse	10% worse
		80x-BROADWELL-NUMA	1% worse	1% worse
		48x-HASWELL-NUMA	1% worse	2% worse

* schbench
	* global-dhp__workload_schbench

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		1% better	no change
		80x-BROADWELL-NUMA	2% worse	1% worse
		48x-HASWELL-NUMA	2% worse	3% worse

* gitsource on xfs (git unit tests, shell intensive)
	* global-dhp__workload_shellscripts-xfs

					teo-v1		teo-v2
		-------------------------------------------------
		8x-SKYLAKE-UMA		no change	no change
		80x-BROADWELL-NUMA	no change	1% better
		48x-HASWELL-NUMA	no change	1% better


BENCHMARKS WITH NON-NEUTRAL RESULTS: DETAIL
===========================================

Now some more detail. Each benchmark is run in a variety of configurations
(eg. number of threads, number of concurrent connections and so forth) each
of them giving a result. What you see above is the geometric mean of
"sub-results"; below is the detailed view where there was a regression
larger than 5% (either in v1 or v2, on any of the machines). That means
I'll exclude xfsrepair, sqlite, schbench and the git unit tests "gitsource"
that have negligible swings from the baseline.

In all tables asterisks indicate a statement about statistical
significance: the difference with baseline has a p-value smaller than 0.1
(small p-values indicate that the difference is real and not just random
noise).

NETPERF-UDP
===========
NOTES: Test run in mode "stream" over UDP. The varying parameter is the
    message size in bytes. Each measurement is taken 5 times and the
    harmonic mean is reported.
MEASURES: Throughput in MBits/second, both on the sender and on the receiver end.
HIGHER is better

machine: 8x-SKYLAKE-UMA
                                     4.18.0                 4.18.0                 4.18.0
                                    vanilla                 teo-v1        teo-v2+backport
-----------------------------------------------------------------------------------------
Hmean     send-64         362.27 (   0.00%)      362.87 (   0.16%)      318.85 * -11.99%*
Hmean     send-128        723.17 (   0.00%)      723.66 (   0.07%)      660.96 *  -8.60%*
Hmean     send-256       1435.24 (   0.00%)     1427.08 (  -0.57%)     1346.22 *  -6.20%*
Hmean     send-1024      5563.78 (   0.00%)     5529.90 *  -0.61%*     5228.28 *  -6.03%*
Hmean     send-2048     10935.42 (   0.00%)    10809.66 *  -1.15%*    10521.14 *  -3.79%*
Hmean     send-3312     16898.66 (   0.00%)    16539.89 *  -2.12%*    16240.87 *  -3.89%*
Hmean     send-4096     19354.33 (   0.00%)    19185.43 (  -0.87%)    18600.52 *  -3.89%*
Hmean     send-8192     32238.80 (   0.00%)    32275.57 (   0.11%)    29850.62 *  -7.41%*
Hmean     send-16384    48146.75 (   0.00%)    49297.23 *   2.39%*    48295.51 (   0.31%)
Hmean     recv-64         362.16 (   0.00%)      362.87 (   0.19%)      318.82 * -11.97%*
Hmean     recv-128        723.01 (   0.00%)      723.66 (   0.09%)      660.89 *  -8.59%*
Hmean     recv-256       1435.06 (   0.00%)     1426.94 (  -0.57%)     1346.07 *  -6.20%*
Hmean     recv-1024      5562.68 (   0.00%)     5529.90 *  -0.59%*     5228.28 *  -6.01%*
Hmean     recv-2048     10934.36 (   0.00%)    10809.66 *  -1.14%*    10519.89 *  -3.79%*
Hmean     recv-3312     16898.65 (   0.00%)    16538.21 *  -2.13%*    16240.86 *  -3.89%*
Hmean     recv-4096     19351.99 (   0.00%)    19183.17 (  -0.87%)    18598.33 *  -3.89%*
Hmean     recv-8192     32238.74 (   0.00%)    32275.13 (   0.11%)    29850.39 *  -7.41%*
Hmean     recv-16384    48146.59 (   0.00%)    49296.23 *   2.39%*    48295.03 (   0.31%)

SOCKPERF-TCP-UNDER-LOAD
=======================
NOTES: Test run in mode "under load" over TCP. Parameters are message size
    and transmission rate.
MEASURES: Round-trip time in microseconds
LOWER is better

machine: 8x-SKYLAKE-UMA
                                                 4.18.0                 4.18.0                 4.18.0
                                                vanilla                 teo-v1        teo-v2+backport
-----------------------------------------------------------------------------------------------------
Amean        size-14-rate-10000        36.43 (   0.00%)       36.86 (  -1.17%)       20.24 (  44.44%)
Amean        size-14-rate-24000        17.78 (   0.00%)       17.71 (   0.36%)       18.54 (  -4.29%)
Amean        size-14-rate-50000        20.53 (   0.00%)       22.29 (  -8.58%)       16.16 (  21.30%)
Amean        size-100-rate-10000       21.22 (   0.00%)       23.41 ( -10.35%)       33.04 ( -55.73%)
Amean        size-100-rate-24000       17.81 (   0.00%)       21.09 ( -18.40%)       14.39 (  19.18%)
Amean        size-100-rate-50000       12.31 (   0.00%)       19.65 ( -59.64%)       15.11 ( -22.77%)
Amean        size-300-rate-10000       34.21 (   0.00%)       35.30 (  -3.19%)       34.20 (   0.05%)
Amean        size-300-rate-24000       24.52 (   0.00%)       26.00 (  -6.04%)       27.42 ( -11.81%)
Amean        size-300-rate-50000       20.20 (   0.00%)       20.39 (  -0.95%)       17.83 (  11.73%)
Amean        size-500-rate-10000       21.56 (   0.00%)       21.31 (   1.15%)       29.32 ( -35.98%)
Amean        size-500-rate-24000       30.58 (   0.00%)       27.41 (  10.38%)       27.21 (  11.03%)
Amean        size-500-rate-50000       19.46 (   0.00%)       22.48 ( -15.55%)       16.29 (  16.30%)
Amean        size-850-rate-10000       35.89 (   0.00%)       35.56 (   0.91%)       23.84 (  33.57%)
Amean        size-850-rate-24000       29.11 (   0.00%)       28.18 (   3.20%)       17.44 (  40.08%)
Amean        size-850-rate-50000       13.55 (   0.00%)       18.05 ( -33.26%)       21.30 ( -57.20%)

SOCKPERF-UDP-THROUGHPUT
=======================
NOTES: Test run in mode "throughput" over UDP. The varying parameter is the
    message size.
MEASURES: Throughput, in MBits/second
HIGHER is better

machine: 48x-HASWELL-NUMA
                              4.18.0                 4.18.0                 4.18.0
                             vanilla                 teo-v1        teo-v2+backport
----------------------------------------------------------------------------------
Hmean     14        48.16 (   0.00%)       50.94 *   5.77%*       42.50 * -11.77%*
Hmean     100      346.77 (   0.00%)      358.74 *   3.45%*      303.31 * -12.53%*
Hmean     300     1018.06 (   0.00%)     1053.75 *   3.51%*      895.55 * -12.03%*
Hmean     500     1693.07 (   0.00%)     1754.62 *   3.64%*     1489.61 * -12.02%*
Hmean     850     2853.04 (   0.00%)     2948.73 *   3.35%*     2473.50 * -13.30%*

DBENCH4
=======
NOTES: asynchronous IO; varies the number of clients up to NUMCPUS*8.
MEASURES: latency (millisecs)
LOWER is better

machine: 48x-HASWELL-NUMA
                              4.18.0                 4.18.0                 4.18.0
                             vanilla                 teo-v1        teo-v2+backport
----------------------------------------------------------------------------------
Amean      1        37.15 (   0.00%)       50.10 ( -34.86%)       39.02 (  -5.03%)
Amean      2        43.75 (   0.00%)       45.50 (  -4.01%)       44.36 (  -1.39%)
Amean      4        54.42 (   0.00%)       58.85 (  -8.15%)       58.17 (  -6.89%)
Amean      8        75.72 (   0.00%)       74.25 (   1.94%)       82.76 (  -9.30%)
Amean      16      116.56 (   0.00%)      119.88 (  -2.85%)      164.14 ( -40.82%)
Amean      32      570.02 (   0.00%)      561.92 (   1.42%)      681.94 ( -19.63%)
Amean      64     3185.20 (   0.00%)     3291.80 (  -3.35%)     4337.43 ( -36.17%)

TBENCH4
=======
NOTES: networking counterpart of dbench. Varies the number of clients up to NUMCPUS*4
MEASURES: Throughput, MB/sec
HIGHER is better

machine: 8x-SKYLAKE-UMA
                                    4.18.0                 4.18.0                 4.18.0
                                   vanilla                    teo        teo-v2+backport
----------------------------------------------------------------------------------------
Hmean     mb/sec-1       620.52 (   0.00%)      613.98 *  -1.05%*      502.47 * -19.03%*
Hmean     mb/sec-2      1179.05 (   0.00%)     1112.84 *  -5.62%*      820.57 * -30.40%*
Hmean     mb/sec-4      2072.29 (   0.00%)     2040.55 *  -1.53%*     2036.11 *  -1.75%*
Hmean     mb/sec-8      4238.96 (   0.00%)     4205.01 *  -0.80%*     4124.59 *  -2.70%*
Hmean     mb/sec-16     3515.96 (   0.00%)     3536.23 *   0.58%*     3500.02 *  -0.45%*
Hmean     mb/sec-32     3452.92 (   0.00%)     3448.94 *  -0.12%*     3428.08 *  -0.72%*


[1] https://github.com/gormanm/mmtests


Happy to answer any questions on the benchmarks or the methods used to
collect/report data.

Something I'd like to do now is verify that "teo"'s predictions are better
than "menu"'s; I'll probably use systemtap to make some histograms of idle
times versus what idle state was chosen -- that'd be enough to compare the
two.
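
In case it helps, below is a rough user-space sketch of that kind of
histogram (only a sketch: it assumes the stock ftrace text output with the
power:cpu_idle event enabled, the parsing is deliberately naive, and the
bucket choices are illustrative):

	/*
	 * idle_hist.c - bucket idle durations per selected idle state,
	 * reading ftrace text output (power:cpu_idle events) from stdin.
	 * Assumes the usual "... TIMESTAMP: cpu_idle: state=S cpu_id=C"
	 * line layout; not robust against unusual task names.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define MAX_CPUS	256
	#define MAX_STATES	16
	#define BUCKETS		20		/* power-of-two buckets, up to ~1 s */
	#define EXIT_STATE	4294967295u	/* value traced when leaving idle */

	static double entry_ts[MAX_CPUS];
	static int entry_state[MAX_CPUS];
	static unsigned long hist[MAX_STATES][BUCKETS];

	int main(void)
	{
		char line[512];
		int i, b, s;

		for (i = 0; i < MAX_CPUS; i++)
			entry_state[i] = -1;

		while (fgets(line, sizeof(line), stdin)) {
			char *p = strstr(line, "cpu_idle: state=");
			unsigned int state, cpu;
			size_t len;
			char *sp;
			double ts;

			if (!p || sscanf(p, "cpu_idle: state=%u cpu_id=%u",
					 &state, &cpu) != 2 || cpu >= MAX_CPUS)
				continue;

			/* The token right before the event name is "TIMESTAMP:". */
			*p = '\0';
			len = strlen(line);
			while (len && (line[len - 1] == ' ' || line[len - 1] == ':'))
				line[--len] = '\0';
			sp = strrchr(line, ' ');
			ts = atof(sp ? sp + 1 : line);

			if (state != EXIT_STATE) {
				/* Idle entry: remember when and which state. */
				entry_ts[cpu] = ts;
				entry_state[cpu] = state < MAX_STATES ? (int)state : -1;
			} else if (entry_state[cpu] >= 0) {
				/* Idle exit: bucket the measured duration. */
				double us = (ts - entry_ts[cpu]) * 1e6;

				b = 0;
				while (b < BUCKETS - 1 && us >= (double)(1u << (b + 1)))
					b++;
				hist[entry_state[cpu]][b]++;
				entry_state[cpu] = -1;
			}
		}

		for (s = 0; s < MAX_STATES; s++)
			for (b = 0; b < BUCKETS; b++)
				if (hist[s][b])
					printf("state %d: %7u..%7u us: %lu\n", s,
					       b ? 1u << b : 0, 1u << (b + 1),
					       hist[s][b]);
		return 0;
	}

Feeding it the contents of /sys/kernel/debug/tracing/trace captured while a
benchmark runs under each governor should give directly comparable per-state
distributions.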

After that it would be nice to somehow know where timers came from; i.e. if
I see that residencies in a given state are consistently shorter than
they're supposed to be, it would be interesting to see who set the timer
that caused the wakeup. But... I'm not sure how to do that :) Do
you have a strategy to track down the origin of timers/interrupts? Is there
any script you're using to evaluate teo that you can share?
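
One possible angle on the timer question (just a sketch, not a
recommendation): the timer:timer_start, timer:hrtimer_start and
timer:hrtimer_expire_entry tracepoints record the timer callback function,
so enabling them together with power:cpu_idle and matching timestamps in the
trace should show which timer ended a given idle period.  A minimal helper,
assuming tracefs is reachable at the usual debugfs path:

	/* Sketch: turn on the events needed to attribute wakeups to timers. */
	#include <stdio.h>

	static void write_one(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f) {
			perror(path);
			return;
		}
		fputs(val, f);
		fclose(f);
	}

	int main(void)
	{
		static const char *events[] = {
			"events/power/cpu_idle/enable",
			"events/timer/timer_start/enable",
			"events/timer/hrtimer_start/enable",
			"events/timer/hrtimer_expire_entry/enable",
		};
		const char *base = "/sys/kernel/debug/tracing";
		char path[256];
		unsigned int i;

		for (i = 0; i < sizeof(events) / sizeof(events[0]); i++) {
			snprintf(path, sizeof(path), "%s/%s", base, events[i]);
			write_one(path, "1");
		}
		snprintf(path, sizeof(path), "%s/tracing_on", base);
		write_one(path, "1");
		return 0;
	}

The hrtimer_expire_entry lines carry the callback name, which usually makes
the owner of the timer obvious; the same could of course be done from the
shell or with trace-cmd.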

Thanks,
Giovanni Gherdovich


* [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems
@ 2018-10-26  9:12 Rafael J. Wysocki
  2018-10-31 18:36 ` Giovanni Gherdovich
  0 siblings, 1 reply; 11+ messages in thread
From: Rafael J. Wysocki @ 2018-10-26  9:12 UTC (permalink / raw)
  To: Linux PM
  Cc: Srinivas Pandruvada, Peter Zijlstra, LKML, Frederic Weisbecker,
	Mel Gorman, Giovanni Gherdovich, Doug Smythies, Daniel Lezcano

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

The venerable menu governor does some things that are quite
questionable in my view.  First, it includes timer wakeups in
the pattern detection data and mixes them up with wakeups from
other sources, which in some cases causes it to expect what
essentially would be a timer wakeup in a time frame in which
no timer wakeups are possible (because it knows the time until
the next timer event and that is later than the expected wakeup
time).  Second, it uses the extra exit latency limit based on
the predicted idle duration and depending on the number of tasks
waiting on I/O, even though those tasks may run on a different
CPU when they are woken up.  Moreover, the time ranges used by it
for the sleep length correction factors depend on whether or not
there are tasks waiting on I/O, which again doesn't imply anything
in particular, and they are not correlated to the list of available
idle states in any way whatever.  Also, the pattern detection code
in menu may end up considering values that are too large to matter
at all, in which case running it is a waste of time.

A major rework of the menu governor would be required to address
these issues and the performance of at least some workloads (tuned
specifically to the current behavior of the menu governor) is likely
to suffer from that.  It is thus better to introduce an entirely new
governor without them and let everybody use the governor that works
better with their actual workloads.

The new governor introduced here, the timer events oriented (TEO)
governor, uses the same basic strategy as menu: it always tries to
find the deepest idle state that can be used in the given conditions.
However, it applies a different approach to that problem.  First, it
doesn't use "correction factors" for the time till the closest timer,
but instead it tries to correlate the measured idle duration values
with the available idle states and use that information to pick
the idle state that is most likely to "match" the upcoming CPU idle
interval.  Second, it doesn't take the number of "I/O waiters" into
account at all and the pattern detection code in it tries to avoid
taking timer wakeups into account.  It also only uses idle duration
values less than the current time till the closest timer (with the
tick excluded) for that purpose.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---

The v2 is a re-write of major parts of the original patch.

The approach is the same in general, but the details have changed significantly
with respect to the previous version.  In particular:
* The decay of the idle state metrics is implemented differently (a quick
  check of the new decay arithmetic is sketched after this list).
* There is a more "clever" pattern detection (sort of along the lines
  of what the menu does, but simplified quite a bit and trying to avoid
  including timer wakeups).
* The "promotion" from the "polling" state is gone.
* The "safety net" wakeups are treated as if the CPU might have been idle
  until the closest timer.
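
For reference, here is a minimal stand-alone check of that decay/spike
arithmetic (a sketch only; SPIKE and DECAY_SHIFT are taken from the patch
below, everything else is illustrative):

	#include <stdio.h>

	#define SPIKE		1024
	#define DECAY_SHIFT	3

	int main(void)
	{
		unsigned int m = 0;
		int i;

		for (i = 0; i < 64; i++) {
			m -= m >> DECAY_SHIFT;	/* periodic decay */
			m += SPIKE;		/* this state matched the wakeup */
		}
		printf("steady state ~%u, SPIKE << DECAY_SHIFT = %d\n",
		       m, SPIKE << DECAY_SHIFT);
		return 0;
	}

A metric close to SPIKE << DECAY_SHIFT therefore means that the corresponding
outcome has dominated the recent updates of that state, while decay-only
updates pull it back towards zero.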

I'm running this governor on all of my systems now without any
visible adverse effects.

Overall, it selects deeper idle states more often than menu on average, but
that doesn't seem to make a significant difference in the majority of cases.

In this preliminary revision it overtakes menu as the default governor
for tickless systems (due to the higher rating), but that is likely
to change going forward.  At this point I'm mostly asking for feedback
and possibly testing with whatever workloads you can throw at it.

The patch should apply on top of 4.19, although I'm running it on
top of my linux-next branch.  This version hasn't been run through
benchmarks yet and that likely will take some time as I will be
traveling quite a bit during the next few weeks.

---
 drivers/cpuidle/Kconfig            |   11 
 drivers/cpuidle/governors/Makefile |    1 
 drivers/cpuidle/governors/teo.c    |  491 +++++++++++++++++++++++++++++++++++++
 3 files changed, 503 insertions(+)

Index: linux-pm/drivers/cpuidle/governors/teo.c
===================================================================
--- /dev/null
+++ linux-pm/drivers/cpuidle/governors/teo.c
@@ -0,0 +1,491 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Timer events oriented CPU idle governor
+ *
+ * Copyright (C) 2018 Intel Corporation
+ * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ *
+ * The idea of this governor is based on the observation that on many systems
+ * timer events are two or more orders of magnitude more frequent than any
+ * other interrupts, so they are likely to be the most significant source of CPU
+ * wakeups from idle states.  Moreover, information about what happened in the
+ * (relatively recent) past can be used to estimate whether or not the deepest
+ * idle state with target residency within the time to the closest timer is
+ * likely to be suitable for the upcoming idle time of the CPU and, if not, then
+ * which of the shallower idle states to choose.
+ *
+ * Of course, non-timer wakeup sources are more important in some use cases and
+ * they can be covered by detecting patterns among recent idle time intervals
+ * of the CPU.  However, even in that case it is not necessary to take idle
+ * duration values greater than the time till the closest timer into account, as
+ * the patterns that they may belong to produce average values close enough to
+ * the time till the closest timer (sleep length) anyway.
+ *
+ * Thus this governor estimates whether or not the upcoming idle time of the CPU
+ * is likely to be significantly shorter than the sleep length and selects an
+ * idle state for it in accordance with that, as follows:
+ *
+ * - If there is a pattern of 5 or more recent non-timer wakeups earlier than
+ *   the closest timer event, expect one more of them to occur and use the
+ *   average of the idle duration values corresponding to them to select an
+ *   idle state for the CPU.
+ *
+ * - Otherwise, find the state on the basis of the sleep length and state
+ *   statistics collected over time:
+ *
+ *   o Find the deepest idle state whose target residency is less than or equal
+ *     to the sleep length.
+ *
+ *   o Select it if it matched both the sleep length and the idle duration
+ *     measured after wakeup in the past more often than it matched the sleep
+ *     length, but not the idle duration (i.e. the measured idle duration was
+ *     significantly shorter than the sleep length matched by that state).
+ *
+ *   o Otherwise, select the shallower state with the greatest matched "early"
+ *     wakeups metric.
+ */
+
+#include <linux/cpuidle.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/sched/clock.h>
+#include <linux/tick.h>
+
+/*
+ * The SPIKE value is added to metrics when they grow and the DECAY_SHIFT value
+ * is used for decreasing metrics on a regular basis.
+ */
+#define SPIKE		1024
+#define DECAY_SHIFT	3
+
+/*
+ * Number of the most recent idle duration values to take into consideration for
+ * the detection of wakeup patterns.
+ */
+#define INTERVALS	8
+/*
+ * Minimum number of recent idle duration values needed to compute a "typical"
+ * one.
+ */
+#define COUNT_LIMIT	5
+
+/**
+ * struct teo_idle_state - Idle state data used by the TEO cpuidle governor.
+ * @early_hits: "Early" CPU wakeups matched by this state.
+ * @hits: "On time" CPU wakeups matched by this state.
+ * @misses: CPU wakeups "missed" by this state.
+ *
+ * A CPU wakeup is "matched" by a given idle state if the idle duration measured
+ * after the wakeup is between the target residency of that state and the target
+ * residency of the next one (or if this is the deepest available idle state, it
+ * "matches" a CPU wakeup when the measured idle duration is at least equal to
+ * its target residency).
+ *
+ * Also, from the TEO governor perspective, a CPU wakeup from idle is "early" if
+ * it occurs significantly earlier than the closest expected timer event (that
+ * is, early enough to match an idle state shallower than the one matching the
+ * time till the closest timer event).  Otherwise, the wakeup is "on time", or
+ * it is a "hit".
+ *
+ * A "miss" occurs when the given state doesn't match the wakeup, but it matches
+ * the time till the closest timer event used for idle state selection.
+ */
+struct teo_idle_state {
+	unsigned int early_hits;
+	unsigned int hits;
+	unsigned int misses;
+};
+
+/**
+ * struct teo_cpu - CPU data used by the TEO cpuidle governor.
+ * @time_span_ns: Time between idle state selection and post-wakeup update.
+ * @sleep_length_ns: Time till the closest timer event (at the selection time).
+ * @states: Idle states data corresponding to this CPU.
+ * @last_state: Idle state entered by the CPU last time.
+ * @interval_idx: Index of the most recent saved idle interval.
+ * @intervals: Saved idle duration values.
+ * @max_duration: Whether the CPU may have been idle the entire sleep length.
+ */
+struct teo_cpu {
+	u64 time_span_ns;
+	u64 sleep_length_ns;
+	struct teo_idle_state states[CPUIDLE_STATE_MAX];
+	int last_state;
+	int interval_idx;
+	unsigned int intervals[INTERVALS];
+	unsigned int max_duration:1;
+};
+
+static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);
+
+/**
+ * teo_update - Update CPU data after wakeup.
+ * @drv: cpuidle driver containing state data.
+ * @dev: Target CPU.
+ */
+static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	unsigned int sleep_length_us = ktime_to_us(cpu_data->sleep_length_ns);
+	int i, idx_hit = -1, idx_timer = -1;
+	unsigned int measured_us;
+
+	if (cpu_data->max_duration) {
+		measured_us = sleep_length_us;
+	} else {
+		measured_us = dev->last_residency;
+		i = cpu_data->last_state;
+		if (measured_us >= 2 * drv->states[i].exit_latency)
+			measured_us -= drv->states[i].exit_latency;
+		else
+			measured_us /= 2;
+	}
+
+	/*
+	 * Decay the "early hits" metric for all of the states and find the
+	 * states matching the sleep length and the measured idle duration.
+	 */
+	for (i = 0; i < drv->state_count; i++) {
+		unsigned int early_hits = cpu_data->states[i].early_hits;
+
+		cpu_data->states[i].early_hits -= early_hits >> DECAY_SHIFT;
+
+		if (drv->states[i].target_residency <= measured_us)
+			idx_hit = i;
+
+		if (drv->states[i].target_residency <= sleep_length_us)
+			idx_timer = i;
+	}
+
+	/*
+	 * Update the "hits" and "misses" data for the state matching the sleep
+	 * length.  If it matches the measured idle duration too, this is a hit,
+	 * so increase the "hits" metric for it then.  Otherwise, this is a
+	 * miss, so increase the "misses" metric for it.  In the latter case
+	 * also increase the "early hits" metric for the state that actually
+	 * matches the measured idle duration.
+	 */
+	if (idx_timer >= 0) {
+		unsigned int hits = cpu_data->states[idx_timer].hits;
+		unsigned int misses = cpu_data->states[idx_timer].misses;
+
+		hits -= hits >> DECAY_SHIFT;
+		misses -= misses >> DECAY_SHIFT;
+
+		if (idx_timer > idx_hit) {
+			misses += SPIKE;
+			if (idx_hit >= 0)
+				cpu_data->states[idx_hit].early_hits += SPIKE;
+		} else {
+			hits += SPIKE;
+		}
+
+		cpu_data->states[idx_timer].misses = misses;
+		cpu_data->states[idx_timer].hits = hits;
+	}
+
+	/*
+	 * Save idle duration values corresponding to non-timer wakeups for
+	 * pattern detection.
+	 *
+	 * If the total time span between idle state selection and the "reflect"
+	 * callback is greater than or equal to the sleep length determined at
+	 * the idle state selection time, the wakeup is likely to be due to a
+	 * timer event.
+	 */
+	if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns)
+		measured_us = UINT_MAX;
+
+	cpu_data->intervals[cpu_data->interval_idx++] = measured_us;
+	if (cpu_data->interval_idx >= INTERVALS)
+		cpu_data->interval_idx = 0;
+}
+
+/**
+ * teo_idle_duration - Estimate the duration of the upcoming CPU idle time.
+ * @drv: cpuidle driver containing state data.
+ * @cpu_data: Governor data for the target CPU.
+ * @sleep_length_us: Time till the closest timer event in microseconds.
+ */
+unsigned int teo_idle_duration(struct cpuidle_driver *drv,
+			       struct teo_cpu *cpu_data,
+			       unsigned int sleep_length_us)
+{
+	u64 sum, sq_sum, max, limit;
+	unsigned int count;
+
+	/*
+	 * If the sleep length is below the target residency of idle state 1,
+	 * the only viable choice is to select the first available (enabled)
+	 * idle state, so return immediately in that case.
+	 */
+	if (sleep_length_us < drv->states[1].target_residency)
+		return sleep_length_us;
+
+	/*
+	 * The purpose of this function is to check if there is a pattern of
+	 * wakeups indicating that it would be better to select a state
+	 * shallower than the deepest one matching the sleep length or the
+	 * deepest one at all if the sleep length is long.  Larger idle duration
+	 * values are beyond the interesting range.
+	 *
+	 * Narrowing the range of interesting values down upfront also helps to
+	 * avoid overflows during the computation below.
+	 */
+	max = drv->states[drv->state_count-1].target_residency;
+	max = min_t(u64, sleep_length_us, max + (max >> 2));
+
+	/*
+	 * The limit here is the value to compare with the variance of the saved
+	 * recent idle duration values in order to decide whether or not it is
+	 * small.  Take 1/8 of the interesting range, but no less than 10 us.
+	 */
+	limit = max_t(u64, max >> 3, 10);
+	limit *= limit;
+
+	do {
+		u64 cap = max;
+		int i;
+
+		/*
+		 * Compute the sum of the saved intervals below the cap and the
+		 * sum of their squares.  Count them and find the maximum
+		 * interval below the cap.
+		 */
+		count = 0;
+		sum = 0;
+		sq_sum = 0;
+		max = 0;
+
+		for (i = 0; i < INTERVALS; i++) {
+			u64 val = cpu_data->intervals[i];
+
+			if (val >= cap)
+				continue;
+
+			count++;
+			sum += val;
+			sq_sum += val * val;
+			if (max < val)
+				max = val;
+		}
+
+		/*
+		 * If the number of intervals is too small to get a meaningful
+		 * result from them, return the original sleep length.
+		 */
+		if (count < COUNT_LIMIT)
+			return sleep_length_us;
+
+		/*
+		 * A pattern appears to be there if the variance is small
+		 * relative to the limit determined earlier.
+		 */
+	} while (count * sq_sum - sum * sum > count * count * limit);
+
+	return div64_u64(sum, count);
+}
+
+/**
+ * teo_select - Selects the next idle state to enter.
+ * @drv: cpuidle driver containing state data.
+ * @dev: Target CPU.
+ * @stop_tick: Indication on whether or not to stop the scheduler tick.
+ */
+static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+		      bool *stop_tick)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	int latency_req = cpuidle_governor_latency_req(dev->cpu);
+	unsigned int sleep_length_us, duration_us;
+	unsigned int max_early_count;
+	int max_early_idx, idx, i;
+	ktime_t delta_tick;
+
+	if (cpu_data->last_state >= 0) {
+		teo_update(drv, dev);
+		cpu_data->last_state = -1;
+	}
+
+	cpu_data->time_span_ns = local_clock();
+
+	cpu_data->sleep_length_ns = tick_nohz_get_sleep_length(&delta_tick);
+	sleep_length_us = ktime_to_us(cpu_data->sleep_length_ns);
+
+	duration_us = teo_idle_duration(drv, cpu_data, sleep_length_us);
+
+	/*
+	 * If the time needed to enter and exit the idle state matching the
+	 * expected idle duration is comparable with the expected idle duration
+	 * itself, the time to spend in that state is likely to be small, so it
+	 * probably is better to select a shallower state then.  Tweak the
+	 * latency limit to enforce that.
+	 */
+	if (duration_us < latency_req)
+		latency_req = duration_us;
+
+	max_early_count = 0;
+	max_early_idx = -1;
+	idx = -1;
+
+	for (i = 0; i < drv->state_count; i++) {
+		struct cpuidle_state *s = &drv->states[i];
+		struct cpuidle_state_usage *su = &dev->states_usage[i];
+
+		if (s->disabled || su->disable) {
+			/*
+			 * If the "early hits" metric of a disabled state is
+			 * greater than the current maximum, it should be taken
+			 * into account, because it would be a mistake to select
+			 * a deeper state with lower "early hits" metric.  The
+			 * index cannot be changed to point to it, however, so
+			 * just increase the max count alone and let the index
+			 * still point to a shallower idle state.
+			 */
+			if (max_early_idx >= 0 &&
+			    max_early_count < cpu_data->states[i].early_hits)
+				max_early_count = cpu_data->states[i].early_hits;
+
+			continue;
+		}
+
+		if (idx < 0)
+			idx = i; /* first enabled state */
+
+		if (s->target_residency > duration_us) {
+			/*
+			 * If the next wakeup is expected to be "early", the
+			 * time frame of it is known already.
+			 */
+			if (duration_us < sleep_length_us)
+				break;
+
+			/*
+			 * If the "hits" metric of the state matching the sleep
+			 * length is greater than its "misses" metric, that is
+			 * the one to use.
+			 */
+			if (cpu_data->states[idx].hits >= cpu_data->states[idx].misses)
+				break;
+
+			/*
+			 * It is more likely that one of the shallower states
+			 * will match the idle duration measured after wakeup,
+			 * so take the one with the maximum "early hits" metric,
+			 * but if that cannot be determined, just use the state
+			 * selected so far.
+			 */
+			if (max_early_idx >= 0) {
+				idx = max_early_idx;
+				duration_us = drv->states[idx].target_residency;
+			}
+			break;
+		}
+		if (s->exit_latency > latency_req) {
+			/*
+			 * If we break out of the loop for latency reasons, use
+			 * the target residency of the selected state as the
+			 * expected idle duration to avoid stopping the tick
+			 * as long as that target residency is low enough.
+			 */
+			duration_us = drv->states[idx].target_residency;
+			break;
+		}
+
+		idx = i;
+
+		if (max_early_count < cpu_data->states[i].early_hits) {
+			max_early_count = cpu_data->states[i].early_hits;
+			max_early_idx = i;
+		}
+	}
+
+	if (idx < 0)
+		idx = 0; /* No states enabled. Must use 0. */
+
+	/*
+	 * Don't stop the tick if the selected state is a polling one or if the
+	 * expected idle duration is shorter than the tick period length.
+	 */
+	if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
+	    duration_us < TICK_USEC) && !tick_nohz_tick_stopped()) {
+		unsigned int delta_tick_us = ktime_to_us(delta_tick);
+
+		*stop_tick = false;
+
+		if (idx > 0 && drv->states[idx].target_residency > delta_tick_us) {
+			/*
+			 * The tick is not going to be stopped and the target
+			 * residency of the state to be returned is not within
+			 * the time until the closest timer event including the
+			 * tick, so try to correct that.
+			 */
+			for (i = idx - 1; i > 0; i--) {
+				if (drv->states[i].disabled ||
+				    dev->states_usage[i].disable)
+					continue;
+
+				if (drv->states[i].target_residency <= delta_tick_us)
+					break;
+			}
+			idx = i;
+		}
+	}
+
+	return idx;
+}
+
+/**
+ * teo_reflect - Note that governor data for the CPU need to be updated.
+ * @dev: Target CPU.
+ * @state: Entered state.
+ */
+static void teo_reflect(struct cpuidle_device *dev, int state)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+
+	cpu_data->last_state = state;
+	cpu_data->time_span_ns = local_clock() - cpu_data->time_span_ns;
+	/*
+	 * If the wakeup was not "natural", but triggered by one of the safety
+	 * nets, assume that the CPU might have been idle for the entire sleep
+	 * length time.
+	 */
+	cpu_data->max_duration = (tick_nohz_idle_got_tick() &&
+				  cpu_data->sleep_length_ns > TICK_NSEC) ||
+				 dev->poll_time_limit;
+}
+
+/**
+ * teo_enable_device - Initialize the governor's data for the target CPU.
+ * @drv: cpuidle driver (not used).
+ * @dev: Target CPU.
+ */
+static int teo_enable_device(struct cpuidle_driver *drv,
+			     struct cpuidle_device *dev)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	int i;
+
+	memset(cpu_data, 0, sizeof(*cpu_data));
+
+	for (i = 0; i < INTERVALS; i++)
+		cpu_data->intervals[i] = UINT_MAX;
+
+	return 0;
+}
+
+static struct cpuidle_governor teo_governor = {
+	.name =		"teo",
+	.rating =	22,
+	.enable =	teo_enable_device,
+	.select =	teo_select,
+	.reflect =	teo_reflect,
+};
+
+static int __init teo_governor_init(void)
+{
+	return cpuidle_register_governor(&teo_governor);
+}
+
+postcore_initcall(teo_governor_init);
Index: linux-pm/drivers/cpuidle/Kconfig
===================================================================
--- linux-pm.orig/drivers/cpuidle/Kconfig
+++ linux-pm/drivers/cpuidle/Kconfig
@@ -23,6 +23,17 @@ config CPU_IDLE_GOV_LADDER
 config CPU_IDLE_GOV_MENU
 	bool "Menu governor (for tickless system)"
 
+config CPU_IDLE_GOV_TEO
+	bool "Timer events oriented governor (for tickless systems)"
+	help
+	  Menu governor derivative that uses a simplified idle state
+	  selection method focused on timer events and does not do any
+	  interactivity boosting.
+
+	  Some workloads benefit from using this governor and it generally
+	  should be safe to use.  Say Y here if you are not happy with the
+	  alternatives.
+
 config DT_IDLE_STATES
 	bool
 
Index: linux-pm/drivers/cpuidle/governors/Makefile
===================================================================
--- linux-pm.orig/drivers/cpuidle/governors/Makefile
+++ linux-pm/drivers/cpuidle/governors/Makefile
@@ -4,3 +4,4 @@
 
 obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
 obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
+obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o



end of thread, other threads:[~2018-11-05 22:10 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-27  6:37 [RFC/RFT][PATCH v2] cpuidle: New timer events oriented governor for tickless systems Doug Smythies
2018-10-30  7:19 ` Rafael J. Wysocki
  -- strict thread matches above, loose matches on Subject: below --
2018-11-02 15:39 Doug Smythies
2018-11-04 10:06 ` Rafael J. Wysocki
2018-11-05 19:11 ` Giovanni Gherdovich
2018-11-05 21:28 ` Doug Smythies
2018-10-26  9:12 Rafael J. Wysocki
2018-10-31 18:36 ` Giovanni Gherdovich
2018-11-04 10:06   ` Rafael J. Wysocki
2018-11-05 19:14     ` Giovanni Gherdovich
2018-11-05 22:09     ` Doug Smythies
