From: Mike Galbraith <mgalbraith@suse.de>
To: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Peter Zijlstra <peterz@infradead.org>,
	Linux PM <linux-pm@vger.kernel.org>,
	Frederic Weisbecker <fweisbec@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Paul McKenney <paulmck@linux.vnet.ibm.com>,
	Thomas Ilsche <thomas.ilsche@tu-dresden.de>,
	Doug Smythies <dsmythies@telus.net>,
	Rik van Riel <riel@surriel.com>,
	Aubrey Li <aubrey.li@linux.intel.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC/RFT][PATCH v3 0/6] sched/cpuidle: Idle loop rework
Date: Sat, 10 Mar 2018 06:01:31 +0100
Message-ID: <1520658091.15339.4.camel@suse.de>
In-Reply-To: <2450532.XN8DODrtDf@aspire.rjw.lan>

On Fri, 2018-03-09 at 10:34 +0100, Rafael J. Wysocki wrote:
> Hi All,
> 
> Thanks a lot for the discussion and testing so far!
> 
> This is a total respin of the whole series, so please look at it afresh.
> Patches 2 and 3 are the most similar to their previous versions, but
> still they are different enough.

Respin of testdrive...

i7-4790, booted with nopti nospectre_v2
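FWIW, a quick way to confirm those mitigations really are off (assumes
a 4.15+ kernel that exposes the sysfs vulnerabilities directory):

  grep . /sys/devices/system/cpu/vulnerabilities/*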

30 sec tbench
4.16.0.g1b88acc-master (virgin)
Throughput 559.279 MB/sec  1 clients  1 procs  max_latency=0.046 ms
Throughput 997.119 MB/sec  2 clients  2 procs  max_latency=0.246 ms
Throughput 1693.04 MB/sec  4 clients  4 procs  max_latency=4.309 ms
Throughput 3597.2 MB/sec  8 clients  8 procs  max_latency=6.760 ms
Throughput 3474.55 MB/sec  16 clients  16 procs  max_latency=6.743 ms

4.16.0.g1b88acc-master (+ v2)
Throughput 588.929 MB/sec  1 clients  1 procs  max_latency=0.291 ms
Throughput 1080.93 MB/sec  2 clients  2 procs  max_latency=0.639 ms
Throughput 1826.3 MB/sec  4 clients  4 procs  max_latency=0.647 ms
Throughput 3561.01 MB/sec  8 clients  8 procs  max_latency=1.279 ms
Throughput 3382.98 MB/sec  16 clients  16 procs  max_latency=4.817 ms

4.16.0.g1b88acc-master (+ v3)
Throughput 588.711 MB/sec  1 clients  1 procs  max_latency=0.067 ms
Throughput 1077.71 MB/sec  2 clients  2 procs  max_latency=0.298 ms
Throughput 1803.47 MB/sec  4 clients  4 procs  max_latency=0.667 ms
Throughput 3591.4 MB/sec  8 clients  8 procs  max_latency=4.999 ms
Throughput 3444.74 MB/sec  16 clients  16 procs  max_latency=1.995 ms

4.16.0.g1b88acc-master (+ my local patches)
Throughput 722.559 MB/sec  1 clients  1 procs  max_latency=0.087 ms
Throughput 1208.59 MB/sec  2 clients  2 procs  max_latency=0.289 ms
Throughput 2071.94 MB/sec  4 clients  4 procs  max_latency=0.654 ms
Throughput 3784.91 MB/sec  8 clients  8 procs  max_latency=0.974 ms
Throughput 3644.4 MB/sec  16 clients  16 procs  max_latency=5.620 ms
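The exact invocation isn't quoted above; something like this (a sketch,
assuming tbench/tbench_srv from the dbench package against a loopback
server) matches the runs:

  tbench_srv &                  # loopback server, default port
  for n in 1 2 4 8 16; do
      tbench -t 30 $n 127.0.0.1 # 30 sec run, $n client procs
  done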

turbostat -q -- firefox /root/tmp/video/BigBuckBunny-DivXPlusHD.mkv & sleep 300; killall firefox

                        PkgWatt (watts)
                          run 1 run 2 run 3
4.16.0.g1b88acc-master     6.95  7.03  6.91 (virgin)
4.16.0.g1b88acc-master     7.20  7.25  7.26 (+v2)
4.16.0.g1b88acc-master     7.04  6.97  7.07 (+v3)
4.16.0.g1b88acc-master     6.90  7.06  6.95 (+my patches)
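The three columns are separate 300s runs of the command above per
kernel. A sketch of an equivalent harness (the sh -c wrapper keeps the
whole playback window inside turbostat's measurement; --show assumes a
turbostat new enough to filter columns):

  for run in 1 2 3; do
      turbostat -q --show PkgWatt -- sh -c \
          'firefox /root/tmp/video/BigBuckBunny-DivXPlusHD.mkv & sleep 300; killall firefox'
  done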

No change wrt the nohz high-frequency cross-core scheduling overhead,
but the light-load power consumption oddity did go away.

(btw, don't read anything into max_latency numbers, that's GUI noise)

	-Mike


Thread overview: 22+ messages
2018-03-09  9:34 [RFC/RFT][PATCH v3 0/6] sched/cpuidle: Idle loop rework Rafael J. Wysocki
2018-03-09  9:36 ` [RFC/RFT][PATCH v3 1/6] time: tick-sched: Reorganize idle tick management code Rafael J. Wysocki
2018-03-09  9:38 ` [RFC/RFT][PATCH v3 2/6] sched: idle: Do not stop the tick upfront in the idle loop Rafael J. Wysocki
2018-03-09  9:39 ` [RFC/RFT][PATCH v3 3/6] sched: idle: Do not stop the tick before cpuidle_idle_call() Rafael J. Wysocki
2018-03-09  9:41 ` [RFC/RFT][PATCH v3 4/6] cpuidle: Return nohz hint from cpuidle_select() Rafael J. Wysocki
2018-03-09  9:46 ` [RFC/RFT][PATCH v3 5/6] sched: idle: Select idle state before stopping the tick Rafael J. Wysocki
2018-03-11  1:44   ` Frederic Weisbecker
2018-03-11 10:31     ` Rafael J. Wysocki
2018-03-09  9:49 ` [RFC/RFT][PATCH v3 6/6] cpuidle: menu: Refine idle state selection for running tick Rafael J. Wysocki
2018-03-09 15:19 ` [RFC/RFT][PATCH v3 0/6] sched/cpuidle: Idle loop rework Rik van Riel
2018-03-10  5:01 ` Mike Galbraith [this message]
2018-03-10  9:09   ` Rafael J. Wysocki
2018-03-10  7:41 ` Doug Smythies
2018-03-10  9:00   ` Rafael J. Wysocki
2018-03-10 16:07   ` Doug Smythies
2018-03-10 23:55     ` Rafael J. Wysocki
2018-03-11  7:43     ` Doug Smythies
2018-03-11 10:21       ` Rafael J. Wysocki
2018-03-11 10:34         ` Rafael J. Wysocki
2018-03-11 15:52       ` Doug Smythies
2018-03-11 23:02       ` Doug Smythies
2018-03-12  9:28         ` Rafael J. Wysocki
