From: Giovanni Gherdovich <ggherdovich@suse.com>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Linux PM <linux-pm@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Doug Smythies <dsmythies@telus.net>
Subject: Re: [PATCH v2 0/3] cpufreq: Allow drivers to receive more information from the governor
Date: Wed, 23 Dec 2020 14:06:43 +0100	[thread overview]
Message-ID: <1608728803.14392.59.camel@suse.com> (raw)
In-Reply-To: <CAJZ5v0jfgFRqXisWQUH0J-Xfvh_jjWw8mC_AKyd-tAgRNamj9Q@mail.gmail.com>

On Mon, 2020-12-21 at 17:11 +0100, Rafael J. Wysocki wrote:
> Hi,
> 
> On Fri, Dec 18, 2020 at 5:22 PM Giovanni Gherdovich wrote:
> > 
> > Gitsource: this test shows the most compelling case against the
> >     sugov-HWP.desired series: on the Cascade Lake sugov-HWP.desired is 10%
> >     faster than sugov-HWP.min (it was expected to be slower!) and 35% less
> >     efficient (we expected more performance-per-watt, not less).
> 
> This is a bit counter-intuitive, so it is good to try to understand
> what's going on instead of drawing conclusions right away from pure
> numbers.
> 
> My interpretation of the available data is that gitsource benefits
> from the "race-to-idle" effect in terms of energy-efficiency which
> also causes it to suffer in terms of performance.  Namely, completing
> the given piece of work faster causes some CPU idle time to become
> available and that effectively reduces power, but it also increases
> the response time (by the idle state exit latency) which causes
> performance to drop. Whether or not this effect can be present depends
> on what CPU idle states are available etc. and it may be a pure
> coincidence.
>
> [snip]

Right, race-to-idle might explain the higher efficiency of sugov-HWP.min here.
As you note, the added idle-state exit latency can also account for the overall
performance difference.
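
Just to make the energy side concrete for myself, a toy calculation (the
power numbers below are invented for illustration, they are not measurements
from the Cascade Lake machine):

/*
 * Toy race-to-idle arithmetic: a fixed chunk of work either runs fast and
 * then idles for the rest of the period, or runs slower for the whole
 * period.  energy = busy_power * busy_time + idle_power * idle_time.
 */
#include <stdio.h>

int main(void)
{
	double period_ms = 10.0;

	/* "race": finish in 5 ms at 12 W, then sit in a deep idle state at 1 W */
	double race_mj = 12.0 * 5.0 + 1.0 * (period_ms - 5.0);

	/* "sufficient" frequency: busy for the whole period at 7 W */
	double slow_mj = 7.0 * period_ms;

	printf("race-to-idle: %.0f mJ, sufficient freq: %.0f mJ\n",
	       race_mj, slow_mj);	/* 65 mJ vs 70 mJ */
	return 0;
}

Whether the race actually wins depends on how deep an idle state is reachable
(your point about the available idle states), and the sketch says nothing
about the exit-latency cost on the performance side.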

> There is a whole broad category of workloads involving periodic tasks
> that do the same amount of work in every period regardless of the
> frequency they run at (as long as the frequency is sufficient to avoid
> "overrunning" the period) and they almost never benefit from
> "race-to-idle". There is zero benefit from running them too fast and
> the energy-efficiency goes down the sink when that happens.
> 
> Now the problem is that with sugov-HWP.min the users who care about
> these workloads don't even have an option to use the task utilization
> history recorded by the scheduler to bias the frequency towards the
> "sufficient" level, because sugov-HWP.min only sets a lower bound on
> the frequency selection to improve the situation, so the choice
> between it and sugov-HWP.desired boils down to whether or not to give
> that option to them and my clear preference is for that option to
> exist.  Sorry about that.  [Note that it really is an option, though,
> because "pure" HWP is still the default for HWP-enabled systems.]

Sure, periodic workloads benefit from this patch; Doug's test shows that.
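
For the record, the "sufficient" level is roughly what schedutil already
derives from the utilization signal; my reading of that mapping, written as
a standalone sketch (cf. map_util_freq(), not the kernel code verbatim):

/*
 * schedutil's util -> frequency mapping: the target is about
 * 1.25 * max_freq * util / max_capacity, i.e. 25% headroom on top of the
 * measured utilization.
 */
static unsigned int sufficient_freq(unsigned long util, unsigned long max_cap,
				    unsigned int max_freq)
{
	unsigned int freq = max_freq + (max_freq >> 2);	/* ~1.25 * max_freq */

	return freq * util / max_cap;
}

As I understand the series, with sugov-HWP.min that value only becomes the
floor of the HWP range, while sugov-HWP.desired feeds the equivalent
performance level into HWP_REQ.DESIRED, which is the option you describe.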

I guess I'm still confused by the difference between setting HWP.DESIRED and
disabling HWP completely. The Intel manual says that a non-zero HWP.DESIRED
"effectively disabl[es] HW autonomous selection", but then continues with "The
Desired_Performance input is non-constraining in terms of Performance and
Energy optimizations, which are independently controlled". The first
statement sounds as if HWP is out of the picture (no more autonomous
frequency selection), while the second implies that other optimizations
remain in play. I'm not sure how to reason about that.
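
For reference, this is how I picture the relevant IA32_HWP_REQUEST fields
(MSR 0x774, going by the SDM); the packing helper is purely illustrative,
not intel_pstate code:

/*
 * IA32_HWP_REQUEST (MSR 0x774), fields relevant here:
 *   bits  7:0   Minimum_Performance       <- the floor sugov-HWP.min raises
 *   bits 15:8   Maximum_Performance
 *   bits 23:16  Desired_Performance       <- what sugov-HWP.desired writes
 *   bits 31:24  Energy_Performance_Preference (EPP)
 * Desired_Performance == 0 leaves the target to HWP autonomous selection
 * within [min, max]; a non-zero value conveys an explicit target instead.
 */
static inline unsigned long long hwp_req_pack(unsigned char min_perf,
					      unsigned char max_perf,
					      unsigned char desired,
					      unsigned char epp)
{
	return (unsigned long long)min_perf |
	       ((unsigned long long)max_perf << 8) |
	       ((unsigned long long)desired << 16) |
	       ((unsigned long long)epp << 24);
}

My best guess is that the optimizations the manual calls "independently
controlled" are the ones EPP keeps steering even when Desired_Performance
pins the target, but I may be reading too much into it.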

> It may be possible to restore some "race-to-idle" benefits by tweaking
> HWP_REQ.EPP in the future, but that needs to be investigated.
> 
> BTW, what EPP value was there on the system where you saw better
> performance under sugov-HWP.desired?  If it was greater than zero, it
> would be useful to decrease EPP (by adjusting the
> energy_performance_preference attributes in sysfs for all CPUs) and
> see what happens to the performance difference then.

For sugov-HWP.desired the EPP was 0x80 (the default value).
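
That 0x80 is intel_pstate's "balance_performance" mapping, if I'm not
mistaken. Assuming the usual sysfs layout, lowering it on all CPUs for a
re-run would look something like:

  # cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference
  balance_performance
  # for f in /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference; do
            echo performance > "$f"
    done

which should drive EPP down to 0.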


Giovanni

