From: Francisco Jerez <currojerez@riseup.net>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Linux PM <linux-pm@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Amit Kucheria <amit.kucheria@linaro.org>,
	"Pandruvada, Srinivas" <srinivas.pandruvada@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface
Date: Thu, 13 Feb 2020 00:07:50 -0800
Message-ID: <878sl6j4fd.fsf@riseup.net>
In-Reply-To: <CAJZ5v0hAn0V-QhebFt=vqKK6gBLxjTq7SNOWOStt7huCXMSH7g@mail.gmail.com>


"Rafael J. Wysocki" <rafael@kernel.org> writes:

> On Thu, Feb 13, 2020 at 12:31 AM Francisco Jerez <currojerez@riseup.net> wrote:
>>
>> "Rafael J. Wysocki" <rjw@rjwysocki.net> writes:
>>
>> > Hi All,
>> >
>> > This series of patches is based on the observation that after commit
>> > c3082a674f46 ("PM: QoS: Get rid of unused flags") the only global PM QoS class
>> > in use is PM_QOS_CPU_DMA_LATENCY, but there is still a significant amount of
>> > code dedicated to the handling of global PM QoS classes in general.  That code
>> > takes up space and adds overhead in vain, so it is better to get rid of it.
>> >
>> > Moreover, with that unuseful code removed, the interface for adding QoS
>> > requests for CPU latency becomes inelegant and confusing, so it is better to
>> > clean it up.
>> >
>> > Patches [01/28-12/28] do the first part described above, which also includes
>> > some assorted cleanups of the core PM QoS code that doesn't go away.
>> >
>> > Patches [13/28-25/28] rework the CPU latency QoS interface (in the classic
>> > "define stubs, migrate users, change the API proper" manner), patches
>> > [26-27/28] update the general comments and documentation to match the code
>> > after the previous changes and the last one makes the CPU latency QoS depend
>> > on CPU_IDLE (because cpuidle is the only user of its target value today).
>> >
>> > The majority of the patches in this series don't change the functionality of
>> > the code at all (at least not intentionally).
>> >
>> > Please refer to the changelogs of individual patches for details.
>> >
>> > Thanks!
>>
>> Hi Rafael,
>>
>> I believe some of the interfaces removed here could be useful in the
>> near future.
>
> I disagree.
>
>>  It goes back to the energy efficiency- (and IGP graphics
>> performance-)improving series I submitted a while ago [1].  It relies on
>> some mechanism for the graphics driver to report an I/O bottleneck to
>> CPUFREQ, allowing it to make a more conservative trade-off between
>> energy efficiency and latency.  That can greatly reduce the CPU package
>> energy usage of I/O-bound applications (in some graphics benchmarks I've
>> seen it reduced by over 40% on my ICL laptop), and therefore also allows
>> TDP-bound applications to obtain a reciprocal improvement in throughput.
>>
>> I'm not particularly fond of the global PM QoS interfaces, TBH; they seem
>> like an excessively blunt hammer to me, so I can very much relate to the
>> purpose of this series.  However, the finer-grained solution I've
>> implemented has seen some push-back from i915 and CPUFREQ devs due to
>> its complexity, since it relies on task scheduler changes in order to
>> track I/O bottlenecks per process (roughly as suggested by Peter Zijlstra
>> during our previous discussions), pretty much in the spirit of PELT but
>> applied to I/O utilization.
>>
>> With that in mind I was hoping we could take advantage of PM QoS as a
>> temporary solution [2], by introducing a global PM QoS class similar but
>> with roughly converse semantics to PM_QOS_CPU_DMA_LATENCY, allowing
>> device drivers to report a *lower* bound on CPU latency beyond which PM
>> shall not bother to reduce latency if doing so would have negative
>> consequences on the energy efficiency and/or parallelism of the system.
>
> So I really don't quite see how that could be responded to by cpuidle,
> say.  What exactly do you mean by "reducing latency" in particular?
>

cpuidle wouldn't necessarily have to do anything about it since it would
be intended merely as a hint that a device in the system other than the
CPU has a bottleneck.  It could provide a lower bound for the wake-up
latency of the idle states that may be considered by cpuidle.  It seems
to me like it could be useful when a program can tell from the
characteristics of the workload that a latency reduction below a certain
time bound wouldn't materially affect the performance of the system
(e.g. if you have 20 ms to render a GPU-bound frame, you may not care at
all about the CPU taking a fraction of a millisecond more to wake up a
few times each frame).
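
As a rough sketch of what I have in mind (cpu_latency_qos_floor() is a
hypothetical accessor for the proposed lower-bound class, everything
else uses the existing cpuidle data structures), the governor could
treat any state no deeper than the hinted floor as acceptable
regardless of the predicted idle duration, while the existing upper
bound still takes precedence:

/* Sketch only -- not actual cpuidle code. */
static int select_state(struct cpuidle_driver *drv, s32 limit_us,
			u64 predicted_us)
{
	s32 floor_us = cpu_latency_qos_floor();	/* hypothetical */
	int i, idx = -1;	/* -1: no usable state */

	for (i = 0; i < drv->state_count; i++) {
		struct cpuidle_state *s = &drv->states[i];

		/* The PM_QOS_CPU_DMA_LATENCY bound always wins. */
		if (s->exit_latency > limit_us)
			break;

		/*
		 * States cheaper than the floor are "free" by
		 * definition of the hint; deeper ones still need a
		 * long enough predicted idle duration to pay off.
		 */
		if (s->exit_latency <= floor_us ||
		    s->target_residency <= predicted_us)
			idx = i;
	}
	return idx;
}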

For cpufreq I was planning to have it influence a time parameter of the
utilization averaging done by the governor, which would give it a better
long-term response (in the sense of lowering the energy cost of
performing the same work within the specified time frame), even though
such a large time parameter wouldn't normally be considered appropriate
for utilization averaging due to latency concerns.
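
As a toy model of the idea (not actual governor code), think of a
first-order IIR average of CPU utilization whose time constant is
stretched up to the bound reported via QoS.  A larger response_us
filters out short-lived oscillations of the workload, so the governor
ramps up the frequency (and the energy cost) more conservatively:

/* Toy model of utilization averaging with a variable time parameter. */
struct util_avg {
	s64 avg;		/* current average (fixed point) */
	u64 response_us;	/* time parameter from the QoS hint */
};

static void util_avg_update(struct util_avg *ua, s64 util, u64 delta_us)
{
	/* avg += (util - avg) * delta / response, clamped to one step. */
	u64 w = min(delta_us, ua->response_us);

	ua->avg += div64_s64((util - ua->avg) * (s64)w,
			     (s64)ua->response_us);
}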

>> Of course one would expect the current PM_QOS_CPU_DMA_LATENCY upper
>> bound to take precedence over the new lower bound in cases where the
>> former is in conflict with the latter.
>
> So that needs to be done on top of this series.
>
>> I can think of several alternatives to that which don't involve
>> temporarily holding off your clean-up,
>
> The cleanup goes in.  Please work on top of it.
>

Hopefully we can come up with an alternative in that case.  TBH I'd love
to see your clean-up go in too, but global PM QoS seemed fairly
appealing as a way to split up my work so it could be reviewed
incrementally, even though I'm aiming for a finer-grained solution than
that.

>> but none of them sound particularly exciting:
>>
>>  1/ Use an interface specific to CPUFREQ, pretty much like the one
>>     introduced in my original submission [1].
>
> It uses frequency QoS already today; do you really need something else?
>

Yes.  I don't see how frequency QoS could be useful for this as-is,
unless we're willing to introduce code into every device driver that
takes advantage of this and have each of them monitor the utilization
of every CPU in the system, so that it can calculate an appropriate max
frequency constraint -- one we can be reasonably certain won't hurt the
long-term performance of the CPU cores the constraint is being placed
on.
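
To illustrate, the mechanical part of doing that with the existing
frequency QoS interface would look roughly like the sketch below (one
FREQ_QOS_MAX request per CPU; allocation and cleanup elided).  The
problematic part isn't shown: picking a max_khz that is actually safe
would require the driver to continuously monitor the utilization of all
of those CPUs.

static struct freq_qos_request *reqs;	/* one per possible CPU */

static int place_freq_limits(s32 max_khz)
{
	struct cpufreq_policy *policy;
	int cpu, ret;

	for_each_possible_cpu(cpu) {
		policy = cpufreq_cpu_get(cpu);
		if (!policy)
			continue;
		ret = freq_qos_add_request(&policy->constraints,
					   &reqs[cpu], FREQ_QOS_MAX,
					   max_khz);
		cpufreq_cpu_put(policy);
		if (ret < 0)
			return ret;
	}
	return 0;
}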

>>  2/ Use per-CPU PM QoS, which AFAICT would require the graphics driver
>>     to either place a request on every CPU of the system (which would
>>     cause a frequent operation to have O(N) complexity on the number of
>>     CPUs on the system), or play a cat-and-mouse game with the task
>>     scheduler.
>
> That's in place already too in the form of device PM QoS; see
> drivers/base/power/qos.c.

But wouldn't that have the drawbacks I was talking about above when
trying to use it to place this kind of constraint on CPU power
management?
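
(Concretely, the per-CPU approach would boil down to something like the
sketch below, using the existing resume-latency constraint as a
stand-in for the kind of constraint discussed here.  Every update by
the graphics driver becomes O(N) in the number of CPUs.)

static struct dev_pm_qos_request *cpu_reqs;	/* one per possible CPU */

static int add_cpu_latency_requests(s32 value_us)
{
	int cpu, ret;

	for_each_possible_cpu(cpu) {
		ret = dev_pm_qos_add_request(get_cpu_device(cpu),
					     &cpu_reqs[cpu],
					     DEV_PM_QOS_RESUME_LATENCY,
					     value_us);
		if (ret < 0)
			return ret;
	}
	return 0;
}

/* ...and every update has to touch all CPUs again: */
static void update_cpu_latency_requests(s32 value_us)
{
	int cpu;

	for_each_possible_cpu(cpu)
		dev_pm_qos_update_request(&cpu_reqs[cpu], value_us);
}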

>
>>  3/ Add a new global PM QoS mechanism roughly duplicating the
>>     cpu_latency_qos_* interfaces introduced in this series.  Drop your
>>     change making this available to CPU IDLE only.
>
> It sounds like you really want performance for energy efficiency, and
> CPU latency has little to do with that.
>

The mechanism I've been working on isn't intended to sacrifice the
long-term performance of the CPU (e.g. if a CPU core is 100% utilized in
the steady state by the same or an unrelated application, the CPUFREQ
governor should still request the maximum turbo frequency for it); it's
only meant to affect the trade-off between energy efficiency and latency
(e.g. the time it takes for the CPUFREQ governor to respond to an
oscillation of a workload that chooses to opt in).

>>  4/ Go straight to a scheduling-based approach, which is likely to
>>     greatly increase the review effort required to upstream this
>>     feature.  (Peter might disagree though?)
>
> Are you familiar with the utilization clamps mechanism?
>

Sure, that would be a possibility as an alternative to PM QoS, but it
would most likely involve task scheduler changes to get it working
effectively, which Srinivas and Rodrigo have asked me to leave out of
my next RFC submission in the interest of reviewability.  I wouldn't
mind plumbing comparable information through utilization clamps instead,
or as a follow-up, if you think that's the way forward.
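
(For reference, the clamps are already reachable from userspace via
sched_setattr(); e.g. capping the apparent utilization of the calling
task looks like this on 5.3+ kernels.  There's no glibc wrapper, hence
the raw syscall.)

#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/sched.h>	/* SCHED_FLAG_UTIL_CLAMP_MAX */
#include <linux/sched/types.h>	/* struct sched_attr */

/* Cap this task's apparent utilization (0..1024 scale). */
static int clamp_util_max(uint32_t util_max)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_flags = SCHED_FLAG_UTIL_CLAMP_MAX |
			   SCHED_FLAG_KEEP_ALL;	/* keep policy/params */
	attr.sched_util_max = util_max;

	return syscall(SYS_sched_setattr, 0 /* self */, &attr, 0);
}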

> Thanks!


Thread overview: 74+ messages
2020-02-11 22:51 [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Rafael J. Wysocki
2020-02-11 22:52 ` [PATCH 01/28] PM: QoS: Drop debugfs interface Rafael J. Wysocki
2020-02-11 22:58 ` [PATCH 02/28] PM: QoS: Drop pm_qos_update_request_timeout() Rafael J. Wysocki
2020-02-11 22:58 ` [PATCH 03/28] PM: QoS: Drop the PM_QOS_SUM QoS type Rafael J. Wysocki
2020-02-11 22:58 ` [PATCH 04/28] PM: QoS: Clean up pm_qos_update_target() and pm_qos_update_flags() Rafael J. Wysocki
2020-02-11 22:58 ` [PATCH 05/28] PM: QoS: Clean up pm_qos_read_value() and pm_qos_get/set_value() Rafael J. Wysocki
2020-02-11 22:59 ` [PATCH 06/28] PM: QoS: Drop iterations over global QoS classes Rafael J. Wysocki
2020-02-11 23:00 ` [PATCH 07/28] PM: QoS: Clean up misc device file operations Rafael J. Wysocki
2020-02-11 23:01 ` [PATCH 08/28] PM: QoS: Redefine struct pm_qos_request and drop struct pm_qos_object Rafael J. Wysocki
2020-02-11 23:02 ` [PATCH 09/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY notifier chain Rafael J. Wysocki
2020-02-11 23:04 ` [PATCH 10/28] PM: QoS: Rename things related to the CPU latency QoS Rafael J. Wysocki
2020-02-12 10:34   ` Rafael J. Wysocki
2020-02-12 19:13   ` Greg Kroah-Hartman
2020-02-11 23:06 ` [PATCH 11/28] PM: QoS: Simplify definitions of CPU latency QoS trace events Rafael J. Wysocki
2020-02-11 23:07 ` [PATCH 12/28] PM: QoS: Adjust pm_qos_request() signature and reorder pm_qos.h Rafael J. Wysocki
2020-02-11 23:07 ` [PATCH 13/28] PM: QoS: Add CPU latency QoS API wrappers Rafael J. Wysocki
2020-02-11 23:08 ` [PATCH 14/28] cpuidle: Call cpu_latency_qos_limit() instead of pm_qos_request() Rafael J. Wysocki
2020-02-11 23:10 ` [PATCH 15/28] x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*() Rafael J. Wysocki
2020-02-12 10:14   ` Andy Shevchenko
2020-02-11 23:12 ` [PATCH 16/28] drm: i915: " Rafael J. Wysocki
2020-02-12 10:32   ` Rafael J. Wysocki
2020-02-14  7:42   ` Jani Nikula
2020-02-11 23:13 ` [PATCH 17/28] drivers: hsi: " Rafael J. Wysocki
2020-02-13 21:06   ` Sebastian Reichel
2020-02-11 23:17 ` [PATCH 18/28] drivers: media: " Rafael J. Wysocki
2020-02-12  5:37   ` Mauro Carvalho Chehab
2020-02-11 23:21 ` [PATCH 19/28] drivers: mmc: " Rafael J. Wysocki
2020-02-11 23:24 ` [PATCH 20/28] drivers: net: " Rafael J. Wysocki
2020-02-11 23:48   ` Jeff Kirsher
2020-02-12  5:49   ` Kalle Valo
2020-02-11 23:26 ` [PATCH 21/28] drivers: spi: " Rafael J. Wysocki
2020-02-11 23:27 ` [PATCH 22/28] drivers: tty: " Rafael J. Wysocki
2020-02-12 10:35   ` Rafael J. Wysocki
2020-02-12 19:13   ` Greg Kroah-Hartman
2020-02-11 23:28 ` [PATCH 23/28] drivers: usb: " Rafael J. Wysocki
2020-02-12 18:38   ` Greg KH
2020-02-18  8:03     ` Peter Chen
2020-02-18  8:08       ` Greg KH
2020-02-18  8:11         ` Peter Chen
2020-02-19  1:09   ` Peter Chen
2020-02-11 23:34 ` [PATCH 24/28] sound: " Rafael J. Wysocki
2020-02-12 10:08   ` Mark Brown
2020-02-12 10:16     ` Rafael J. Wysocki
2020-02-12 10:21       ` Takashi Iwai
2020-02-12 10:18   ` Mark Brown
2020-02-11 23:35 ` [PATCH 25/28] PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY and rename related functions Rafael J. Wysocki
2020-02-11 23:35 ` [PATCH 26/28] PM: QoS: Update file information comments Rafael J. Wysocki
2020-02-11 23:36 ` [PATCH 27/28] Documentation: PM: QoS: Update to reflect previous code changes Rafael J. Wysocki
2020-02-11 23:37 ` [PATCH 28/28] PM: QoS: Make CPU latency QoS depend on CONFIG_CPU_IDLE Rafael J. Wysocki
2020-02-12  8:37 ` [PATCH 00/28] PM: QoS: Get rid of unuseful code and rework CPU latency QoS interface Ulf Hansson
2020-02-12  9:17   ` Rafael J. Wysocki
2020-02-12  9:39 ` Rafael J. Wysocki
2020-02-12 23:32 ` Francisco Jerez
2020-02-13  0:16   ` Rafael J. Wysocki
2020-02-13  0:37     ` Rafael J. Wysocki
2020-02-13  8:10       ` Francisco Jerez
2020-02-13 11:38         ` Rafael J. Wysocki
2020-02-21 22:10           ` Francisco Jerez
2020-02-24  0:29             ` Rafael J. Wysocki
2020-02-24 21:06               ` Francisco Jerez
2020-02-13  8:07     ` Francisco Jerez [this message]
2020-02-13 11:34       ` Rafael J. Wysocki
2020-02-13 16:35         ` Rafael J. Wysocki
2020-02-14  0:15           ` Francisco Jerez
2020-02-14 10:42             ` Rafael J. Wysocki
2020-02-14 20:32               ` Francisco Jerez
2020-02-24 10:39                 ` Rafael J. Wysocki
2020-02-24 21:16                   ` Francisco Jerez
2020-02-14  0:14         ` Francisco Jerez
2020-02-13  7:10 ` Amit Kucheria
2020-02-13 10:17   ` Rafael J. Wysocki
2020-02-13 10:22     ` Rafael J. Wysocki
2020-02-13 10:49     ` Amit Kucheria
2020-02-13 11:36       ` Rafael J. Wysocki
