linux-pm.vger.kernel.org archive mirror
* [PATCH 0/3] PM: QoS: Restore DEV_PM_QOS_MIN/MAX_FREQUENCY
@ 2019-10-25 18:00 Leonard Crestez
  2019-10-25 18:00 ` [PATCH 1/3] PM: QoS: Reorder pm_qos/freq_qos/dev_pm_qos structs Leonard Crestez
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Leonard Crestez @ 2019-10-25 18:00 UTC (permalink / raw)
  To: Rafael J. Wysocki, Viresh Kumar
  Cc: MyungJoo Ham, Kyungmin Park, Matthias Kaehlcke, Chanwoo Choi,
	Artur Świgoń,
	linux-pm, linux-imx

Support for frequency limits in dev_pm_qos was removed when cpufreq was
switched to freq_qos; this series attempts to restore it by
reimplementing it on top of freq_qos.

Previous discussion here:

https://lore.kernel.org/linux-pm/VI1PR04MB7023DF47D046AEADB4E051EBEE680@VI1PR04MB7023.eurprd04.prod.outlook.com/T/#u

The cpufreq core switched away because it needs constraints at the level
of a "cpufreq_policy", which covers multiple cpus, so the dev_pm_qos
coupling to struct device was not useful (and was handled incorrectly).
Cpufreq could only use dev_pm_qos by implementing an additional layer of
aggregation from CPU to policy.

However the devfreq subsystem scaling is always performed for each
device so dev_pm_qos is a very good match. Support for dev_pm_qos
inside devfreq is implemented by this series:

	https://patchwork.kernel.org/cover/11171807/

Rafael: If this makes sense to you I could incorporate the restoration
of DEV_PM_QOS_MIN/MAX_FREQUENCY in v10 of the devfreq qos series.

In theory, if freq_qos is extended to handle conflicting min/max values,
this sharing would become useful. Right now freq_qos just ties together
two unrelated pm_qos aggregations for min and max frequency.
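To illustrate that aggregation model, here is a userspace sketch with simplified stand-in types (the `_model` names are mine, not the kernel's): each request list aggregates independently, the effective minimum being the largest of all min-frequency requests and the effective maximum the smallest of all max-frequency requests.

```c
#include <limits.h>

/* Simplified stand-in for freq_qos-style constraints: one list of
 * min-frequency requests and one of max-frequency requests, tied
 * together only by the surrounding constraints object. */
struct freq_constraints_model {
	int min_reqs[8];
	int nr_min;
	int max_reqs[8];
	int nr_max;
};

/* Effective min frequency: the largest of all min requests. */
static int effective_min(const struct freq_constraints_model *c)
{
	int v = 0;

	for (int i = 0; i < c->nr_min; i++)
		if (c->min_reqs[i] > v)
			v = c->min_reqs[i];
	return v;
}

/* Effective max frequency: the smallest of all max requests. */
static int effective_max(const struct freq_constraints_model *c)
{
	int v = INT_MAX;

	for (int i = 0; i < c->nr_max; i++)
		if (c->max_reqs[i] < v)
			v = c->max_reqs[i];
	return v;
}
```

Note that the two lists are aggregated independently: nothing in this model resolves the case where the effective min ends up above the effective max, which is the conflict-handling gap alluded to above.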

---
This is implemented by embedding a freq_qos_request inside dev_pm_qos_request:
the data field was already a union in order to deal with flag requests.
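A rough sketch of the layout this describes, using simplified stand-in types rather than the actual kernel definitions (the `_model` suffixes mark everything as illustrative):

```c
/* Simplified stand-ins for the kernel types involved. */
struct plist_node_model { int prio; };		/* pm_qos list node */
struct freq_qos_request_model {
	int type;				/* min or max frequency */
	struct plist_node_model pnode;
};
struct flag_request_model { int flags; };

/* dev_pm_qos_request already keeps its per-type payload in a union,
 * so a freq_qos_request member can be added without a separate field
 * for each request type. */
struct dev_pm_qos_request_model {
	int req_type;				/* selects the union member */
	union {
		struct plist_node_model pnode;	/* e.g. resume latency */
		struct flag_request_model flr;	/* flag requests */
		struct freq_qos_request_model freq; /* min/max frequency */
	} data;
};
```

The union keeps the request struct no larger than its biggest member, so existing request types pay nothing for the new frequency member.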

The internal _freq_qos_apply is exported so that it can be called from
dev_pm_qos apply_constraints.
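Conceptually the apply path then dispatches on the request type, forwarding frequency requests to the freq_qos layer while everything else stays on the plain pm_qos path. A simplified userspace sketch of that dispatch (stand-in names, not the real code in drivers/base/power/qos.c):

```c
enum req_type_model { REQ_RESUME_LATENCY_MODEL, REQ_MIN_FREQ_MODEL, REQ_MAX_FREQ_MODEL };

/* Stand-in for the freq_qos apply step: just echoes the applied value. */
static int freq_qos_apply_model(int value)
{
	return value;
}

/* Stand-in for dev_pm_qos apply_constraints(): frequency requests are
 * forwarded to the freq_qos layer, other types are handled elsewhere. */
static int apply_constraint_model(enum req_type_model type, int value)
{
	switch (type) {
	case REQ_MIN_FREQ_MODEL:
	case REQ_MAX_FREQ_MODEL:
		return freq_qos_apply_model(value);
	default:
		return -1;	/* plain pm_qos path, not modeled here */
	}
}
```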

The dev_pm_qos_constraints_destroy function has no obvious equivalent in
freq_qos, but the whole approach of "removing requests" is somewhat dubious
anyway: request objects should be owned by consumers, and the list of qos
requests should be empty when the target device is deleted. Clearing the
request list would likely result in a WARN the next time "update_request"
is called by the requester.

Leonard Crestez (3):
  PM: QoS: Reorder pm_qos/freq_qos/dev_pm_qos structs
  PM: QoS: Export _freq_qos_apply
  PM: QoS: Restore DEV_PM_QOS_MIN/MAX_FREQUENCY

 drivers/base/power/qos.c | 69 +++++++++++++++++++++++++++++---
 include/linux/pm_qos.h   | 86 +++++++++++++++++++++++-----------------
 kernel/power/qos.c       | 11 ++---
 3 files changed, 119 insertions(+), 47 deletions(-)

-- 
2.17.1



Thread overview: 7+ messages
2019-10-25 18:00 [PATCH 0/3] PM: QoS: Restore DEV_PM_QOS_MIN/MAX_FREQUENCY Leonard Crestez
2019-10-25 18:00 ` [PATCH 1/3] PM: QoS: Reorder pm_qos/freq_qos/dev_pm_qos structs Leonard Crestez
2019-10-25 18:00 ` [PATCH 2/3] PM: QoS: Export _freq_qos_apply Leonard Crestez
2019-11-13 22:23   ` Rafael J. Wysocki
2019-11-14 15:37     ` Leonard Crestez
2019-10-25 18:00 ` [PATCH 3/3] PM: QoS: Restore DEV_PM_QOS_MIN/MAX_FREQUENCY Leonard Crestez
2019-11-11 19:40 ` [PATCH 0/3] " Leonard Crestez
